Assessment & Evaluation in Higher Education
Vol. 37, No. 1, February 2012, 125–136

Oral versus written assessments: a test of student performance and attitudes

Mark Huxham*, Fiona Campbell and Jenny Westwood

School of Life, Sport and Social Sciences, Edinburgh Napier University, Edinburgh EH10 5DT, UK (M. Huxham); Academic Development, Edinburgh Napier University, Edinburgh EH10 5DT, UK (F. Campbell); Student Affairs, Edinburgh Napier University, Edinburgh EH10 5DT, UK (J. Westwood)

*Corresponding author. Email: m.huxham@napier.ac.uk
Student performance in and attitudes towards oral and written assessments were compared using quantitative and qualitative methods. Two separate cohorts of students were examined. The first, larger cohort of students (n = 99) was randomly divided into ‘oral’ and ‘written’ groups, and the marks that they achieved in the same biology questions were compared. Students in the second, smaller cohort (n = 29) were all examined using both written and oral questions concerning both ‘scientific’ and ‘personal development’ topics. Both cohorts showed highly significant differences in the mean marks achieved, with better performance in the oral assessment. There was no evidence of particular groups of students being disadvantaged in the oral tests. These students and also an additional cohort were asked about their attitudes to the two different assessment approaches. Although they tended to be more nervous in the face of oral assessments, many students thought oral assessments were more useful than written assessments. An important theme involved the perceived authenticity or ‘professionalism’ of an oral examination. This study suggests that oral assessments may be more inclusive than written ones and that they can act as powerful tools in helping students establish a ‘professional identity’.
Keywords: oral assessment; authenticity; identity; performance; inclusive

Introduction
The oral examination (or viva voce), in which the candidate gives spoken responses to
questions from one or more examiners, is perhaps the oldest form of assessment; it has
certainly been traditional practice in some areas of academic life, such as the PhD viva
and clinical examination, for decades if not centuries. But despite this antiquity, it is
now rare or absent in many undergraduate courses. For example, Hounsell et al.
(2007) reviewed the recent UK literature on ‘innovative assessment’. Of 317 papers
considered, only 31 dealt with ‘non-written assessments’, and within this category
only 13% addressed the use of oral examinations; oral group presentations were by far
the most commonly cited non-written assessment, at 50% of the total sample.
The apparent rarity of the oral examination is surprising given its many possible
advantages. Five suggested key benefits are: first, the development of oral communi-
cation skills. These are seen as essential for graduates, which means these skills must
be explicitly taught and assessed (Wisker 2004). Second, oral examinations are more
authentic than most types of assessment (Joughin 1998). Virtually all graduates will
attend job interviews, and will have to defend their ideas and work in verbal
exchanges, whilst most will never sit another written examination after they graduate.
Third, oral assessment may be more inclusive. For example, Waterfield and West
(2006) report the views of 229 students with disabilities on different types of assess-
ment. Written exams were the least preferred type, whilst oral examinations consis-
tently came near the top; students with dyslexia were particularly likely to favour oral
assessments. Fourth, oral examinations are powerful ways to gauge understanding and
encourage critical thinking (Gent, Johnston, and Prosser 1999). Because of the possi-
bility of discourse and genuine exchange, oral examinations can allow a focus on deep
understanding and critique, rather than on the superficial regurgitation often found in
written examinations. Fifth, oral examinations are resistant to plagiarism (Joughin
1998); students must explain their own understanding using their own words.
In addition to these advantages, there is a deeper dimension to oral assessment that
involves fundamental distinctions between oral and written communication. The
philosopher Frege emphasised the ambiguity and fluidity of language, and discussed
how the ability of spoken, as opposed to written, language to carry emotional charge
allowed it a flexibility and finesse not possible on the written page (Carter 2008). This
reflects a long-held position in philosophy, going back at least to Plato, that elevates
the spoken word above the ‘mere shadow’ that is the written (Joughin and Collom
2003). The idea that speech reflects, and creates, the person more accurately and fully
than writing has been developed more recently by Barnett, who considers how
students struggle in the ‘risky’ environment of higher education to find new ways of
defining themselves: ‘speech is one way in which individuals help to form their own
pedagogical identities. It has an authenticity that writing cannot possess’ (Barnett
2007, 89). Related to these ideas is the pervasive and important notion that higher
education at its best consists of dialogue and learning conversation. To adapt a phrase
from psychoanalysis, teaching is ‘an alchemy of discourse’ (Hayes 2009) from which
new understandings can arise. Hence there are fundamental reasons why higher
education might value oral assessments.
So why, despite these arguments, might oral examinations be rare? One obvious
reason could be the perception that they take a long time; individual interviews with
300 first years will generally be impossible (although it is worth considering the possi-
ble savings in time gained from not marking written work). But there is a more explicit
concern about reliability and bias. For example, Wakeford (2000, 50) advises: ‘The
new practitioner in higher education is counselled to beware of and avoid orals’, since
they may be open to bias; clearly, for example, anonymous assessment will be impos-
sible and producing evidence for external examiners is more difficult. There is a
concern too that oral examinations are very stressful, and might unfairly favour the
extravert and confident student (Wisker 2004). They are often seen as an ‘alternative
approach’ which might be valid for a minority of disabled students but which should
not apply to the majority (Waterfield and West 2006). In addition, oral examinations
may be seen as suitable for assessing more emotive or personal issues, such as the
ability to reflect, but as not appropriate for abstract reasoning: ‘only an exceptional
person would prefer to be judged on the basis of a spoken rather than written perfor-
mance when the assessment relates to complex abstract ideas’ (Lloyd et al. 1984, 586).
Hence, despite the strong arguments in favour of oral examinations, tutors might legitimately fear using them, given pressures on time, warnings that they may not reach transparent standards of reliability and may be biased against some students, and feelings that they are only for ‘special’ groups. There is currently little in the
literature that might help a balanced assessment of the strengths and weaknesses of
oral versus written assessments (but see Joughin 2007). For example, there are to our
knowledge no explicit tests of performance in the same examination administered
orally and in writing to higher education students. The main aim of the current work
is to help fill this gap by performing such a test. In addition, we considered the
following questions: (1) Do the results in oral and written examinations differ
between different types of questions (in particular, between abstract ‘scientific’ ques-
tions and those requiring reflection on personal skills)? (2) Do students find oral
assessments more stressful than written assessments? (3) What do students feel are
the strengths and weaknesses of oral versus written assessments?

Methods
Student groups
Three groups of students were involved as participants in this research. The largest
group was a first-year (Level 7) cohort of 99 biology students taking an introductory
module in evolutionary biology, 28% of whom were male and who ranged in age from
17 to 45 (with a majority in the 17 to 20-year age group). The second group included
29 third-year (Level 9) students taking a field methods module, with eight males,
ranging in age from 18 to 42. The third group included 18 third-year students, seven of whom were male and who ranged in age from 19 to 29; they had studied the same field methods module the previous year.

Randomised test
In October 2007 the first-year students were randomly allocated to either a ‘written’
or an ‘oral’ group. Students were told of their allocation four weeks before the assess-
ment, which was a small formative test designed to encourage review and revision of
module material before major summative assessments. After explaining the purpose
of the division into two groups, students were told that they could request a change of
group if they wished. The test involved seven short-answer questions that were taken
from a list of ‘revision points’ that students had already seen after lectures. Questions
dealt with evolution and ecology and were intended to test for understanding rather
than recall; for example, question two was: ‘What explanation can you give for the
fact that most wild plants have even, as opposed to odd, numbers of chromosomes?’,
whilst question three asked: ‘Birds and bats share the analogous similarity of wings.
What is meant by this phrase, and what has caused the similarity?’. Students allocated
to the ‘written’ group were given 30 minutes to answer the questions under standard,
silent examination conditions. Students allocated to the ‘oral’ group had a maximum
of 15 minutes in a one-to-one oral examination. The additional time allowed for the
written test was to compensate for the relative slowness of writing compared with
talking; experience in previous years had shown that the time allocated was more than
sufficient for full answers in both formats. A team of 10 volunteer interviewers was
involved. All the candidates came to a single room before their designated interview
slot, and they were accompanied from there to the interview room to prevent any
opportunity of speaking with previous candidates before the test. Interviewers
followed a standard interview protocol; questions were read out and were repeated if
the candidate asked. Interviewers were also permitted to clarify questions if asked, but
only by re-phrasing rather than by interpreting the question – appropriate clarification
was discussed between interviewers during training sessions beforehand. Interviewers also endeavoured to generate a friendly and relaxed atmosphere.
Questions in the written and oral tests were marked on a scale of 0 (no answer or
completely wrong), 1 (partially correct) or 2 (correct and including all key points);
hence the maximum score was 14. Interviewers had standard marking sheets and had
discussed all the questions together before the interviews; they made short relevant
notes during the interview and then produced a final mark immediately afterwards,
before the next candidate arrived. Written questions were double-blind marked. At the
end of the written test and of each interview, all students were asked to complete a
very simple questionnaire with the single question ‘how nervous were you about
taking this test?’ (answers from 0 ‘not at all nervous’ through 4 ‘very nervous’).
Mean scores were compared between ‘written’ and ‘oral’ groups using a t-test
(after testing for normality and heteroscedasticity). The distributions of responses to
the ‘nerves’ questionnaire were compared using a chi-squared test.
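By way of illustration only (this sketch is not part of the original study), the two comparisons described above could be reproduced in Python with scipy; the arrays of marks and the table of nervousness ratings below are invented placeholders, not the study's data.

# Illustrative sketch only (assumed data): comparing the 'oral' and 'written'
# groups as described above, using scipy.
import numpy as np
from scipy import stats

oral_marks = np.array([9, 8, 10, 7, 8, 9, 6, 11, 8, 7])       # scores out of 14 (invented)
written_marks = np.array([6, 7, 5, 8, 6, 5, 7, 6, 4, 8])      # scores out of 14 (invented)

# Assumption checks mentioned in the Methods: normality and equality of variances.
print(stats.shapiro(oral_marks))                  # Shapiro-Wilk normality test
print(stats.shapiro(written_marks))
print(stats.levene(oral_marks, written_marks))    # Levene's test for unequal variances

# Two-sample t-test comparing mean marks of the two groups.
t_stat, p_val = stats.ttest_ind(oral_marks, written_marks)
print(f"two-sample t = {t_stat:.2f}, p = {p_val:.3f}")

# Chi-squared test comparing the distributions of self-reported nervousness
# ratings between the oral and written groups (counts per category are invented).
nerves_counts = np.array([[2, 8, 20, 15],    # oral group
                          [7, 16, 15, 8]])   # written group
chi2, p, dof, expected = stats.chi2_contingency(nerves_counts)
print(f"chi-squared = {chi2:.3f}, df = {dof}, p = {p:.3f}")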

Paired test
An oral examination with four questions – two ‘scientific analysis’ questions on a
field report submitted by the candidate and two ‘personal and professional develop-
ment’ questions asking for reflection on, for example, communication and group work
skills developed and used during the fieldwork – is the most important assessment
component in the ‘applied terrestrial ecology’ module taken by the third-year cohort.
Questions are specific to each candidate and are developed based on each individual’s
report and field performance. The usual test was modified in 2008 by the addition of
a written element, involving two additional questions (one ‘scientific analysis’ and
one ‘personal and professional development’). Questions were first devised for each
candidate and then selected at random for the oral or written component. All candi-
dates were taken initially to an examination room where they had eight minutes to
complete the written questions, before being led to the interview room for a 15-minute
oral examination.
In these interviews, the assessor quickly promoted a positive and friendly environment for each student by providing a warm welcome, establishing a rapport
through use of their first names, clarifying what was to happen in the oral assessment
and thanking them for their report. The questions asked had a clear context (e.g. they
referred to a specific figure or table in the student’s field report) and where students
did not fully answer questions, they were asked another supplementary – although not
leading – question (e.g. if a student was asked ‘why did you choose to use an ANOVA
test for the data in Table 2?’ a supplementary question might be ‘under what general
circumstances do you use ANOVA?’).
Questions were marked on a seven-point scale (0 = no response, 1 = bad fail,
showing total lack of knowledge and failure to explain why approaches were taken, 2
= fail, showing superficial knowledge and reflection, 3 = bare pass, showing a basic
understanding but no knowledge of the broader context or evidence of wider reading
and synthesis of knowledge from elsewhere, 4 = clear pass, showing adequate knowl-
edge, some application of theory and evidence of reflection, 5 = good pass, showing
clear evidence of application of theory and of attempts to set the answer in a broader
context, 6 = excellent answer, showing clear understanding and an ability to place the
answer in a broad context of relevant literature or experience); one-third of the oral
examinations were double marked by two interviewers, and all written questions were
double marked. Mean scores (out of the total of four questions in the oral and two in
the written tests) were compared, paired within candidates, using a paired t-test. Marks
were also subdivided into those for ‘scientific analysis’ and ‘personal and professional
development’ questions, and mean marks achieved in the oral and written tests for
these were compared using paired tests.
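Again purely as an illustration (not the authors' code), the paired comparison described above could be sketched in Python as follows; the per-candidate mean scores are invented.

# Illustrative sketch only (assumed data): paired comparison of each candidate's
# mean oral and written marks.
import numpy as np
from scipy import stats

oral_means = np.array([5.5, 5.0, 6.0, 4.8, 5.3, 5.7, 5.2, 4.9])     # invented values
written_means = np.array([4.8, 4.2, 5.1, 4.0, 4.7, 5.0, 4.5, 4.1])  # invented values

# Paired t-test: the difference is taken within each candidate, so the test
# controls for differences in overall ability between students.
t_stat, p_val = stats.ttest_rel(oral_means, written_means)
print(f"paired t = {t_stat:.2f}, p = {p_val:.3f}")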

Qualitative evaluation
Three different sets of qualitative data were collected. First, students from the first-
year cohort were invited to participate in a focus group to discuss their experiences.
The discussion was facilitated by a member of staff from outside the programme team
and students were advised that staff involved in the module would not be present.
Equal numbers of students who had experienced the written and oral tests participated, and different student groups were invited, including home, international, school-leavers, mature students, and males and females. The discussion was recorded and
students participating gave permission for their contributions to be used on the basis
that their input would be anonymous. To encourage participation, the invitation made
clear that their input was valued; they were also offered a sandwich lunch.
Qualitative feedback was collected from the third-year cohort in 2007, who took
an oral assessment identical to that described for the 2008 cohort but without the addi-
tion of the written component. This was the first time these students had experienced this
kind of viva voce test at university. After the tests had been marked and feedback had
been provided, students were asked by email to respond to the following statement:

Please describe how you felt the interview went. In particular, how did you perform
compared to a more conventional assessment (such as a written exam)? What do you
think the advantages and disadvantages of being assessed by interview are, and what
lessons can you learn from the experience?

Students in the third-year cohort in 2008 were also invited to participate in a focus
group to discuss their experiences of the viva voce. The focus group ran on the same
basis as that described for first-year students above.
Recordings from the focus groups were transcribed, and thematic analysis was
used on these transcripts and on the email texts to identify key themes and illustrative
quotes.

Results
Randomised test
Four students requested transfers from the group to which they had randomly been
assigned; two non-native English speakers asked to be moved from the oral to the
written examination. Two students also asked to transfer from the written to the oral
group; one on the grounds of dyslexia and one for undisclosed reasons.
A total of 91 students took the assessments (45 sat in the oral examination and 46
in the written one). The mean scores achieved in the oral and written tests were 8.17
and 6.24 respectively, a highly significant difference (two-sample t-test: t-value =
3.46, df = 89, p-value = 0.001; Figure 1). Separating by gender gave a highly significant difference for females, with a difference of 2.03 between mean scores.
Males showed a similar trend, with orally assessed students doing better by 1.50 marks on average; however, this was not significant (two-sample t-test: t-value = 1.39, df = 26, p-value = 0.176). There was no significant difference in the marks given by the two independent markers to the written test results.

Figure 1. Boxplots (showing medians, central line, interquartile range, box margins and outliers) of data obtained from the first-year students’ results in oral (n = 45) and written (n = 46) tests.

The distributions of scores recorded in the ‘nerves’ questionnaire are shown in Figure 2. There was a tendency for students to record higher scores (i.e. a greater degree of nervousness) in the oral group, although this was not quite a significant difference (chi-squared test: chi-Sq = 6.778, df = 3, p-value = 0.079).

Figure 2. Frequency distributions of self-reported ‘nervousness’ of first-year students who took the oral and written tests; 1 = ‘not at all nervous’, 4 = ‘very nervous’.

Paired test
Twenty-four students completed the oral and written tests. There was a highly signif-
icant difference between the marks scored by each student in the oral (mean = 5.4) and
written (mean = 4.6) components (paired t-test: t = 3.84, p = 0.001). The better perfor-
mance in the oral assessment was consistent between question types, with significant differences remaining for both the subsamples of ‘scientific analysis’ and ‘personal and professional development’ questions.

Qualitative evaluation
Fifteen (out of a total of 18) third-year students responded to the email request for
feedback in 2007 (comments from this group are henceforth indicated by ‘3rd 2007’).
In common with similar work seeking to capture the student voice (Campbell et al.
2007), recruiting participants for the focus groups proved problematic and only three
first-year students (comments indicated by ‘1st 2008’) and four third-year students
(3rd 2008) attended their respective groups. However, those who did attend contrib-
uted their views enthusiastically and perceptively.
An important theme in the student responses concerned anxiety; seven students in
the 2007 cohort mentioned feeling particularly nervous in the face of the interview,
and this was also raised in the focus groups:

I felt I did poorly in the oral exam, however I can honestly say that much of this was
down to nerves. I felt uncomfortable and was concentrating so hard on trying to sound
professional and not make mistakes. (3rd 2007)
You had to think [quickly] and then you are thinking you will be short on time and so
you panic. (3rd 2008)

However, two students in 2007 and two in the focus groups said they felt less nervous
than in written examinations. Students also identified interviews as challenging
because they required real understanding:

In comparison to a conventional exam I thought it was just as challenging, if not a little more. To be able to cram for an exam and put it all down on a piece of paper is one thing,
but to be able to talk about a subject, clearly and concisely, you have to really understand
it, and I think that is the challenge in an interview. (3rd 2007)
You need to understand what you are saying, what you are trained to explain. (3rd 2008)

Despite the reported anxiety, 13 of the students stated explicitly that they preferred the
oral examination to a traditional written one, whilst only four stated that they would
have preferred a written test. Most of the students valued the opportunity to practise
interview skills and gain relevant experience:

I think having an assessed interview is a good idea. It give me an insight into what I’ll
inevitably have to deal with in the future, interview skills don’t come naturally so I think
the more practice we get the better equipped we’ll be for leaving university and applying
for jobs. (3rd 2007)

One student described preferring an interview because he was dyslexic.


An additional theme concerned how easy it was to express thoughts and opinions
in the two formats, with some students identifying oral communication as more ‘natural’:

I did keep thinking back, thinking ‘they are next door saying what they mean and I am
struggling to put down on paper’. (1st 2008)

I thought it was easier to explain yourself and explain what you are doing to a person
rather than trying to [write it down]. Its easy to get muddled up with your words and try
to explain something in writing. If you talk to someone in person it’s a lot more natural.
(3rd 2008)

Discussion
Students performed better in oral compared with written tests; this result was consis-
tent between year groups, between different types of questions and when using paired
and unpaired designs. There are a number of possible explanations for this strong
effect, including bias in the assessment procedures. The famous case of ‘clever’ Hans, ‘the counting horse’, illustrates the potential influence of unconscious cues from the
interviewers (Jackson 2005). Hans was able to ‘count’ by stamping its hoof until its
owner unwittingly signalled when to stop. Such effects may have occurred in our
study (although, of course, the current questions were much more complex and less
open to simple cues than counting). We agreed on standard interview procedures which excluded explicit prompts and encouragement, but did not curtail all normal
social interaction. We were concerned to preserve the ‘ecological integrity’ of the
interviews and wanted to avoid the highly artificial circumstance of interviewers
simply speaking a question and then remaining silent, like disembodied recorders.
Instead the experience was designed to be much closer to an authentic viva voce or job
interview. The current study was therefore not designed as tightly controlled psycho-
logical research, but rather as a comparison of oral and written assessments under real-
istic educational settings. As such, the possible existence of ‘clever Hans effects’ can
be regarded as an integral part of most oral assessments, in the same way that the abil-
ity to write legibly and quickly is integral to most written assessments. There were no
a priori expectations that the oral performances would be better; in fact, given the
suggestions that oral assessments can lead to bias against certain groups of students
and can induce stress, a significantly worse performance seemed equally likely.
The current work supports the evidence that oral assessments might induce
more anxiety than written ones. The quantitative comparison approached signifi-
cance (Figure 2), and anxiety was an important theme raised in the qualitative
responses. However, this is not necessarily negative; indeed, it may explain the better average performance, with students preparing more thoroughly than for a ‘standard’ assessment. Interestingly, a majority of the third-year students who chose to identify anxiety as a feature of the oral assessment nevertheless stated that they preferred it to a written test. In his phenomenographic study of student
experiences of oral presentations, Joughin (2007) found that greater anxiety about
oral compared with written assessment was associated with a richer conception of
the oral task as requiring deeper understanding and the need to explain to others.
Thus anxiety was a product of deeper and more transformative learning. The
reported anxiety might also simply reflect the relative lack of experience in oral
compared with written assessments, which was a point made explicitly in the qual-
itative evaluation:

I think the oral is quite different from the writing and we should have some training
because we don’t have experience. (3rd 2008)

As with all types of assessment, it is likely that oral examinations will suit some learn-
ing styles and personalities better than others. It is not surprising that students with
dyslexia might favour oral assessments (Waterfield and West 2006). The current
research lends qualitative support to this idea, with two first-year students identifying
dyslexia as the reason why they chose to swap from the written to the oral group and
students raising the issue in the evaluation:

Before we actually did the [written] test I was a bit apprehensive as I have really bad
spelling so I do get quite conscious about that. (1st 2008)

I think I performed to a higher standard than in written tests. The reason for this I have
dyslexia, and dyspraxia, so reading and writing for me has always been harder than just
plain speak. (3rd 2007)

However, there is no support here for the notion that oral assessments should be
regarded as somehow marginal or suited only for ‘special’ groups of students.
Although sample sizes were not sufficiently large to allow multiple sub-divisions into
different social and demographic groups, there was no evidence that particular types
of students did worse at orals. Although the discrepancy in mean marks obtained in
oral compared with written tests was not as large for male as for female students, the
trend was the same and the lack of significance may have been a result of lower
sample sizes. Clearly it would be interesting to investigate possible gender differences
further, but our results do not suggest males would be disadvantaged by using oral
assessments.
Because oral language may generally carry a bigger ‘emotional charge’ than writ-
ten (Carter 2008), and of course is supplemented in most cases with a range of body
language that can transmit emotional messages, it may be true that oral assessment
will be better fitted to affective and reflective tasks. In contrast the enunciation of
complex abstract ideas might be easier in writing; a clear example would be mathe-
matics. These arguments might suggest the promotion of oral assessments specifically
for developing and measuring reflective skills, whilst abstract conceptual thinking
should be assessed using traditional written formats. However, the current work
showed no such distinction. The first-year cohort was tested on theoretical, abstract
ideas such as ‘the argument from design’ and aspects of nitrogen cycling in ecosys-
tems, and yet, students performed better on these questions when responding orally.
The third-year students were assessed on questions divided into ‘scientific analysis’
and ‘personal and professional development’ categories, but a similar result of better
performance in the oral compared with written responses was found for both. Hence
there is no support here for the idea of restricting oral assessments to ‘special’ or
emotional categories of learning. The Third International Mathematics and Science Study (TIMSS) programme tested thousands of children using the same standard written tests in different countries to allow international comparisons. Schoultz, Säljö, and Wyndhamn (2001) interviewed 25 secondary school children using two TIMSS
questions on physics and chemistry concepts. They found much better performance in
the oral tests than the average scores in the written tests for children of the relevant
age; their qualitative analyses showed that their subjects often understood the core
concepts being tested but failed to interpret the written questions correctly without
guidance. Hence the ability to re-phrase the question in an oral setting allowed a genu-
ine test of students’ conceptual understanding, and thus better performance. A similar
effect may explain some or all of the differences we found, and we endorse their
recommendation to challenge the often implicit assumption that ‘responding to
abstract questions in writing is the natural context in which knowledge appears’ (234).
A long tradition in philosophy and discursive psychology views language as
constitutive rather than simply transmissive; people create key aspects of their reality
(particularly their social and subjective realities) through language and especially
through ‘speech acts’. This tradition is concerned with language as a form of social
action, which helps construct such attributes as ‘the self’ during conversation and
discourse (Horton-Salway 2007). This discursive approach, related to Barnett’s idea
of students creating ‘pedagogical identities’ through speech (Barnett 2007), can help
interpret an important theme in the experiences reported by the students concerning
the performative aspects of the viva. One reason students reported greater anxiety was
because they were ‘performing’ in a social space:

This experience has taught me that it is really important to prepare as much as possible
for an interview. There is a big difference between going over things in your head and
saying them out loud clearly and confidently. (3rd 2007)

There was a perception that the oral interview required a different approach from a
written test:

I think that an oral exam allows people to use grammar and words that they may not use
when writing. (3rd 2007)

With a lot of written assessments, I think, you just memorise the paragraph like a parrot
and not know what it means. But you can tell when someone is doing that when you
speak to them because they get that glazed look in their eyes as they recite it. (1st 2008)

This different approach was seen as being more ‘professional’:

I felt uncomfortable and was concentrating so hard on trying to sound professional and
not make mistakes. This is why I was reluctant to use the word ‘niche’, I thought 95%
that it was the correct word to use. (3rd 2007)

There is an impression here of students striving to create ‘professional’ and ‘confident’ personalities (Gent, Johnston, and Prosser 1999). Zadie Smith describes one
of her working-class characters using the words ‘modern’ and ‘science’: ‘as if
someone had lent him the words and made him swear not to break them’ (2000,
522); the oral assessments involved students using professional language without
‘breaking it’.
Written examinations do not seem to elicit the same feelings, perhaps because such
examinations are so strongly identified with the worlds of school and college, rather
than work, and perhaps because they are usually private and anonymous:

Because we have done [written assessments] since we have been in school, its normal
for us but once you leave school/education you will never need [to do them] again
whereas talking to somebody you will always use. (3rd 2008)

Whilst most academics recognise how assessments can drive student learning, they
may not appreciate how the mode of assessment – including the ‘social performance’
of the assessment – may shape students’ approaches and even identities.

In her discussion of the power of the spoken word in the ancient world, Karen
Armstrong describes Socrates’ low opinion of the written text compared with the vivi-
fying effect of living dialogue: ‘Written words were like figures in a painting. They
seemed alive, but if you questioned them they remained “solemnly silent”. Without
the spirited interchange of a human encounter, the knowledge imparted by a written
text tended to become static’ (2009, 64). There is a sense of fluidity, of students ‘trying
things out’ during the interchange of the oral assessment – this exploration might be
of identities but also of concepts such as ‘niche’. This stands in contrast to the ‘static’
representation in written assessments, and is a powerful endorsement of the use of oral
assessments. The current work has found no evidence of disadvantage accruing from
oral assessments to particular groups of students, nor of the need to restrict orals to
particular types of questions. Rather our quantitative and qualitative results suggest
important benefits to students from their use. Our sample size was relatively small and
was restricted to biology students at a single institution; if our results prove represen-
tative of broader groups of students, then they support attempts to uphold and enhance
the ‘spirited interchange’ of the oral as a form of assessment in higher education.

Notes on contributors
Mark Huxham is a reader in environmental biology with research interests in estuarine ecology
and climate change as well as in student learning and assessment.

Fiona Campbell is head of professional development with interests in the student voice and
staff development.

Jenny Westwood is personal development team leader in student affairs with special responsi-
bility for the ‘confident futures’ programme.

References
Armstrong, K. 2009. The case for God: What religion really means. London: Bodley Head.
Barnett, R. 2007. A will to learn: Being a student in an age of uncertainty. Maidenhead:
McGraw-Hill/Open University Press.
Campbell, F., L. Beasley, J. Eland, and A. Rumpus. 2007. Hearing the student voice project: Promoting and encouraging the effective use of the student voice to enhance professional development in learning, teaching and assessment within higher education. Final report. Edinburgh: Napier University. http://www2.napier.ac.uk/studentvoices/profdev/download/Final_report_studentvoice_web_v2.pdf
Carter, M. 2008. Frege’s writings on language and the spoken word. http://western-philosophy.
suite101.com/article.cfm/freges_writings_on_language_and_spoken_word#ixzz0HgAdX
4Pl&D (accessed December 13, 2009).
Gent, I., B. Johnston, and P. Prosser. 1999. Thinking on your feet in undergraduate computer
science: A constructivist approach to developing and assessing critical thinking. Teaching
in Higher Education 4, no. 4: 511–22.
Hayes, J. 2009. Who is it that can tell me who I am? London: Constable.
Horton-Salway, M., ed., 2007. Social psychology: Critical perspectives on self and others.
Milton Keynes: Open University Press.
Hounsell, D., N. Falchikov, J. Hounsell, M. Klampfleitner, M. Huxham, K. Thompson, and S.
Blair. 2007. Innovative assessment across the disciplines: An analytical review of the
literature. York: Higher Education Academy.
Jackson, J. 2005. Clever Hans. A horse’s tale. http://www.skeptics.org.uk/article.php?dir=
articles&article=clever_hans.php (accessed January 15, 2010).
Joughin, G. 1998. Dimensions of oral assessment. Assessment & Evaluation in Higher Education
23: 367–78.
Joughin, G. 2007. Student conceptions of oral presentations. Studies in Higher Education 32,
no. 3: 323–36.
Joughin, G., and G. Collom. 2003. Oral assessment. http://www.heacademy.ac.uk/assets/York/
documents/resources/resourcedatabase/id433_oral_assessment.pdf (accessed September
13, 2010).
Lloyd, P., A. Mayes, A. Manstead, P. Meudell, and H. Wagner. 1984. Introduction to psychology.
An integrated approach. London: Fontana.
Schoultz, J., R. Säljö, and J. Wyndhamn. 2001. Conceptual knowledge in talk and text: What
does it take to understand a science question? Instructional Science 29: 213–36.
Smith, Z. 2000. White teeth. London: Penguin Books.
Wakeford, R. 2000. Principles of assessment. In Handbook for teaching and learning in higher
education, ed. H. Fry, S. Ketteridge, and S.A. Marshall, 42–61. London: Routledge.
Waterfield, J., and B. West. 2006. Inclusive assessment in higher education: A resource for
change. Plymouth: University of Plymouth. http://www.plymouth.ac.uk/pages/view.asp?
page=10494 (accessed January 15, 2010).
Wisker, G. 2004. Developing and assessing students’ oral skills. Birmingham: Staff Education
and Development Association.