
Learning Connection—Teaching Guide

Assessment: Using objective assessments


▪ Introduction
▪ Different types of objective assessment
▪ Different levels of understanding
▪ Objective assessment and Graduate Qualities
▪ Writing and using objective assessments
▪ Evaluation of objective assessments
▪ Conclusion
▪ Resources and references

Introduction
Objective assessment is a generic term referring to tests where the marking is objective as there are
clear right and wrong answers. Objective assessments can be summative, where marks are used to
calculate a final grade for the student, or formative, where student efforts are met with feedback on
their performance that does not directly contribute to final grades. Objective assessments can be set
to test a wide topic area, albeit sometimes superficially. For large classes, they represent an efficient way of testing many students within short timeframes, particularly when computers are employed to assist marking. As with all forms of assessment, it is necessary to align
assessment with the desired learning outcomes for the course. The writing of appropriate questions
(also called items) and answer options, including ‘distractors’, can be a complex exercise. This
Guide outlines different types of objective assessments and examines their use in regard to the level
of student understanding that they test and the desired graduate quality learning outcomes required.
It also offers advice and further resources for writing and evaluating objective assessments that are
fair and designed to allow students to accurately measure their level of understanding.

Different types of objective assessment


There are several types of objective assessment, falling into 3 main groups: true/false, multiple choice and extended matching types.
True/False
True/False questions, in their simplest form, comprise a statement (or stem) that contains only one
idea, to which the student selects either true or false in response. These questions can also be nested
into a scenario, for example:
In a 3-year-old female patient who has recently been diagnosed with acute lymphocytic leukaemia, you would consider treatment with:

T F Allopurinol
T F Cytosine arabinoside
T F Cytarabine
T F Methotrexate

Multiple choice
These objective assessments contain a question stem, and normally 4 answer options, one of which
is correct and the others not. A variant of this is multiple response questions, where more than one
option is correct. Multiple choice items can also pose more elaborate questions that require
significant interpretation and analysis. These have been called ‘context dependent questions’.1
Multiple question stems and options can arise from a single context. Contexts can include case
studies, scenarios and may include graphical and tabulated data to be analysed.
Extended matching
Extended matching assessments normally consist of a group of questions that are written around a
single theme. A longer list of answer options is used repeatedly for a series of question stems, each
worth a mark. The student must select the most appropriate answer option from the one long list for
each of the question stems.2

Different levels of understanding


When looking at designing any assessment, it is worthwhile considering the depth of understanding
the student will need to use to complete the task. John Biggs, in his book ‘Teaching for quality learning at University’, describes the levels of understanding that students use in learning and relates them to the verbs used in assessment. He writes:
High-level, extended abstract involvement is indicated by such verbs as ‘theorize’,
‘hypothesize’, ‘generalize’, ‘reflect’, ‘generate’ and so on. They call for the student to
conceptualise at a level extending beyond what has been dealt with in actual teaching.
The next level of involvement, relational, is indicated by ‘apply’, ‘integrate’, ‘analyse’,
‘explain’ and the like; they indicate orchestration between facts and theory, action and
purpose. ‘Classify’, ‘describe’, ‘list’ indicate a multi-structural level of involvement:
the understanding of boundaries, but not of systems. ‘Memorise’, ‘identify’,
‘recognise’ are unistructural: direct, concrete, each sufficient to itself, but
minimalistic.3

He argues that most objective assessments, in particular multiple choice questions, prompt students
to use superficial learning skills, where students are required to memorise facts and identify correct
answers from incorrect ones. Studies have shown that students mostly use superficial study
techniques to prepare themselves for multiple choice question tests.4 Partly for these reasons, it is
recommended that summative objective assessments contribute only a proportion of the final grade,
and the remaining assessments examine higher order understandings. It is also advised that a
considerable amount of effort be put into the construction of good tests, which examine not only uni-
structural and multi-structural understandings, but extend this to relational, and perhaps even
extended abstract levels of understanding.

Objective assessment and Graduate Qualities


Many objective assessments aim only to measure the uni-structural and multi-structural levels of
understanding of the body of knowledge of the topic (Graduate Quality 1). This level of test may be
suitable in the early stages of a course or program when terms, definitions and concepts are being
introduced.
When objective assessment is formative, incorporating feedback, it allows students to critically
evaluate their own understanding of an area and determine what is required to meet the desired
learning outcomes, thus assisting in the development of Graduate Quality 2. UniSAnet online
quizzes can be used to create flexible and student-centred learning environments.
Objective assessments can include more complex types (e.g. context dependent questions) that can be
used to evaluate students’ problem solving abilities, thus developing Graduate Quality 3. For
example, they may require analysis of case studies and scenarios that incorporate images, graphs, or
tables of data.

Writing and using objective assessments
General
▪ Focus on the desired student outcomes.
▪ Make sure that the standard of the question is appropriate to the group being tested.
▪ Make objective assessments only a portion of the total student assessment.
▪ For formative quizzes, embed feedback for correct and incorrect answer options.
▪ If quizzes are being used for summative assessment, prepare a trial formative quiz for students
to practise on.
▪ Use smaller, frequent quizzes and tests instead of a single large examination.
▪ Write several banks of equivalent questions and rotate their use in examinations.
▪ If writing questions that require higher order skills (e.g. problem solving), check the mark allocation for the question. If you wish to award extra marks for this type of question, this information needs to be available to students on the examination paper.
▪ Remember to include instructions to candidates on the examination paper. Instructions should
state ‘choose the best answer’ or ‘choose all those that apply’ as appropriate.
▪ If you have prepared only one set of objective assessment items, arrange for examination papers to be collected so that they do not leave the examination room.
▪ Some people have used Endnote to store banks of objective assessment items for easy selection
and assembly into different examination papers.5 However, if you are considering using online
summative objective tests, consider using TellUs2 to build and store your question bank.6
Writing question stems
▪ Include as much of the item as possible in the stem; however, the stem should be as brief as possible.
▪ Students should know what is being asked from the stem (not from the choices).
▪ Help students see CRUCIAL words in the stem by making them all capitals or italics.
▪ Don’t lift questions from text books.
▪ Use negatively worded questions when there are more right answers than wrong ones.
▪ Avoid double, or triple negatives.
▪ Do not include 2 concepts in the same question.
▪ Be careful about grammatical clues that indicate the correct option (e.g. plural, singular, tense, a
or an).
▪ Consider using graphics to illustrate the question.
Writing answer options
▪ Avoid obviously wrong answers; all alternatives should be reasonable, with one clearly better than the others.
▪ Ensure that answer options all relate to the question in some way.
▪ Use at least 4 alternatives, unless using true/false objective tests.
▪ Use ‘all of the above’ phrases sparingly.
▪ Avoid using ‘always’ and ‘never’; these absolutes are giveaways.
▪ Avoid making correct answers longer or shorter than the distractor answer options.

▪ Alternative answers should be grammatically consistent with the question.
▪ Vary the position of the correct answer (note: if using UniSAnet quizzes, the answer options can be randomised automatically; a simple shuffling approach is sketched below).
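
Where quizzes are delivered outside UniSAnet, the same randomisation can be done in code. The following is a minimal Python sketch; the function name and the sample options are hypothetical, chosen only for illustration:

import random

def shuffled_options(options, correct):
    """Return the options in random order plus the new position of the correct answer."""
    order = list(options)   # copy, so the original option list is untouched
    random.shuffle(order)   # in-place shuffle from the standard library
    return order, order.index(correct)

# Hypothetical item with four alternatives; 'Cytarabine' is the intended answer.
options = ["Allopurinol", "Cytarabine", "Methotrexate", "Vincristine"]
shuffled, answer_position = shuffled_options(options, "Cytarabine")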
Writing feedback
▪ Try to give feedback for each incorrect answer, explaining why it is incorrect, and how the error
in understanding may have arisen.
▪ If possible, link feedback to relevant areas of the program or textbook (if online, hyperlinks can be used).
▪ Include motivational feedback for correct answer selections. One possible structure for storing per-option feedback is sketched below.
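
One way to organise such feedback is to store it alongside each answer option. The Python sketch below uses a hypothetical data structure and invented question content, purely for illustration; online tools such as UniSAnet quizzes provide their own forms for entering this:

question = {
    "stem": "The difficulty index of an item is best described as:",
    "options": [
        # (option text, is_correct, feedback shown to the student)
        ("The proportion of students answering the item correctly", True,
         "Correct: see 'Evaluation of objective assessments' in this Guide."),
        ("The difference in performance between upper and lower scoring groups", False,
         "Not quite: that describes the discrimination index."),
        ("The time taken to answer the item", False,
         "Incorrect: timing plays no part in either index."),
    ],
}

def feedback_for(question, choice):
    """Return (is_correct, feedback) for the option the student selected (0-based index)."""
    text, is_correct, feedback = question["options"][choice]
    return is_correct, feedback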
Marking objective assessments
One of the benefits of objective assessment is that the marking of student work can be streamlined
in various ways.
▪ Non-academic staff can be employed to assist with the marking of paper-based objective assessments.
▪ Computers can be used to scan and mark paper-based objective tests.
▪ Online objective assessments can be delivered, marked and analysed using computers.
Another related Teaching Guide, Assessment: Computer assisted assessment, covers this area in
greater detail.

Evaluation of objective assessments


It is possible to evaluate objective assessments across the whole test (frequency histograms of
students’ scores and determining average scores) and item by item (difficulty index and
discrimination index), to determine whether the questions are pitched at the appropriate level or are misleading and require revision. This Guide expands on individual item evaluation, as
these parameters are not as commonly available as frequency histograms and averages. A complete
example of analysis of objective assessment data is available.7
The difficulty index
This parameter allows questions to be ranked in order of difficulty.
Difficulty = (number of students giving the correct answer for the item) / (number of students taking the test)
The difficulty index ranges from 0.0 to 1.0; it is recommended that questions scoring <0.10 or >0.90 be revised or discarded.
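
As a minimal sketch, the calculation is a single division. The short Python function below is illustrative only; the function name is an assumption, and the figures match item 1 in the item analysis table later in this Guide:

def difficulty_index(correct_count, total_students):
    """Proportion of students who answered the item correctly (0.0 to 1.0)."""
    return correct_count / total_students

# 10 of 26 students answered item 1 correctly:
print(round(difficulty_index(10, 26), 1))  # 0.4, well inside the 0.10-0.90 range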
The discrimination index
This parameter allows you to determine how well each item was able to discriminate the abilities of
students. Various formulas are available, as are ways of determining the proportion of students that
define the ‘upper’ and ‘lower’ groups.
Discrimination index = (number of correct answers for the item in the upper group - number of correct answers for the item in the lower group) / (number of students in one of the groups)
The discrimination index ranges from –1.0 to +1.0; it is recommended that questions scoring <0.00 be revised or discarded.

For example, suppose 100 students take the test; the top 25 form the upper group and the bottom 25 form the lower group. Fifteen of the upper group answered item 1 correctly, while only 2 of the lower group did. The discrimination index for item 1 would be (15 - 2)/25 = 0.52.
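
The same calculation can be expressed as a short Python function. This is a sketch with hypothetical names, reproducing the figures above:

def discrimination_index(upper_correct, lower_correct, group_size):
    """(correct answers in upper group - correct answers in lower group) / size of one group."""
    return (upper_correct - lower_correct) / group_size

print(discrimination_index(15, 2, 25))  # 0.52: item 1 discriminates well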
Performing an item analysis
Item analysis of an objective assessment with 5 questions (1-5), each with 5 options (A-E). The number of students giving each answer option is tabulated.

ITEM    A     B     C     D     E     Diff   Discr   Omit   Revise
1       10*   2     3     8     2     0.4     0.3     1
2       8     7     5*    4     1     0.2    -0.4     1      ###
3       1     4     5     15*   0     0.6     0.7     1
4       20    5     0*    0     0     0.0     0.1     1      ###
5       5     6     6*    3     5     0.2     0.2     1

* = correct answer; number of students taking the test = 26 (each item was omitted by one student).
Item 2 needs to be revised as its discrimination index is negative (-0.4). Item 4 needs to be discarded or revised as both its difficulty and discrimination indices are too low.
Having difficulty and discrimination data available for questions facilitates the selection of equivalent items for tests offered to students in subsequent iterations of the course.
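
For a full item analysis, both indices can be computed directly from raw responses. The Python sketch below is one possible implementation under the conventions used above: the upper and lower groups are the top and bottom quarters by total score, and items are flagged with the revision thresholds recommended earlier. All names and the sample data are hypothetical:

def analyse_items(answers, key, group_frac=0.25):
    """answers: one list of chosen options per student; key: the correct option per item.
    Returns a list of (difficulty, discrimination, needs_revision) tuples, one per item."""
    n = len(answers)
    group_size = max(1, int(n * group_frac))
    # Rank students by total score to form the upper and lower groups.
    scores = [sum(a == k for a, k in zip(student, key)) for student in answers]
    ranked = sorted(range(n), key=lambda s: scores[s], reverse=True)
    upper, lower = ranked[:group_size], ranked[-group_size:]
    results = []
    for item, correct in enumerate(key):
        difficulty = sum(answers[s][item] == correct for s in range(n)) / n
        discrimination = (sum(answers[s][item] == correct for s in upper)
                          - sum(answers[s][item] == correct for s in lower)) / group_size
        # Flag items using the thresholds recommended above.
        needs_revision = difficulty < 0.10 or difficulty > 0.90 or discrimination < 0.0
        results.append((difficulty, discrimination, needs_revision))
    return results

# Example call with three students and two items (hypothetical data):
# analyse_items([["A", "C"], ["A", "D"], ["B", "C"]], key=["A", "C"])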

Conclusion
Objective assessment, as a proportion of the total assessment, is recommended if it can be designed
in such a way that it addresses the desired learning outcomes of the student group and assists in the
development of Graduate Qualities. Although the design and maintenance of objective assessments can be a labour intensive job, the time investment is often balanced by the potential automation of the marking process. As a formative assessment tool, objective assessments with embedded
feedback are potentially an extremely valuable student-centred learning tool.

Resources and references


An online resource addressing most issues of multiple choice questions can be found in the
University of Cape Town’s manual on designing and managing multiple choice questions
http://www.uct.ac.za/projects/cbe/mcqman/mcqcont.html (accessed 30th August 2002).

1. Cannon, R. and Newble, D. (2000) A handbook for teachers in universities and colleges: A guide to improving teaching methods (4th edition). Kogan Page, London, pp 187-188.
2. See an example of extended option questions in Cannon, R. and Newble, D. (note 1 above) pp 187-188.
3. Biggs, J. (1999) Teaching for quality learning at University, The Society for Research into Higher Education and Open University Press, Buckingham UK, p 46 (emphasis added).
4. See Scouller, K. (1998) The influence of assessment methods on students’ learning approaches: multiple choice question examination vs. essay assignment, Higher Education, 35, 453-472; and Scouller, K. (1997) Students’ perceptions of three assessment methods: assignment essay, multiple choice question examination, short answer examination, paper presented to the Higher Education Research and Development Society of Australasia, Adelaide, 9-12 July.
5. See this Endnote story at http://www.endnote.com/App_Note20.htm (accessed 31st August 2002).

6. Another teaching guide in this series, ‘Assessment: Using computer-assisted assessment’, offers advice on the use of TellUs2 for objective assessments.
7. See Christine Ballantyne’s example of an item analysis of 209 students participating in a 40 item multiple choice test: http://cleo.murdoch.edu.au/evaluation/pubs/mcq/score.html (accessed 30th August 2002).

Teaching guides: This is one of a series of guides on teaching and learning at the University of South Australia prepared by staff from Learning Connection. Other guides can be accessed at http://www.unisanet.unisa.edu.au/learningconnection/teachg/index.htm

For further information:
• Talk to your Dean: Teaching and Learning
• Visit Learning Connection on your campus or online at http://www.unisanet.unisa.edu.au/learningconnection/staff.htm

Developed by Learning Connection © University of South Australia, August, 2002, version 2

