10 Higher Education Pedagogy
10.1 Introduction
Three things have dominated my thinking in the context of this chapter.
The first relates to the advice given to me by one of the commissioning
editors: that the chapter should address what “someone with no prior back-
ground who is thinking of getting into some educational research (e.g., a com-
puting teacher) [should] know before they start.” The resulting conundrum
is how to summarize, effectively and usefully, several centuries of educational
research and thinking in one quite short chapter. The way that I have tried
to do this is to encourage readers to consider the nature of learning in higher
education as comprising “what we know,” “what skills we have to put our know-
ledge into effect,” and “what we choose to do with the knowledge and skills at
our disposal.” This division has certainly helped me to articulate what I know
about this topic and to provide computing education (CEd) researchers with
some key questions to punctuate their reading. The approach may even stimu-
late a research agenda for CEd researchers.
The second addresses the nature of educational research, with particular ref-
erence to researching educational practices from within a discipline. Boyer is
credited with reminding the academic world that academic tasks need to be
undertaken in a scholarly manner. Boyer went on to provide some ideas about
what scholarship actually entails and how we might, as a profession of schol-
arly teachers, undertake its evaluation or assessment. Boyer published his
important work on academic scholarship in 1990 and followed it in 1996 with
some observations on assessing scholarship (Boyer, 1990, 1996). Boyer identi-
fied four broad categorizations of scholarship (discovery, essentially research;
integration, emphasizing multidisciplinary approaches; application, generally
referred to nowadays as engagement; and teaching) and conceptualized these
as interacting with one another as a university professor goes about his or her
everyday work. Some might suggest that the focus of Boyer’s papers was to
address an increasing disconnect between university research, which by and
large was scholarly, and teaching, which was often less scholarly. For each of
his scholarships, Boyer suggested six standards: clear goals, adequate prepar-
ation/appropriate procedures, appropriate methods, significant results, effective
presentation, and reflective critique. Boyer went on to identify four approaches
that should in general terms be used to evaluate the quality of the scholarship
276
Downloaded from https://www.cambridge.org/core. University of Sussex Library, on 02 Mar 2019 at 11:05:34, subject to the Cambridge Core terms of use, available at
https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781108654555.011
278 SHEPHARD
not find the same acceptance outside of the particular discipline as research
that is.
So what’s that about big educational ideas?
are expected to learn in the course, rather than what teachers hope to teach. The
teaching becomes student-centered rather than teacher-centered. O’Neill and
McMahon (2005) reviewed much of the relevant literature in this area.
Biggs (1999) describes two interacting variables that influence the effectiveness
of learning. The first is the level of engagement adopted by learners. At one
level, they may simply be trying to memorize what is taught, a relatively
low-level cognitive process. At a higher cognitive level, they may be
involved in theorizing, reflecting, and abstracting about what has been taught.
The second variable is the extent to which the teaching method obliges learners
to be actively involved. Problem-based learning, for example, requires activity
by learners, whereas a standard instructional lecture may not. It is then possible
to describe how individual learners approach their learning in relation to these
two variables. Some will require very little teacher-induced activity to be highly
engaged with the task, while others will only engage when required to, such as
by the imposition of a required learning activity. Biggs maintains that “good
teaching is getting most students to use the higher cognitive level processes that
the more academic students use spontaneously” (Biggs, 1999, p. 4). The variable
that teachers have some control over is their teaching method, and many of the
teaching approaches that can be adopted to encourage learners to use higher-
cognitive-level processes fall into the category of learner-centered approaches to
teaching. Naturally, there is another complication, in that the teaching methods
likely to work best depend to a large extent on the circumstances.
learning, and therefore be most likely to extend beyond the constraints of the
course, or degree, both temporally and spatially, as well as to ask what forms of
learning will be most valued by their students. In the long term, will students
most value the knowledge and skills that they are learning, or might students be
simultaneously learning and valuing opportunities to consider what they might
wish to do with this knowledge and these skills?
10.4.1 Assessment
When it comes to assessment, those who appreciate the benefits of constructive
alignment, outcome-based assessment, and the intended learning outcome (ILO) have
a substantial advantage over educators who do not. Advocates for constructive alignment and its
logically consequent processes will have in mind the nature of the intended
learning outcome when they design their assessments. They will, for
example, find it easier to describe learning outcomes in the cognitive domain than
in the affective domain, and similarly, find it far easier to describe lower-order cog-
nitive learning (such as remembering and understanding) than higher-order cogni-
tive learning (such as application, synthesis, and evaluation). This ease of describing
outcomes translates directly into ease of designing assessments that explore the
attainment of these outcomes. Formal, traditional written examinations are likely
unsurpassed as tools to assess what students have remembered or can explain.
Practical examinations involving computers are likely good tools to assess what
students can actually do with the computer. Neither are necessarily ideal tools to
allow students to demonstrate their higher-order cognitive skills such as creativity
and the ability to evaluate the worth of their knowledge. Nor are they necessarily
tools to enable students to demonstrate a range of outcomes involving both cog-
nitive and affective learning, such as teamwork or behaving ethically. Supervised
project work may provide better tools for these purposes.
The same framework helps teachers to address a range of other pedagogical
questions. Assessment theory demands that assessments are both valid (in that
they actually assess what it is that the students are supposed to have learned
and not something else) and reliable (in that a given student with a given level
of learning would likely achieve similarly in a different but similar assessment).
By constructively aligning the assessment with the ILO, teachers will find it rela-
tively straightforward to ensure validity (by ensuring that the assessment provides
learners with an opportunity to demonstrate their attainment of the learning
outcome) and reliability (by designing a range of assessments, all of which will
adequately assess the intended outcome). Alternative approaches do exist, of
course, and no doubt some higher education teachers do design their assessments
in the context of “let’s keep the field open and see what students produce.”
Higher education teachers often ask, sometimes in the context of the idea
that “assessment somehow drives learning” (rather than, as we might
prefer, “learners’ interests drive learning” or “motivating teachers drive
learning”), whether all of the intended outcomes need to be assessed, or just some
of them. For a course with many detailed intended outcomes, assessing everything
could place substantial assessment burdens on both the teacher and the student.
There is no doubt that higher education assessment nowadays is a more sub-
stantial enterprise than it was when I was a student. In days gone by, assessment
was addressed in end-of-year exams, whereas nowadays assessments in some uni-
versities, some subject areas, and some departments appear to occur every week
with assessed essays and assignments and project reports, as well as end-of-year
exams. The scenario plays out differently in different subject areas, but in general
terms, the question brings additional assessment considerations into play:
• Many intended outcomes depend on others and therefore appear in nested
arrangements. Formal assessments can focus on the major intended outcomes
and, in doing so, also address the minor, more basic outcomes.
• Although the process of designing a complex set of ILOs may be a great asset
to a university teacher as they design their teaching and their assessments,
too much complexity will almost certainly detract from the authenticity of
the learning and teaching situation and of the assessment. Authenticity in
this context refers to the nature of the task that a professional might engage
in. Often this is in the form of a complex report or the creation of a product,
rather than the detail of a particular action. Teachers will need to assure
themselves that the students are capable of undertaking the particular actions,
but they may best do this by designing their assessment to be as authentic to
professional practice as possible. Biggs and Tang provide extensive advice on teaching
and assessing “functional knowledge” that, at its heart, needs to be authentic
(Biggs & Tang, 2007).
• Arguably, the majority of assessment in higher education ought to be forma-
tive in nature rather than summative. Formative assessment enables teachers
to provide feedback to students in a way that does not jeopardize their final
grade for any particular course. This feedback enables students to reassess
their mental models of whatever is being taught and to actually benefit from
the interaction that they are having with their teacher. In many cases, a sum-
mative assessment provides little opportunity for constructive feedback, and
there is substantial evidence that many students fail to engage with the feed-
back that comes with summative assessments (Price et al., 2011).
• How can we know that the individual asking for academic credit has created
the work being assessed? Personally, I think that this increasingly poses
higher education’s greatest challenge (Löfström et al., 2015).
10.4.2 Evaluation
For me, the obligation to evaluate my teaching is not part of a neoliberal plot
to commodify my contribution to higher education, nor a managerial process
imposed on me to demonstrate my effectiveness, although for others it may be
these things. For me, the evaluation of my teaching and of the consequences
of my teaching is a part of being a professional and is closely allied to my
commitment to scholarship. I’m interested in how well I do what I do and how
to improve what I do. A great deal of research has been undertaken on how best
to evaluate teaching in higher education, but in general, most researchers come to
the conclusion that the evaluation process should make use of a diverse range of
indicators. In my preferred order of importance, these are the following:
Self-evaluation: This category owes much to the work of Schön in developing
the idea of the reflective practitioner (Schön, 1983). Reflective practitioners
think deeply about what they’re doing and what they’ve done. This deep
thinking provides higher education teachers with a great deal of insight
into how they’re teaching and how effectively this is converted into
learning. Many professions require their professionals to keep portfolios,
and for many of us, the teaching portfolio is an important tool to help us
gather evidence of what we’ve done and how well we have done it, as well
as the vehicle within which an evidence-based reflective commentary can
be situated. If our teaching isn’t effective, we should be the first to know
about it.
Peer evaluation: Teaching in higher education should ideally not be a lonely
pursuit. Most of us work in teams to support our students, and most of
us encourage teamwork in our teaching. Peer review, or peer evaluation, is
not always an easy option, but if done well, it can greatly contribute to an
effective evaluation of teaching. Peers do not have to attend our lectures.
They can comment on our course designs or on the feedback that we give
to our students. They can second-mark our assessments or meet with our
students to discuss any concerns they may have.
Outcome measures: No matter how wonderful the teaching may or may not
appear to be, if all of the students fail their exams, then surely something is
amiss. Learning outcomes are, in my view, an essential element of an evaluative
process. But as with other contributors to evaluation, there need to be some
checks and balances. If the same university teacher, no matter how
well-intentioned, designs the course, teaches the course, and assesses
the course, they may not be able to maintain the level of objectivity
necessary for a fair evaluation. On the other hand, in some higher education
situations nowadays, higher education teachers are under a great deal of
pressure to maintain class numbers and to boost retention figures. It does
appear to me that some form of external oversight is an essential element
of evaluative contributions based on outcome measures. External oversight
may be in the form of an external assessor, external examiner, or a regular
contribution from a peer.
Feedback from students: In my own institution, it often appears as if the
only data relevant to the evaluation of teaching are from student feed-
back. Students are routinely asked to comment on how well organized
the teacher was, how well they stimulated interest, and a myriad of other
concerns that may or may not be relevant to an evaluation of teaching. As
well as contributing to teachers’ reflection on their teaching, such data also
contribute in a quite direct way to promotion applications; in my opinion,
they are divisive and incorrectly applied in this way. Nevertheless,
student feedback on their experiences is no doubt an important element
of an evaluation of teaching, and most teachers are pleased to hear stu-
dent opinions on many facets of the learning and teaching environment.
By and large, I suggest that students’ opinions on how well organized
the teacher was are more valuable than our students’ opinions on, as
examples, the content of the course or the academic level of this content.
There is, of course, a danger that too much power in the hands of groups
of students, who are not as committed to the discipline being studied as
the teachers are, will contribute to a softening, or even a dumbing down,
of the teaching. In these situations, something else is needed to balance
this effect.
It does seem to me that CEd researchers reading this chapter would benefit
from reflecting on their own assumptions about critical thinking and its related
dispositions and about their role in developing them. Do you think, as examples,
that computer science teachers should be teaching their students not only the
skills involved in truth-seeking, but also the obligation to seek the truth? Are
you open-minded about such matters and happy to teach your students to be
open-minded? Will you do this openly, or will it be hidden within your approach
to computer science, to research, to teaching, or to life?
References
Biggs, J. (1999). What the student does: Teaching for enhanced learning. Higher
Education Research & Development, 18(1), 57–75.
Biggs, J., & Tang, C. (2007). Teaching for Quality Learning at University, 3rd edn.
Maidenhead, UK: Society for Research into Higher Education and Open
University Press.
Bloom, B. S., Hastings, J. T., & Madaus, G. F. (1971). Handbook on the Formative and
Summative Evaluation of Student Learning. New York: McGraw-Hill.
Boyer, E. L. (1990). Scholarship Reconsidered: Priorities of the Professoriate. Princeton,
NJ: Carnegie Foundation for the Advancement of Teaching.
Boyer, E. L. (1996). From scholarship reconsidered to scholarship assessed. Quest,
48(2), 129–139.
Cech, E. A. (2014). Education: Embed social awareness in science curricula. Nature,
505(7484), 477–478.
Driscoll, M. P. (2005). Psychology of Learning for Instruction. Boston, MA: Allyn and
Bacon.
Entwistle, N. (1997). Introduction: Phenomenography in higher education. Higher
Education Research & Development, 16(2), 127–134.
Facione, P. A. (1990). Critical Thinking: A Statement of Expert Consensus for Purposes
of Educational Assessment and Instruction. Millbrae, CA: The California
Academic Press.
Facione, P. A. (2000). The disposition toward critical thinking: Its character, measure-
ment, and relation to critical thinking skill. Informal Logic, 20(1), 61–84.
Gibbs, G., & Coffey, M. (2004). The impact of training of higher-education teachers on
their teaching skills, their approach to teaching and the approach to learning of
their students. Active Learning in Higher Education, 5(1), 87–100.
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational
Research, 77(1), 81–112.
Havnes, A., & Prøitz, T. S. (2016). Why use learning outcomes in higher education?
Exploring the grounds for academic resistance and reclaiming the value of
unexpected learning. Educational Assessment, Evaluation and Accountability,
28(3), 205–223.
Hutchings, P., & Shulman, L. (1999). The scholarship of teaching: New elaborations,
new developments. Change, 31(5), 10–15.
Krathwohl, D., Bloom, B., & Masia, B. (1988). Taxonomy of Educational Objectives,
Handbook II: The Affective Domain. New York: David McKay Co.