(Issues in Higher Education) Maria João Rosa, Alberto Amaral (Eds.) - Quality Assurance in Higher Education - Contemporary Debates-Palgrave Macmillan UK (2014)
Titles include:
Jürgen Enders and Egbert de Weert (editors)
THE CHANGING FACE OF ACADEMIC LIFE
Analytical and Comparative Perspectives
John Harpur
INNOVATION, PROFIT AND THE COMMON GOOD IN HIGHER EDUCATION
The New Alchemy
Tamsin Hinton-Smith
WIDENING PARTICIPATION IN HIGHER EDUCATION
Casting the Net Wide?
V. Lynn Meek
HIGHER EDUCATION, RESEARCH, AND KNOWLEDGE IN THE ASIA-PACIFIC
REGION
Guy Neave
THE EUROPEAN RESEARCH UNIVERSITY
Guy Neave
THE EVALUATIVE STATE, INSTITUTIONAL AUTONOMY AND RE-ENGINEERING
HIGHER EDUCATION IN WESTERN EUROPE
The Prince and His Pleasure
Maria João Rosa and Alberto Amaral (editors)
QUALITY ASSURANCE IN HIGHER EDUCATION
Contemporary Debates
Mary Ann Danowitz Sagaria
WOMEN, UNIVERSITIES, AND CHANGE
Snejana Slantcheva
PRIVATE HIGHER EDUCATION IN POST-COMMUNIST EUROPE
Sverker Sörlin
KNOWLEDGE SOCIETY VS. KNOWLEDGE ECONOMY
Bjørn Stensaker, Jussi Välimaa, Cláudia Sarrico (editors)
MANAGING REFORM IN UNIVERSITIES
The Dynamics of Culture, Identity and Organisational Change
Voldemar Tomusk
THE OPEN WORLD AND CLOSED SOCIETIES
You can receive future titles in this series as they are published by placing a
standing order. Please contact your bookseller or, in case of difficulty, write to us
at the address below with your name and address, the title of the series and the
ISBN quoted above.
Customer Services Department, Macmillan Distribution Ltd, Houndmills,
Basingstoke, Hampshire RG21 6XS, England
Quality Assurance
in Higher Education
Contemporary Debates
Edited by
Maria João Rosa
and
Alberto Amaral
Full Professor, A3ES, CIPES and University of Porto, Portugal
Selection, introduction and editorial matter © Maria João Rosa and
Alberto Amaral 2014
Individual chapters © Respective authors 2014
Softcover reprint of the hardcover 1st edition 2014 (ISBN 978-1-137-37462-2)
All rights reserved. No reproduction, copy or transmission of this publication
may be made without written permission.
No portion of this publication may be reproduced, copied or transmitted
save with written permission or in accordance with the provisions of the
Copyright, Designs and Patents Act 1988, or under the terms of any licence
permitting limited copying issued by the Copyright Licensing Agency,
Saffron House, 6–10 Kirby Street, London EC1N 8TS.
Any person who does any unauthorized act in relation to this publication
may be liable to criminal prosecution and civil claims for damages.
The authors have asserted their rights to be identified as the authors of this
work in accordance with the Copyright, Designs and Patents Act 1988.
First published 2014 by
PALGRAVE MACMILLAN
Palgrave Macmillan in the UK is an imprint of Macmillan Publishers Limited,
registered in England, company number 785998, of Houndmills, Basingstoke,
Hampshire RG21 6XS.
Palgrave Macmillan in the US is a division of St Martin’s Press LLC,
175 Fifth Avenue, New York, NY 10010.
Palgrave Macmillan is the global academic imprint of the above companies
and has companies and representatives throughout the world.
Palgrave® and Macmillan® are registered trademarks in the United States,
the United Kingdom, Europe and other countries.
ISBN 978-1-349-47702-9 ISBN 978-1-137-37463-9 (eBook)
DOI 10.1057/9781137374639
This book is printed on paper suitable for recycling and made from fully
managed and sustained forest sources. Logging, pulping and manufacturing
processes are expected to conform to the environmental regulations of the
country of origin.
A catalogue record for this book is available from the British Library.
A catalog record for this book is available from the Library of Congress.
Contents

1 Introduction
Maria João Rosa and Alberto Amaral

Part I The Frontier and Its Shifts

2 Where Are Quality Frontiers Moving to?
Alberto Amaral

3 Quality Enhancement: A New Step in a Risky Business? A Few Adumbrations on Its Prospect for Higher Education in Europe
Guy Neave

Part II New Challenges, New Instrumentalities

4 Transparency about Multidimensional Activities and Performance: What Can U-Map and U-Multirank Contribute?
Don F. Westerheijden

5 Assessment of Higher Education Learning Outcomes (AHELO): An OECD Feasibility Study
Diana Dias and Alberto Amaral

6 Risk, Trust and Accountability
Colin Raban

7 Risk Management: Implementation
Anthony McClaran

8 Quality Enhancement: An Overview of Lessons from the Scottish Experience
Murray Saunders

Part III Regional Setting

9 European Trends in Quality Assurance: New Agendas beyond the Search for Convergence?
Bjørn Stensaker

Index

Notes on Contributors
Accreditation Agency since the same year, as well as for the Institutional
Evaluation Programme of the European University Association since
2012. Liliya Ivanova was a member of the Executive Committee of the
European Students Union in 2012–2013.
This volume comprises four parts. The first two chapters (by Alberto
Amaral and Guy Neave) provide a broad panorama of recent developments
References
Bennett, D. (2001) ‘Assessing Quality in Higher Education’, Liberal Education,
87(2), 1–4.
Neave, G. (1988) ‘On the Cultivation of Quality, Efficiency and Enterprise: An
Overview of Recent Trends in Higher Education in Western Europe 1986–1988’,
European Journal of Education, 23(2/3), 7–23.
OECD (2009) Assessment of Higher Education Learning Outcomes (Paris: OECD).
Schwarz, S. and Westerheijden, D. (2004) Accreditation and Evaluation in the
European Higher Education Area (Dordrecht: Kluwer Academic Publishers).
Part I
The Frontier and Its Shifts
2
Where Are Quality Frontiers
Moving to?
Alberto Amaral
Introduction
Neave has argued that ‘quality is not “here to stay” if only for the self-
evident reason that across the centuries of the university’s existence in
Europe, it never departed’ (Neave, 1994, p. 16), and that evaluation is
‘an intrinsic part of policy making’ (Neave, 1998, p. 265). Indeed, quality
has been a permanent concern of universities from the early days of
their foundation.
In the Middle Ages it was already possible to distinguish three
major models of quality assurance. The old universities of Oxford and
Cambridge were self-governing communities of scholars that had the
right to remove unsuitable masters and to co-opt new members using
the equivalent of peer review mechanisms. The University of Paris,
where the chancellor of the cathedral of Notre Dame had the power to
rule on the content of studies, might be seen as the archetype of quality
assessment in terms of accountability. And the model of the University
of Bologna, ruled by students who hired the professors on an annual
basis and monitored their attendance and the quality of their teaching,
might be seen as an extreme example of the principle of customer
satisfaction presently in vogue.
However, it was after the early 1980s that quality became a public
issue, giving rise to what Neave (1996) termed the emergence of the
Evaluative State. This development can be explained as a consequence of
a number of convergent factors such as massification – which
created much more heterogeneous higher education systems in terms
of institutions, students and professors – the increasing role of market
regulation, the emergence of new public management and a loss of trust
in higher education institutions and their professionals.
Trust
The level of trust between higher education institutions and state and
society plays an important role in determining the major characteristics
of quality assessment systems.
Neave (1994, 1996) proposed a Law of Anticipated Results to explain
the behaviour of institutions that try to guess what will be required
by government policy and act in anticipation, making it difficult to
determine whether change is actually imposed from the top down. The
conduct of institutions frequently gives ‘the impression of autonomous
institutional action to what is in fact an institutional reaction to actual
or anticipated external forces, directives or events’ (Meek, 2002, p. 250).
However, the success of institutions depends strongly on the level of
trust they enjoy from the government.
In the Netherlands the strong trust between government and insti-
tutions allowed Dutch universities to claim for themselves the major
responsibility for quality, convincing the Ministry that they should con-
trol the quality assurance system through an independent agency, the
VSNU. Neave (1994, p. 127) presents the case of the Flemish universities
as ‘a remarkable example of the Law of Anticipated Results’. Flemish
universities anticipated the government’s movements in quality by
initiating a quality assessment system in collaboration with the Dutch
Recent developments
The design of the ranking system intends to follow the ‘Berlin Principles
on the ranking of higher education institutions’, which stress the need
to take into account ‘the linguistic, cultural, economic and historical
contexts of the educational systems being ranked’. The approach is
to compare only institutions that are similar in their missions and
structures. The project is linked to the idea of a European classifica-
tion (‘mapping’) of higher education institutions. The feasibility study
includes focused rankings on particular aspects of higher education at
institutional level (e.g., internationalisation and regional engagement),
and two field-based rankings for business and engineering programmes.
As Kaiser and Jongbloed explain:
Risk management
Risk management is a process imported from business. It aims to iden-
tify, assess and prioritise risks in order to create plans to minimise or
even to eliminate the impact of negative events. Risk management is
widely used by actuarial societies and more recently by government and
the public sector too.
The Quality Enhancement Framework (QEF) was introduced in
Scotland in 2003. This emphasises ‘the simple and powerful idea that
the purpose of quality systems in higher education is to improve stu-
dent experiences and, consequently, their learning’ (QAA Scotland,
2008, p. 1). It is interesting to note that the QEF introduces the notion
of risk: enhancement is the result of change and innovation that will
frequently involve risk. Institutions are expected to manage this risk
in a way that provides reasonable safeguards for current students. The
review process will continue to recognise and support effective risk
management and adopt a supportive and not punitive role in this con-
text (QAA Scotland, 2008, p. 4).
The 2005 Quality Risk Management Report (Raban et al., 2005, p. 5)
states that as early as 1998 there was a reference to academic risk and
its management:
Although the White Paper states that ‘all higher education providers
must continue to be part of a single assurance framework’ (BIS, 2011,
p. 37) it proposes that the risk of each institution must be assessed and
that the level of risk will determine the frequency of QAA’s reviews.
Institutions with low risk – with a demonstrable record of high-quality
provision – will be subject to less frequent full institutional reviews than
new providers or institutions offering lower quality of provision. At the
same time, the document proposes the implementation of a set of ad
hoc triggers that will determine the intervention of QAA for conducting
an immediate partial or full review whenever there are concerns about
compliance with quality standards.
The White Paper (BIS, 2011) raises serious concerns. On the one hand
it is possible that trust in institutions is in danger of being sacrificed to
the aim of appeasing students who were recently asked to pay a larger
contribution to the costs of education. On the other hand the risk-based
approach raises concerns that the new system will no longer address
quality enhancement for the whole system. Instead of quality enhance-
ment, robust quality assurance procedures will be focused on detecting
and eliminating those cases where quality standards are at risk. That
is why both ‘trust – building on staff’s professional values and their
aspirations – and dialogic accountability are themselves preconditions
for enhancement, risk assessment and the effective local management
of risk’ (Raban et al., 2005, p. 50). See Chapters 6 and 7 for a more
detailed discussion of risk management.
Despite what has been mentioned, the shifting frontier of quality assess-
ment is not exclusive to the UK. Similar dynamics are no less evident
in Europe, Latin America and the United States. They open up a wider
perspective and also offer the opportunity for cross regional comparison
and to take stock of the views and opinions of different stakeholders
(agencies, academics and students) about changes taking place in the
quality domain. Developments in the United States deserve careful
scrutiny in particular, not least because of its long history of quality
processes dating from the nineteenth century.
Survey answers from academics regarding their perceptions of the effects
of internal quality management show that they support the idea that
quality systems should promote quality improvement and innovation in
higher education (Rosa, Sarrico and Amaral, 2011). The promotion of
innovation and flexibility, and the reliance on internal quality systems, are
compatible with the QE approach. However, changes in the governance of
higher education institutions under the influence of NPM have strongly
reduced or even eliminated collegiality, making academics more like
employees and less like professionals. This enforced weakening of academics’
dedication to governance may well reduce the collegial time devoted to
assuring and improving academic standards.
Students also play an important role in the development of European
higher education, notably through the activities of the European
Students Union. The courage of students to criticise openly the Leuven
Communiqué while the representatives of higher education institutions
kept silent could be seen as an example of the capacity of the younger
generations to shape and improve European policies.
The fast development of information technology may also be a fac-
tor in quality assurance processes. One example is the emergence of
MOOCs (Massive Open Online Courses) that are a form of ‘direct-to-
students’ education, ‘removing faculty from the heart of the student
experience’ (Eaton, 2012) and relying on the students’ initiative to get
what they can from their learning experience. However, so far MOOCs
only offer students ‘badges’ certifying their mastery of skills in some areas
and there are only very limited cases of awarding credits for MOOCs
(one example is Colorado State University-Global campus). At present,
none of the US accreditation agencies accredit elements of courses, and
they still consider that faculty have a very important role in students’
educational experiences. CHEA has recently opened a discussion on the
possible accreditation of MOOCs as a tool for judging their quality.
Conclusions
References
Amaral, A. and Rosa, M.J. (2004) ‘Portugal: Professional and Academic
Accreditation – The Impossible Marriage?’, in S. Schwarz and D. Westerheijden
(eds), Accreditation and Evaluation in the European Higher Education Area
(Dordrecht: Kluwer Academic Publishers), pp. 127–57.
Amaral, A. (2007) ‘From Quality Assurance to Accreditation – A Satirical View’,
in J. Enders and F. van Vught (eds), Towards a Cartography of Higher Education
Policy Change (UNITISK, Czech Republic), pp. 79–86.
Amaral, A. and Neave, G. (2009a) ‘On Bologna, Weasels and Creeping
Competence’, in A. Amaral, G. Neave, C. Musselin and P. Maassen (eds),
European Integration and the Governance of Higher Education and Research
(Dordrecht: Springer), pp. 271–89.
Amaral, A. and Neave, G. (2009b) ‘The OECD and Its Influence in Higher
Education: A Critical Revision’, in A. Maldonado and R. Bassett (eds),
International Organizations and Higher Education Policy: Thinking Globally, Acting
Locally? (London and New York: Routledge), pp. 82–98.
HEFCE (2001) Risk Management: A Guide to Good Practice for Higher Education
Institutions (London: HEFCE).
Kaiser, F. and Jongbloed, B. (2010) ‘New transparency instruments for European
higher education: The U-Map and the U-Multirank projects’, paper presented
to the 2010 ENID Conference, 8–11 September 2010.
Kassim, H. and Menon, A. (2002) ‘The Principal-Agent Approach and the Study
of the European Union: A Provisional Assessment’, Working Paper Series.
Birmingham: European Research Institute, University of Birmingham.
Kuh, G.D. (2008) High-Impact Educational Practices: What They Are, Who Has
Access to Them, and Why They Matter (Washington, DC: Association of
American Colleges and Universities).
Le Grand, J. and Bartlett, W. (1993) Quasi Markets and Social Policy (London:
Macmillan Press).
Leuven Communiqué (2009), http://www.ond.vlaanderen.be/hogeronderwijs/
bologna/conference/documents/leuven_louvain-la-neuve_communiqué_
april_2009.pdf (accessed 22 September 2013).
Martens, K., Balzer, C., Sackmann, R. and Weymann, A. (2004) Comparing
Governance of International Organisations – The EU, the OECD and
Educational Policy, TransState Working Papers No.7, Sfb597 ‘Staatlichkeit im
Wandel (Transformations of the State)’, Bremen.
Meek, L. (2002) ‘On the Road to Mediocrity? Governance and Management of
Australian Higher Education in the Market Place’, in A. Amaral, G. A. Jones and
B. Karseth (eds), Governing Higher Education: National Perspectives on Institutional
Governance (Dordrecht: Kluwer Academic Publishers), pp. 235–60.
Neave, G. (1992) ‘On Bodies Vile and Bodies Beautiful: The Role of “Buffer”
Organisations’, Higher Education Policy, 5(3), 10–11.
Neave, G. (1994) ‘The Policies of Quality: Development in Higher Education in
Western Europe 1992–1994’, European Journal of Education, 29(2), 115–34.
Neave, G. (1996) ‘Homogenization, Integration and Convergence: The Cheshire
Cats of Higher Education Analysis’, in V.L. Meek, L. Goedegebuure, O. Kivinen,
and R. Rinne (eds), The Mockers and the Mocked: Comparative Perspectives
on Differentiation, Convergence and Diversity in Higher Education (London:
Pergamon Press), pp. 26–41.
Neave, G. (1998) ‘The Evaluative State Reconsidered’, European Journal of
Education, 33(3), 265–84.
Neave, G. (2004) ‘The Temple and Its Guardians: An Excursion into the Rhetoric
of Evaluating Higher Education’, The Journal of Finance and Management in
Colleges and Universities, 1, 211–27.
OECD (2008) Assessment of Learning Outcomes in Higher Education: A Comparative
Review of Selected Practices (Paris: OECD).
OECD (2009a) Assessment of Higher Education Learning Outcomes (Paris: OECD).
OECD (2009b) Analytical Framework for the Contextual Dimension of the AHELO
Feasibility Study (Paris: OECD).
Pascarella, E.T. and Terenzini, P.T. (1991) How College Affects Students (San
Francisco: Jossey-Bass).
Pascarella, E.T. and Terenzini, P.T. (2005) How College Affects Students, Volume 2
(San Francisco: Jossey-Bass).
QAA (1998) ‘The Way Ahead’, Higher Quality, 4, October.
3
Quality Enhancement: A New Step in a Risky Business? A Few Adumbrations on Its Prospect for Higher Education in Europe
Guy Neave
Introduction
What are the prospects and benefits, advantages and promise that the
application of Quality Enhancement and the advent of Risk Analysis
may both bring with them as new and significant additions to the
instrumentality of the Evaluative State? Like most issues that have to
do with weighing up of Quality, and with the conditions and criteria
associated with valorising knowledge, the implications that follow from
the way the happy descriptor is operationalised and the implications
that in turn, flow from the process of operationalisation are, to say the
least, delicate. They are delicate, given the economic situation most of
our higher education systems currently confront. This situation they
have in varying degrees had to face over the past four years or more.
Even the most unbridled of economists can give no clear statement as
to how long the situation is likely to last.
One of the more salient features of the Evaluative State is the weight it
places on ‘policy as action’ as opposed to ‘policy as reflection’. Absence
of speed, failure to fall in with the expeditive ethic, we have been told
these 20 years past until we are all blue in the face, is manifest evidence
of inefficiency, of resistance to change, of obduracy in the face of
the beneficent workings of the Prince and his efforts to harness higher
education to speeding up the transition of our nations towards the
Knowledge Economy. Concentration on other than the immediate and
the short term is not always the essence of our business. And policy as
reflection tends sometimes to be seen as ‘swinging the lead’, as derelic-
tion of duty and an implied unwillingness wholeheartedly to embrace
the responsibilities the Prince wishes us to assume.
Yet, this is precisely our task: to step aside from ‘policy as action’.
Instead, we have the opportunity to examine what the main construct
for operationalising and developing quality, efficiency and enterprise
(Neave, 1988) – namely, the Evaluative State – has achieved. And within
that broader framework, to weigh up the significance that Quality
Enhancement and risk factoring hold out for shaping it further.
There are many ways we can move on this. I, for my part, will move
in from a long-term perspective to these issues by taking an historical
approach. Historians are sometimes useful for holding up such a mirror.
But this can also be a risky business. As with Caliban in The Tempest, the
holding up of mirrors tends to enrage those who see such reflections as
caricatures. Still, if you want to know where you ought to go, it is as well
to know how you came from where you have come. It is sometimes a
consoling experience.
driving forces that came together to form the Evaluative State. I will
examine variations in the aims and purposes that different nations laid
upon their edition of the Evaluative State.
In theory, three sets of assumptions are made. The first, often argued
by international agencies, is that individual techniques, procedures and
practices are themselves ‘value neutral’ on the grounds of their objec-
tive or quantifiable nature. A variation on this line of argument holds
that such items must necessarily be introduced because they show a
proven efficiency in fulfilling the successful attainment of objectives in
one system that a second seeks to attain. It is more blessed to receive
than to give. Not surprisingly, there is a negative version to this cal-
culus, namely, that if one is not blessed by the receiving, one is most
certainly cursed if one rejects it. This latter line of persuasion is often
brought to bear in urging individual nations further down the path of
the Bologna and Lisbon agendas. It is known in the trade as ‘naming
and shaming’ (Gornitzka, 2007). Finally, there are the assumptions the
donor implicitly makes. These assumptions are not greatly dissimilar
from those made by the receiver, with one additional and very con-
siderable one taken for granted. Precisely because the ‘donor’ holds
himself to be successful, he presumes that in part such success may be
attributed to the practices, instrumentality and ways of proceeding he
has devised and which are ‘tested and proven’. They have made him
primus – or secundus – inter pares. So the same happy outcome – or, the
avoidance of continued national ignominy – will follow as a result of
others following his example. This is determinism of a very high order.
What the donor tends to play down in this higher education version of
‘la mission civilisatrice’ is that the practices he offers are themselves the
outcome of negotiations that rested on cultural, political, historic and
for that matter economic norms that underlie and permeate his own
higher education system. No less important, it is precisely these norms
and the margin of manoeuvre they permit that shape the way decisions
were reached in the first place (Neave, 2012b, p. 158). They are not
always the same elsewhere.
Part Two
Stage 1: origins, scope and purpose
When we examine the early moves towards the Evaluative State, we are
struck by the marked differences in rationale that drove it forward as
well as by the differences in strategic scope and purpose. The
construction of the Evaluative State mirrored the quest for quality, effi-
ciency and enterprise in higher education (Neave, 1988, pp. 7–23). True,
no other European State went as far as Portugal did by nailing the flag
of quality assessment to the mast of higher education and including it
in the Constitution of 1976 under the heading of article 76 paragraph 2
(Amaral and Carvalho, 2008). Thus, arguably Portugal’s drive towards
the Evaluative State, which incidentally began at Porto almost 20 years
ago, built on a degree of formal continuity largely absent in France,
Britain and the Netherlands. Until the promulgation of the Portuguese
Higher Education Guideline Law of 2007, which reorganised Portugal’s
universities and polytechnics around the tenets of Neo-Liberalism – that
is, competitive deflation, market flexibility and de-regulation (Gayon,
2012) – shaping the Evaluative State sought more to improve estab-
lished patterns of authority in higher education. The beginnings of the
Portuguese Evaluative State were far more ‘Root and Branch’ in nature.
Likewise with the earliest example of the drive towards the Evaluative
State – in France. The first step was the creation, in 1984, of the
Comité National d’Évaluation – an independent body reporting not to
the Minister in charge of Higher Education, but to the President of
the Republic. The CNE was the launchpad for the French Evaluative
State and introduced systematic external review of higher education
(Staropoli, 1987, pp. 127–31). Whilst its purpose was very clearly to
‘enhance quality’, its objective did not, as current English and Scottish
initiatives propose, focus on those aspects which elsewhere fall under
the rubric of Hochschuldidaktik. Rather, the French interpretation of
enhancing quality focused on the ‘delivery’ of new courses to meet
spiralling student numbers and greater diversity of demand in a sys-
tem that was rapidly moving from mass to universal higher education.
Initial priorities sought to enhance the quality of provision by speeding
up the rate of delivery.
The second stage in shaping the Evaluative State, both in Europe and
Great Britain, involved two dimensions: the internal refinement of pro-
cedures and the definition of ownership. Viewed schematically, each
passed through two stages. During Stage 1, the development of internal
procedures entailed a detailed and systematic review of individual HEIs,
a painstaking, time-consuming and costly procedure. In France, institu-
tional review was extended to cross-system reviews of disciplinary areas
and higher education’s performance in particular regions. In retrospect,
Stage 1 was an exercise in mapping out, identifying and validating a
limited number of indicators that were both discriminatory – in the
precise meaning of that term – and sensitive. The burden of Stage 2
was to set in place benchmarks or standards of expected performance.
Certain systems, such as Sweden, also saw proposals to ‘lighten the
review cycle’ and convert it into an ‘alert system’ for identifying and fully
examining only those establishments that showed obvious difficulty
(Högskoleverket, 2005).
Key to Stage 2 in the saga of the Evaluative State, was the ‘relocation of
ownership’. This took place relatively speedily in the UK, but was more
protracted in France and Portugal. It saw the placing of responsibility
for refining assessment procedures into Agencies of Public Purpose: the
British Quality Assurance Agency in 1997, and the Portuguese Conselho
Nacional de Avaliação do Ensino Superior in 1998. In France, the gradual
ousting of the Comité National d’Évaluation from its original status of
relative independence was cautious and incremental. Nevertheless, its
merger in 2007 into the Agence d’Évaluation de la Recherche et de
l’Enseignement Supérieur (AERES) effectively moved it back into the
national process of policy formation rather than standing as honest
broker to one side of it (Neave, 2012a, p. 198).
In both France and Portugal, redefinition of ownership and its
administrative location assumed the weight of law: in France with the
Law of 10 August 2007, which reorganised the ‘new university’, and
in Portugal, exactly one month later, with the passing of the Higher
Education Guideline Law. From the standpoint of the adepts of Neo-
Liberalism and, more explicitly, New Public Management, here were
very satisfactory examples of ‘sinners come to repentance’. Legislation
moved these two systems firmly on to Stage 2 in the development of
the Evaluative State.
that has driven the Evaluative State forward over the past two decades.
What light does this sustained dynamic shed on Quality Enhancement?
Is the way Quality Enhancement is currently construed necessarily the
last word to be had on it? This is highly unlikely, above all in times of
unprecedented economic crisis. There are two very good reasons for tak-
ing this view. The first stems from the intelligence the Evaluative State
now possesses about the immediate and present condition of higher
education. The second is a derivative function of the first: namely, the
intelligence available about the state of higher education may also be
used to weigh up and assess the appropriateness of national policies that
have brought it to this condition (Neave, 2012a, pp. 138–39). Whether
Quality Enhancement can be seen as a remedy to previous oversights,
others are better placed than I to give an answer.
In Stages 2 and 3 the Evaluative State forged new instruments for plot-
ting performance, output, institutional achievement and cost, and in
certain instances tracked student transition from higher education to
work. Such a battery of instruments serves various agendas: accountability,
checking ‘the reaction time’ to national priorities and providing
a back-channel for indirect ‘steering’.
New instruments do not just bring new insights and new norms of
institutional performance. They also bring with them new perceptions
of higher education as well as embedding them in higher education’s
discourses. They provide a new account and thus new explanations for
institutional behaviour. As a potential instrumentality and as a new
lease of life – or prospect of death – in the groves of academe, risk taking
opens a new and hitherto unbroached possibility: that of institutional
failure.
It is unkind to point out that in their intent to open up higher edu-
cation completely to ‘the market’, government and its advisers also
admit that failure is the price that may have to be paid. Unkind though
it is, higher education, like the Scout, must ‘be prepared’. The cynic
will point out that it is less devastating for institutions to fail in a fully
market driven system, than to have them fail in a system partially sup-
ported by public finance. Institutional failure is after all proof of the
purgative effects of competition. But whether the government can be
made responsible for the débâcle is far less evident in a higher education
system fully driven by market forces than when higher education is
financed from public pennies.
The salient feature of risk taking is not that it stands as yet another
example of grafting techniques and a dead vocabulary, forged in the
corporate sector, onto higher learning and research. Once risk taking as
a technique and as an instrument is injected into higher education,
institutional failure no longer reflects the inadequacies of public policy. It
reflects, rather, the incompetence of the individual university, its leader-
ship, its teaching staff, its ‘goods and chattels, ox, ass, man servant, maid
servant and all that in it is’. If the English government’s avowed intent
to proceed to a fully market driven system is taken in conjunction with
risk taking as an institutional responsibility extended to the academic
domain, from a broader strategic perspective the juxtaposition takes on
all the dimensions of a ‘damage limitation exercise’. If some institutions
fail – and it would be exceedingly good to know what the operational
definitions of failure are – others will nevertheless succeed. Responsibility
for failure falls on the individual institution, not on the consequences
of national policy. Thus, risk taking fences off institutional failure from
policy failure. Instead, the responsibility for the situation national policy
creates is ‘offloaded’ onto precisely those individual institutions least
able to deal with the situation that policy has created.
Still, from an historical perspective, risk calculus has immense symbolic
importance. This lies in the final evaporation of that vital optimism that
has driven higher education forward over the past 50 years. Optimism is
now in cold storage for at least the next evaluatory cycle. With the sober
contemplation of failure, we have also to contemplate what in France is
known as ‘la fin des trente glorieuses’. The three ‘golden decades’, from 1950
to 1980, are definitely over. How long the blizzard will last, not even the
most canny economist or hedge fund director will hazard an opinion.
Risk calculus, I would suggest, is redolent with technocratic pessi-
mism. Realistic it might be; unavoidable, even. But by admitting the
possibility of institutional failure, we turn our backs on the 50-year
adventure that drove higher education onward and upward. Whether
risk calculation is another way of using the market to ration higher
education, only time will tell. From now on, higher education is indeed
a risky business.
References
Amaral, A. and Carvalho, T. (2008) Autonomy and Change in Portuguese Higher
Education (Matosinhos: CIPES).
Becher, T. (1989) Academic Tribes and Territories: Intellectual Enquiry and the
Cultures of Disciplines (Milton Keynes: Open University Press).
48 Quality Assurance in Higher Education
Chevaillier, T. (2004) 'The Changing Rôle of the State in French Higher
Education: From Curriculum Control to Accreditation', in S. Schwartz-Hahn and
D. Westerheijden (eds), Accreditation and Evaluation in the European Higher
Education Area (Dordrecht: Kluwer Academic Publishers), pp. 159–74.
Clark, B.R. (1983) The Higher Education System: Academic Organization in Cross-
National Perspective (Berkeley, Los Angeles and London: University of California
Press).
Clark, B.R. (1998) Creating Entrepreneurial Universities: Organizational Pathways of
Transformation (Oxford: Elsevier for IAU Press).
Clark, B.R. (2003) Sustaining Change in Universities: Continuities in Case Studies and
Concepts (Milton Keynes: Open University Press for SRHE).
Dill, D.D. (1997) ‘Focusing Institutional Mission to Provide Coherence
and Integration’, in M. Peterson, D.D. Dill and L. Mets (eds), Planning
and Management for a Changing Environment (San Francisco: Jossey-Bass),
pp. 171–90.
Gayon, V. (2012) ‘Le château de La Muette: enquête sur une citadelle du con-
formisme intellectuel’, Le Monde Diplomatique, July.
Gornitzka, Å. (2007) 'The Lisbon Process: A Supranational Policy Perspective',
in P.A.M. Maassen and J.P. Olsen (eds), University Dynamics and European
Integration (Dordrecht: Springer Books), pp. 155–78.
Heitor, M. and Horta, H. (2011) ‘Science and Technology in Portugal: From
late Awakening to the Challenge of Knowledge-Integrated Communities’, in
G. Neave and A. Amaral (eds), Higher Education in Portugal 1974–2009: A Nation,
a Generation (Dordrecht and Heidelberg: Springer Books), pp. 179–226.
Högskoleverket (2005) The Evaluation Activities of the National Agency for Higher
Education in Sweden. Final Report by the International Advisory Board (Stockholm:
Högskoleverket).
Jarratt Report (1985) Steering Committee for Efficiency Studies in Universities
(Chairman Sir Alex Jarratt) (London: CVCP).
Moodie, G. and Eustace, R. (1985) Power and Authority in British Universities. The
Development of Higher Education into the 1990s (London: HMSO).
Nardi, P. (1992) 'Relations with Authority', in H. de Ridder-Symoens (ed.),
A History of the University in Europe, Volume 1, Universities in the Middle Ages
(Cambridge: Cambridge University Press), pp. 280–306.
Neave, G. (1988) ‘On the cultivation of quality, efficiency and enterprise: An
overview of recent trends in higher education in Western Europe 1986–1988’,
European Journal of Education, 23(2/3), 7–23.
Neave, G. (1996a) ‘The Evaluation of the Higher Education System in France’, in
R. Cowen (ed.), World Yearbook of Education 1996: The Evaluation of Systems of
Higher Education (London: Kogan Page), pp. 66–81.
Neave, G. (1996b) ‘Homogenization, Integration and Convergence: The Cheshire
Cats of Higher Education analysis’, in V. Lynn Meek, L. Goedegebuure,
O. Kivinen and R. Rinne (eds), The Mockers and Mocked: Comparative Perspectives
on Differentiation, Convergence and Diversity in Higher Education (Oxford:
Pergamon), pp. 26–41.
Neave, G. (2012a) The Evaluative State, Institutional Autonomy and Re-engineering
Higher Education in Western Europe: The Prince and His Pleasure (Basingstoke and
New York: Palgrave Macmillan).
Introduction
are: how much transparency, about what, to whom, and for what
stakeholder purposes? It seems tautological that with different stake-
holders, there will not be a single answer to these questions. In practice,
that means that there cannot be a single transparency tool that satisfies
all stakeholders’ information needs.
The transparency tools that we discuss in this chapter, U-Map and
U-Multirank, aim to be multiple tools to different users, packaged in
single ‘engines’. To understand their difference from current ranking
systems, we need to investigate briefly the basic concept of diversity of
higher education first, and remind ourselves of some basics of process
analysis next.
Excursion to concepts
Diversity
Usually, if no further explanation is given, diversity in higher educa-
tion is understood in a vertical sense. ‘Better’ or ‘worse’ is emphasised,
leaving in the dark – more or less – whether this is about prestige,
activities or performance, or a mix of all three at the same time. As
a result of vertical differentiation, rankings are likely to contribute to
wealth inequality and expanding performance gaps among institutions
(Van Vught, 2008). On the one hand, rankings and especially league
tables purport to show inequality among institutions that would otherwise
be hard to distinguish: universities are created equal (legally),
and regulation as well as funding formulae often aim to maintain this
'legal homogeneity' (Neave, 1995). On the other hand, rankings draw
artificial lines, showing that like is not alike, and thereby risk
becoming institutionalised and creating real differences
(Espeland and Sauder, 2007). Similarly, rankings have exacerbated
competition for the leading researchers and best younger talent, and are
likely to drive up the price of high-performing researchers and research
groups (Marginson, 2006), making them financially affordable only for
the richest institutions. The conceptual framework focusing on vertical
differentiation therefore creates a ‘Matthew effect’ (Matthew 13:12); that
is, a situation where already strong institutions are able to attract more
resources from students (for example, increase tuition fees), government
agencies (for example, research funding) and third parties, and thereby
strengthen their market position even further. Hazelkorn has shown
that policy-makers and institutional managers react to rankings –
whatever their merits and demerits – in ways conducive to creating a
‘Matthew effect’ (Hazelkorn, 2011).
The lure of league tables as the most common form of ranking is that
they promise a simple way to show which institutions are the best. For
consumers of information this is enticing indeed, because this form of
information is highly efficient (1 is better than 2, which is better than 3,
and so on) and does not demand a high investment of time and effort
on the users’ side to understand how higher education institutions are
working.
In the way current rankings are constructed, indicators of research
productivity are defined in terms of journal articles registered in
international databases (largely ignoring books and other products of
research, as well as most articles not written in English). As a consequence, the existing
‘[g]lobal rankings suggest that there is in fact only one model that can
have global standing: the large comprehensive research university’ (Van
der Wende and Westerheijden, 2009), focused on hard science fields that
adhere to the communication model that focuses on English-language
journal articles. This leads me to the concept of horizontal diversity,
which stresses similarities and differences in institutional missions and
profiles, expressed in, for example, different mixes of disciplines and
study programmes.
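The contrast between vertical and horizontal diversity can be made concrete in a short sketch; the institution names, the single 'citations' indicator and the profile labels below are invented for illustration and are not U-Map categories:

```python
# Toy data: three invented institutions with one performance indicator
# and one nominal mission profile each.
institutions = [
    {"name": "Inst A", "citations": 95, "profile": "research university"},
    {"name": "Inst B", "citations": 40, "profile": "university college"},
    {"name": "Inst C", "citations": 70, "profile": "research university"},
]

# Vertical diversity: a league table collapses institutions into one
# ordered list ("1 is better than 2, which is better than 3").
league_table = sorted(institutions, key=lambda i: i["citations"], reverse=True)
print([i["name"] for i in league_table])  # ['Inst A', 'Inst C', 'Inst B']

# Horizontal diversity: a classification groups institutions into
# nominal classes with no intended order of preference.
classification = {}
for inst in institutions:
    classification.setdefault(inst["profile"], []).append(inst["name"])
print(classification)
```

The league table forces a total order onto every institution, whereas the classification only records similarity of mission; that is the sense in which classifications such as U-Map and Carnegie differ from rankings.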
Transparency about horizontal diversity aims to group higher educa-
tion institutions by developing nominal distributions among a number
of classes or characteristics without any (intended) order of prefer-
ence. Classifications give descriptive categorisations of characteristics,
intending to focus on the efforts and activities of higher education and
research institutions, according to the criterion of similarity. After all, a
society needs nursing schools as much as medical university faculties for
an operating room team to function successfully. The worldwide model
of classifications, the Carnegie classification of higher education institu-
tions in the United States (www.carnegiefoundation.org/classifications),
was introduced in 1973 as a tool for researchers; over the years, it
turned into a major, authoritative concept for all of the United States
and beyond (McCormick and Zhao, 2005). The success of the Carnegie
classification is due to the fact that the Carnegie Foundation has a
generally accepted authority as an 'objective', that is, disinterested,
think tank on higher education. This success means that the Carnegie
classification has increasingly come to be understood by the general
public as a ranking of vertical diversity, thereby driving American
higher education institutions to become 'doctoral-granting' universities
if they wanted to maintain public (and political) prominence. To counter
this perverse effect of its success, the 2005 version of the Carnegie
classification was radically changed to reflect (again) that it wanted
Transparency about Multidimensional Activities 57
U-Map
The European U-Map classification has been developed since 2005.
U-Map is a user-driven, multidimensional European classification
instrument that allows all higher education (and research) institutions
to be characterised across six dimensions. By doing so, U-Map allows
for the creation and analysis of specific activity ‘institutional profiles’,
offering ‘pictures’ of the activities of an institution on the various
indicators of all six dimensions. U-Map can be accessed through two
interconnected online tools (a Profile Finder and a Profile Viewer) that
allow stakeholders to analyse the institutional profiles, for example for
Figure 4.1 U-Map 'sunburst charts' comparing two higher education institutions (the dimension labels visible in the charts include regional engagement, research involvement, knowledge exchange, international orientation and student profile)
U-Multirank
In a leap beyond U-Map, development has begun on U-Multirank, a
multidimensional ranking of higher education institutions, meant to
serve institutions from around the world. A first project, a
proof of principle, ran between 2009 and 2011. The field study included
responses from 115 higher education institutions from around the
world, 29% of which were also represented in the top 500 of the ARWU
ranking (Academic Ranking of World Universities, commonly known as
the ‘Shanghai ranking’) (Van Vught and Ziegele, 2012, p. 137). The sec-
ond two-year phase, which upscaled U-Multirank to around 500 higher
education institutions, began at the end of 2012. If the second phase
proves successful, U-Multirank must stand on its own feet; the European
Commission, which supports the first two phases, does not intend to
get involved in continuous rankings of higher education worldwide.
The main type of question U-Multirank is designed to investigate is
how well higher education institutions are performing their different
tasks. From the activities portrayed in U-Map, we are moving here
to performances, that is, output and impact indicators. Again, in the
current state of development, we have also had to include some pro-
cess indicators. Then again, one person’s process is another person’s
output: for prospective students, for instance, the process of teaching
By 2012, U-Map had become operational and the roll-out phase had
started. There was sufficient stability in the methodology and indica-
tors to focus on adding higher education institutions from different
(European) countries to the database. In 2013 the Profile Finder and
Profile Viewer tools became publicly operational, with over 300 higher
education institutions in the database.
Regarding U-Multirank, as mentioned, its first project was concluded
in 2011. That project was a ‘proof of concept’. Given the character of
this first project, ranking results have not been published. Moreover,
ranking a good 100 higher education institutions, or around 50 study
programmes in three fields of study, would not make much sense.
The publication resulting from the project focused on the feasibility
of the indicators, data collection methods and so on (Van Vught and
Ziegele, 2012).
At the moment of writing, the second project, the first large-scale
implementation of U-Multirank, has started. By 2014 it is scheduled to
lead to a ranking that includes around 500 higher education institu-
tions from around the world.
Note
1. University X is better than University Y if and only if it has a higher score
on at least one indicator and not a single worse score (that is, there is weak
dominance of X over Y, in mathematical terms).
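The weak-dominance rule in the note can be expressed directly; the indicator vectors below are invented for illustration:

```python
def weakly_dominates(x, y):
    """True if profile x weakly dominates profile y: x scores at least
    as well on every indicator and strictly better on at least one
    (higher scores taken as better)."""
    pairs = list(zip(x, y))
    return all(a >= b for a, b in pairs) and any(a > b for a, b in pairs)

# Hypothetical indicator scores for three institutions.
X, Y, Z = (3, 5, 4), (3, 4, 4), (4, 2, 4)

print(weakly_dominates(X, Y))  # True: X matches Y everywhere, beats it once
print(weakly_dominates(Y, X))  # False
print(weakly_dominates(X, Z), weakly_dominates(Z, X))  # False False: incomparable
```

Because many pairs of institutions are incomparable under this rule, it yields a partial order rather than a single league-table position, which is precisely the multidimensional point.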
References
Bod, R. (2010) De vergeten wetenschappen: Een geschiedenis van de humaniora. [The
forgotten sciences: A history of the humanities] (Amsterdam: Bert Bakker).
Dulleck, U. and Kerschbamer, R. (2006) ‘On Doctors, Mechanics, and Computer
Specialists: The Economics of Credence Goods’, Journal of Economic Literature,
44(1), 5–42.
Elton, L. (2004) ‘Goodhart’s Law and Performance Indicators in Higher
Education’, Evaluation and Research in Education, 18(1–2), 120–28.
Espeland, W.N. and Sauder, M. (2007) ‘Rankings and Reactivity: How Public
Measures Recreate Social Worlds’, American Journal of Sociology, 113(1), 1–40.
Gaberscik, G. (2010) ‘Überlegungen zum Thema Qualität in Lehre und Studium
sowie Forschung und Technologie’, Qualität in der Wissenschaft, 4(2), 37–47.
Hazelkorn, E. (2011) Rankings and the Reshaping of Higher Education: The Battle for
World-Class Excellence (London: Palgrave Macmillan).
Marginson, S. (2006) Global University Rankings: Private and Public Goods. Paper
presented at the 19th Annual CHER conference, 7–9 September, Kassel.
McCormick, A. and Zhao, C.-M. (2005) ‘Rethinking and Reframing the Carnegie
Classification’, Change (September/October), 51–57.
Neave, G. (1995) ‘Homogenization, integration and convergence: The Cheshire
cats of higher education analysis’, in V. Lynn Meek, L. Goedegebuure,
O. Kivinen and R. Rinne (eds), The Mockers and Mocked: Comparative Perspectives
on Differentiation, Convergence and Diversity in Higher Education (Oxford:
Pergamon), pp. 26–41.
Van den Broek, A., de Jong, R., Hampsink, S. and Sand, A. (2006) Topkwaliteit
in het hoger onderwijs: Een verkennend onderzoek naar kenmerken van topkwaliteit
in het hoger onderwijs (The Hague: Ministerie van Onderwijs, Cultuur en
Wetenschap).
Van der Wende, M. and Westerheijden, D.F. (2009) ‘Rankings and Classifications:
The Need for a Multidimensional Approach’, in F. van Vught (ed.), Mapping the
Higher Education Landscape: Towards a European Classification of Higher Education
(Dordrecht: Springer), pp. 71–86.
Van Vught, F.A. (2008) 'Mission diversity and reputation in higher educa-
tion', Higher Education Policy, 21(2), 151–74.
Van Vught, F.A. and Ziegele, F. (eds) (2012) Multidimensional Ranking: The Design
and Development of U-Multirank (Dordrecht: Springer).
Van Vught, F., File, J., Kaiser, F., Jongbloed, B. and Faber, M. (2011) U-Map:
A University Profiling Tool – 2011 Update Report (Enschede: CHEPS, University
of Twente).
Vlasceanu, L. and Barrows, L. (eds) (2004) Indicators for Institutional and Programme
Accreditation in Higher/Tertiary Education (Bucharest: UNESCO-CEPES).
Westerheijden, D.F. (2005) ‘Pieken op de kaart: Excellente opleidingen zichtbaar
maken. Een haalbaarheidsonderzoek’, IHEM Thematische Rapporten (Enschede:
CHEPS).
5
Assessment of Higher Education
Learning Outcomes (AHELO):
An OECD Feasibility Study
Diana Dias and Alberto Amaral
Introduction
Martens and colleagues (2004) suggest that the OECD has acquired a
strong capacity for coordination by means of organising procedures
and handling the treatment of their outcomes, which in turn shapes the
initiatives and options that may be entertained in a particular field
of policy (Amaral and Neave, ibid.). Henry et al. (2001) assert that the
OECD, through its work on educational indicators, has gained ‘a climate
of support among policy-makers and analysts across member countries
and even beyond’ (Henry et al., 2001, p. 88). This ability of the OECD
to shape expert opinion, without having developed a strong governance
instrumentality – it lacks both financial clout and legislative capacity
(Amaral and Neave, ibid.) – is strongly supported by cross national and
comparative reports and educational statistics and indicators, such as
Education at a Glance and the Programme for International Student Assessment
(PISA). Indeed, it is well known that much of the power exhibited by
the OECD in acting as a powerful agent in the convergence of higher
education national policies has to do with its technical capacity, namely
its capacity to provide reliable education statistics (Amaral and Neave,
2009) using very sophisticated quantitative tools, such as:
This power has been clearly reinforced by the success of successive PISA
exercises at the level of primary and secondary education. The Programme
for International Student Assessment (PISA) is a standardised OECD
test given to 15-year-olds in OECD countries in order to judge the
effectiveness of the school system by assessing the learning outcomes
of students.
More recently, the OECD has decided to extend its influence over
higher education by creating a new PISA – the AHELO project – for
this very specific sector of education. The 2006 OECD Ministerial
Conference in Athens, concerning quality, equity and efficiency, offered
a golden opportunity that the OECD eagerly seized to strengthen its
influence. In Athens, the Ministers discussed at length how to move
from quantity to quality of higher education, and the OECD Secretary-
General offered the assistance of the organisation in developing new
measures of learning outcomes in higher education, drawing upon its
experience with the PISA survey. In the summary of the Conference
academic staff. A further concern was that a PISA for higher education
could easily be transformed into a simplistic ranking or league table of
institutions.
Our past experience (see Martens and Wolff, 2009) shows that once
open, Pandora’s box is quite difficult to close, even when powerful
governments are involved. Despite technically well grounded negative
opinions, these international organisations will always push forward
the implementation of their 'star' projects. Therefore one could only hope
that some of the OECD's soothing declarations would come true:
AHELO is not a university ranking like the Shanghai Jiao Tong, the
Times Higher Education or any number of others. The designers of
AHELO reject the idea that higher education can be reduced to a
handful of criteria, which leaves out more than it includes. Instead,
AHELO sets out to identify and measure as many factors as possible
influencing higher education, with the emphasis being always on
teaching and learning. (OECD, 2009a)
The initial design of the feasibility study of the AHELO programme con-
sisted of four ‘strands’: three assessments to measure learning outcomes
in terms of generic skills and discipline-related skills (in engineering
and economics) and a fourth value-added strand, research based. The
measurement of generic skills (for example, analytical reasoning, criti-
cal thinking, problem-solving, the practical application of theory, ease
in written communication, leadership ability, the ability to work in a
group and so on) was based on an adaptation of the Collegiate Learning
Assessment (CLA) developed in the United States. For the discipline-
based strands, the feasibility study was focused on disciplines with less
variable study outcomes across countries and cultures, such as medi-
cine, the sciences or economics, building on the approach used in the
Tuning Process for Engineering and Economics.
The value-added strand was not supposed to be measured, as this
would not be compatible with the timeframe of the feasibility study.
Therefore, it was decided that ‘the feasibility study will only explore
different methodologies, concepts and tools to identify promising
ways of measuring the value-added component of education’ (OECD,
2009a, p. 10). Actually, some stakeholders considered that quality in
higher education institutions was closely related to the 'upgrade' in
student learning, as a good indicator of school effectiveness. That is,
students' learning improvement was crucial to understanding the
contribution of higher education institutions to student learning. Thus,
not only should students' learning outcomes be measured at the end of
their studies, but also their growth in learning, so as to portray the net
contribution of the institutions to student learning – or the value added.
The OECD was also aware of the importance of context, although it
also recognised the difficulty of context measurement. The feasibility
study aimed to define the limits of a contextual inquiry and divided
context into four topical areas: physical and organisational charac-
teristics, education-related behaviours and practices, psycho-social
and cultural attributes, and behavioural and cultural attributes. In the
proposed model, student learning outcomes ‘are a joint product of
input conditions and the environment within which learning takes
place’ (OECD, 2009b, p. 4):
Implementation
The feasibility study aimed to test the scientific and practical feasibility
of designing instruments for the assessment of learning outcomes across
diverse national, cultural, linguistic and institutional contexts. The pur-
pose of the feasibility study was to see whether it was practically and
scientifically feasible to assess what students in higher education know
and can do upon graduation within and across these diverse contexts
and if tests can indeed be developed and administered to students. The
feasibility study should demonstrate what is feasible and what could
be feasible, what has worked well and what has not, and should provide
lessons and stimulate reflection on how learning outcomes might be
most effectively measured in the future. In fact, the feasibility study
was designed to explore how learning outcomes could be measured
internationally, providing actual data on the quality of learning and its
relevance to the labour market, with results comparable across languages
and cultural backgrounds.
The implementation of the test was a large-scale exercise with the par-
ticipation of a total of 248 higher education institutions in 17 countries
from different regions of the globe, and the instruments were adminis-
tered to almost 4,900 faculty and 23,000 students chosen from among
those near the end of a Bachelor’s degree. The OECD worked with a
consortium of world experts and teams in the participating countries
to develop and administer the tests. A Technical Advisory Group (TAG)
composed of eight international experts and chaired by Peter Ewell was
responsible for providing advice on matters such as instrument devel-
opment, translation and adaptation procedures, validation activities,
scoring and verification procedures and feasibility evaluations. The TAG
was also asked to review and provide feedback on documents when so
requested.
The feasibility study was implemented in three phases. The first phase
involved the development of the provisional assessment frameworks
and testing instruments appropriate for an international context for
each strand of work (generic skills, economics and engineering), and
Conclusions
The Board also argued that different countries have different motiva-
tions to engage and are at different stages of development, which results
in different levels of engagement. The report has not provided accurate
data on the costs and benefits of participating in a full-scale exercise –
it can only be estimated from the experience of the feasibility study
that costs will be substantial. Finally, there was a further short discus-
sion on the issue of low-stakes versus high-stakes approaches. There
was a strong sense that it would not be possible to pursue a low-stakes
approach and keep it contained as such: inevitably, it would become,
or be used for, high-stakes purposes, especially rankings. There was
therefore no doubt about the Governing Board's strongly negative overall
sentiment towards the low-stakes approach suggested by the Education
Policy Committee (EDPC) in its roadmap for AHELO's longer-term
development.
The considerations of the Governing Board of the IMHE were con-
veyed to the EDPC but so far no reactions are known. The publication of
the third volume of the AHELO feasibility report, including the results
of the March Conference, was postponed to September 2013. It is prob-
able that a full-scale AHELO will not be possible in the near future, at
least not before the analysis of the results of the feasibility study is
completed, the financial situation is fully clarified and an agreement
on the purposes of AHELO is reached.
(AHELO): An OECD Feasibility Study 85
Note
1. A low-stakes approach means the results of the exercise will have no conse-
quences for the students and institutions participating in it. This implies
that governments would not receive data in a form that would allow them to
identify the results by higher education institution.
References
Adams, S. (2006) 'An introduction to learning outcomes: A consideration of
the nature, function and position of learning outcomes in the creation of the
European Higher Education Area', in E. Froment (ed.), EUA Bologna Handbook:
Making Bologna Work, Volume 4 (Berlin: RAABE).
Adams, S. (2008) Learning Outcomes Current Developments in Europe: Update on the
Issues and Applications of Learning Outcomes Associated with the Bologna Process.
Paper presented at the Bologna Seminar on learning outcomes based higher
education: The Scottish experiences, 21–22 February 2008, at Heriot-Watt
University, Edinburgh, Scotland.
Almeida, L.S. (2002) ‘Facilitar a aprendizagem: ajudar os alunos a aprender e a
pensar’, Psicologia Escolar e Educacional, 6, 155–165.
Amaral, A. and Neave, G. (2009) ‘The OECD and Its Influence in Higher
Education: A Critical Review’, in R.M. Bassett and A. Maldonado-Maldonado
(eds), International Organizations and Higher Education Policy. Thinking Globally,
Acting Locally? (New York and London: Routledge), pp. 82–98.
Bennett, D. (2001) ‘Assessing Quality in Higher Education’, Liberal Education,
87(2), 1–4.
Bergen Communiqué (2005) The European Higher Education Area: Achieving the
Goals. http://www.ond.vlaanderen.be/hogeronderwijs/bologna/documents/
MDC/050520_Bergen_Communique1.pdf (accessed 22 September 2013).
Berlin Communiqué (2003) http://www.ond.vlaanderen.be/hogeronderwijs/
bologna/documents/MDC/Berlin_Communique1.pdf (accessed 22 September
2013).
Bloom, B.S. (ed.) (1956) Taxonomy of Educational Objectives: The Classification of
Educational Goals, Handbook I: Cognitive Domain (New York: McKay).
Bologna Declaration (1999) http://ec.europa.eu/education/policies/educ/bologna/
bologna.pdf (accessed 22 September 2013).
Bologna Seminar (2008) http://www.ehea.info/Uploads/Seminars/BS_P_
Report_20080915_FINAL.pdf (accessed 22 September 2013).
Education International (2007) Assessing Higher Education Outcomes: A ‘PISA’
for Higher Education? November 2007, http://download.ei-ie.org/docs/
IRISDocuments/Education/Higher%20Education%20and%20Research/
Higher%20Education%20Policy%20Papers/2008-00036-01-E.pdf (accessed
22 September 2013).
ENQA (2005) Standards and Guidelines for Quality Assurance in the European Higher
Education Area (Helsinki: ENQA).
European Commission (2009) ECTS Users’ Guide (Luxembourg: Office for Official
Publications of the European Commission).
Ewell, P. and Miller, M.A. (2005) Measuring Up on College-Level Learning (San Jose,
CA: National Center for Public Policy and Higher Education).
Ewell, P. (2013) ‘Role of the AHELO Feasibility Study Technical Advisory Group
(TAG)’, Assessment of Higher Education Learning Outcomes, AHELO Feasibility
Study Report, Volume 2: Data Analysis and National Experiences (Paris: OECD),
pp. 152–71.
Harvey, L. and Newton, J. (2006) ‘Transforming quality evaluation: moving
on’, in D. Westerheijden, B. Stensaker and M.J. Rosa (eds), Quality Assurance
in Higher Education: Trends in Regulation, Translation and Transformation
(Dordrecht: Springer), pp. 225–45.
Henry, M., Lingard, B., Rizvi, F. and Taylor, S. (2001) The OECD, Globalisation and
Education Policy (Oxford: Pergamon and IAU Press).
Kennedy, D., Hyland, A. and Ryan, N. (2006) ‘Writing and using Learning
Outcomes’ in Bologna Handbook, Implementing Bologna in Your Institution, C3.4-
1, pp. 1–30.
Kuh, G.D. (2008) High-Impact Educational Practices: What They Are, Who Has
Access to Them, and Why They Matter (Washington, DC: Association of
American Colleges and Universities).
Leuven Communiqué (2009) The Bologna Process 2020: The European
Higher Education Area in the New Decade. http://www.ond.vlaanderen.be/
hogeronderwijs/bologna/conference/documents/Leuven_Louvain-la-Neuve_
Communiqué_April_2009.pdf (accessed 22 September 2013).
London Communiqué (2007) Towards the European Higher Education Area: Responding
to Challenges in a Globalised World. http://www.ond.vlaanderen.be/hogeronder-
wijs/bologna/documents/MDC/London_Communique18May2007.pdf
(accessed 22 September 2013).
Martens, K., Balzer, C., Sackmann, R. and Weymann, A. (2004) ‘Comparing
Governance of International Organisations – The EU, the OECD and
Educational Policy’, TransState Working Papers No. 7, Sfb 597 ‘Staatlichkeit
im Wandel’ (Transformations of the State), Bremen.
Martens, K. and Wolff, K.D. (2009) ‘Boomerangs and Trojan Horses: The
Unintended Consequences of Internationalizing Education Policy through the
EU and the OECD’, in A. Amaral, G. Neave, C. Musselin and P.A.M. Maassen
(eds), European Integration and Governance of Higher Education and Research
(Dordrecht: Springer), pp. 81–107.
Nusche, D. (2008) Assessment of Learning Outcomes in Higher Education:
A Comparative Review of Selected Practices, OECD Education Working Paper
No. 15 (Paris: OECD).
OECD Ministerial Conference (2006) ‘Summary’, http://www.oecd.org/greece/
summarybythegreekministerofnationaleducationandreligiousaffairsmariettagiannakouaschairofthemeetingofoecdeducationministers.htm (accessed 22
September 2013).
OECD (2008) Measuring Improvements in Learning Outcomes (Paris: OECD
Publishing).
OECD (2009a) Assessment of Higher Education Learning Outcomes, leaflet (Paris:
OECD).
OECD (2009b) Analytical Framework for the Contextual Dimension of the AHELO
Feasibility Study (Paris: OECD).
(AHELO): An OECD Feasibility Study 87
Introduction
Risk, Trust and Accountability 89
Risk-based regulation
factors that might indicate that the provision offered by that institution
could be placed at risk in the future. These factors relate to the char-
acteristics of the provider or of its provision, and not to the external
‘systemic’ risks for which the regulators and their political masters
themselves bear some responsibility (FSA, 2009, p. 92). Closer scrutiny,
not remedial or supportive action, is the only intervention that is
contemplated as a consequence of an institution or its provision being
found to be at risk.5
explain that ‘when used well (risk management) can actively encourage
an institution to take on activities that have a higher level of risk
because the risks have been identified and are being well managed’
(HEFCE, 2011, para 16).
The definition is notable also for the way in which its use of the
future tense suggests that the identification and assessment of risk is an
act of prediction, inviting an analysis of the possibly causal relationship
between a condition or an event (the risk) and an adverse or beneficial
outcome. Rather than adopting a definition which implies a helpless
surrender to the ‘insecurities, uncertainties and loss of boundaries’ of
Ulrich Beck’s ‘risk regime’, risk management necessarily rests on the
premise that ‘“something can be done” to prevent misfortune’ (Beck,
1992, p. 70; Lupton, 1999, p. 3). In this sense, the risk manager sub-
scribes unashamedly to the positivist dictum, savoir pour prévoir, afin de
pouvoir (know in order to foresee, so as to be able to act).
The White Paper and the HEFCE consultation documents tend to
equate ‘risk’ with outcome: a high performing institution is ‘low risk’,
and one with little or no ‘track record’ is ‘high risk’. However, the FSA
and other organisations distinguish between the outcome or detriment
and the ‘risk’, which is something that has ‘the potential to cause harm’
(FSA, 2006). As a Royal Society study group put it, ‘detriment is a mea-
sure of the expected harm or loss associated with (the) adverse event’.7
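The study group’s wording implies the conventional expected-loss reading of ‘detriment’, which can be sketched as follows (the formalisation is mine, not the Royal Society’s or the FSA’s):

```latex
% A hedged sketch of the risk/detriment distinction:
% the 'risk' is the adverse event's potential to occur;
% the detriment weights that potential by the scale of the harm.
\[
  \text{detriment} \;=\; \Pr(\text{adverse event}) \times \text{harm if the event occurs}
\]
```

On this reading, an institution may be ‘high risk’ (a high probability of an adverse event) yet carry a modest detriment if the harm at stake is small, and vice versa.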
Assuming that QAA defines risk in the same way as the White Paper and
the Funding Council, the identification of an institution as high risk
would result in post hoc action to rehabilitate or perhaps penalise an
institution. The implication of a risk management approach, on the
other hand, is that intervention following an assessment of risk should
include preventative measures to avert the reputational damage or
under-performance that might otherwise occur if no action were to be
taken.
Recognising that risk derives from both the properties of an institution
and its external environment, I have distinguished elsewhere between
‘risk potential’ and ‘risk factors’. Risk factors, or the previously men-
tioned ‘systemic risks’, are the many and various events that could occur
in the future, and the identification and predictability of these events
is contingent on our knowledge of the particular environments within
which we operate. Such events simply happen, but whether and how
they impact on an institution is a function of ‘risk potential’. This term
describes certain conditions or qualities – strengths and weaknesses –
that are inherent in an institution and its provision. Risk potential is the
institution’s capacity to exploit opportunities or to counter the threats
Notes
1. Throughout this chapter I have used ‘regulation’ when referring to the
responsibilities of the Quality Assurance Agency, and I have reserved the
term ‘quality assurance’ to refer to the internal arrangements of universities
in securing the quality and standards of their programmes.
2. The reference here is to arrangements for the audit of institutions in England
and Northern Ireland.
3. For a discussion of this debate, see C. Raban and E. Turner (2005), pp. 26–9.
4. Because British universities are legally autonomous and hold their degree
awarding powers by Statute or by Royal Charter, any suggestion that they
should be subject to a system of accreditation would be very controversial.
Equally contentious was the suggestion that QAA should be responsible
for the assurance of academic standards since this had hitherto been
regarded as the inalienable responsibility of universities themselves.
5. For further discussion of the points made in this paragraph see Raban (2011).
6. This, indeed, was a tacit expectation when QAA audit teams made a judge-
ment of confidence ‘in the soundness of [an institution’s] present and
likely future management of the academic standards of its awards [and of]
the quality of learning opportunities available to students’ (QAA, 2009).
Confidence judgements are not a feature of the current review method for
England and Northern Ireland.
7. Royal Society (1983) Risk Assessment: A Study Group Report (London: The
Royal Society). Quoted in Adams (1995), p. 8.
8. The concept of risk potential is also discussed in C. Raban (2008).
9. For further discussion of the approaches described in this and the following
paragraphs, see C. Raban and E. Turner (2006) and (2005).
10. See also the Treasury Committee Written Evidence, Part 3, February 2009.
References
Adams, J. (1995) Risk (London: Routledge).
Beck, U. (1992) Risk Society: Towards a New Modernity (London: Sage).
Better Regulation Task Force (2002) Higher Education: Easing the Burden, Cabinet
Office, July.
BIS (2011) Higher Education: Students at the Heart of the System (London:
Department for Business, Innovation and Skills).
Browne, J. (2010) Securing a Sustainable Future for Higher Education, http://dera.ioe.
ac.uk/11444/1/10-1208-securing-sustainable-higher-education-browne-report.
pdf (accessed 22 September 2013).
Cabinet Office (2003) Lambert Review of Business-University Collaboration (London:
HMSO).
Collini, S. (2010) ‘Browne’s gamble’, London Review of Books, 32(21), 23–25.
EUA (2005) Developing an Internal Quality Culture in European Universities (Brussels:
European University Association).
EUA (2009) Improving Quality, Enhancing Creativity. Final report of the QAHECA
project (Brussels: European University Association).
FSA (2006) The FSA’s Risk-Assessment Framework (London: Financial Services
Authority).
FSA (2009) The Turner Review: A Regulatory Response to the Banking Crisis
(London: Financial Services Authority), http://www.fsa.gov.uk/pubs/other/
turner_review.pdf (accessed 22 September 2013)
Greaves, P. (2002) Address to the Higher Education Forum, September (London:
HEFCE).
Halsey, A.H. (1995) Decline of Donnish Dominion: The British Academic Profession
in the Twentieth Century (Oxford: Oxford University Press).
Hansard (2001) House of Lords Debate on Universities, Vol. 623, cc. 1467–98,
21 March.
HEFCE (2000) Accounts Direction to Higher Education Institutions, Circular letter
number 24/00: Bristol, November.
HEFCE (2001) Risk Management: A Good Practice Guide for Higher Education
Institutions (May 01/28), (Bristol: HEFCE).
HEFCE (2005) Risk Management in Higher Education. A guide to good prac-
tice prepared for HEFCE by PricewaterhouseCoopers, http://dera.ioe.
ac.uk/5600/1/05_11.pdf (accessed 22 September 2013).
Thomas, G. and Peim, N. (2011) ‘In ourselves we trust’, Times Higher Education,
14 July.
Times Higher Education (2012) ‘Elite embittered as HEFCE decides not to risk
calling time on audits’, 1 November.
Trow, M. (1996) ‘Trust, markets and accountability in higher education: a
comparative perspective’, Higher Education Policy, 9(4), 309–24.
Universities UK (2011) Higher Education in Facts and Figures, Summer (London:
Universities UK).
University and College Union (2012) 2012 Occupational Stress Survey, http://www.
ucu.org.uk/media/pdf/l/n/ucu_occstress12_hedemands_full (accessed 6 December 2013).
Walker, D. (2009) A Review of Corporate Governance in UK Banks and Other Financial
Industry Entities (London: HM Treasury).
Willetts, D. (2011) Address to the Universities UK Spring Conference, February.
Wright, S. (2002) ‘Enhancing the quality of teaching in universities: through
coercive managerialism or organisational democracy?’ in Jary, D. (ed.),
Benchmarking and Quality Management (Birmingham: C-SAP Publications),
pp. 115–42.
7
Risk Management: Implementation
Anthony McClaran
UK policy background
The Technical Consultation explicitly set out the policy objectives for
the proposed move to a more risk-based approach:
Sector consultation
Following the government’s response, the steps to implementation
moved forward with a detailed consultation with the UK higher educa-
tion sector in the summer of 2012, led by HEFCE, the Higher Education
Funding Council for England. QAA acted as expert adviser to HEFCE
(i) How would higher education providers’ engagement with the quality
assurance system vary in nature, frequency and/or intensity,
depending on their track record on quality assurance and the profile
of their provision?
(ii) How would higher education providers undergo review? For
instance, might a model be adopted of a ‘core’ institutional review
and additional institutional review ‘modules’ (on collaborative
provision, for example, if offered in their portfolios)?
(i) Developing a new operating framework that will set out the roles,
responsibilities, relationships and accountabilities of the various organ-
isations involved in the regulation of higher education in England.
(ii) Developing the successor to HEFCE’s Financial Memorandum,
which will reflect the changing landscape of higher education
funding, and the accountabilities of higher education providers.
(iii) Redesigning the data and information landscape – a project to
enhance the arrangements for the collection, sharing and dissemi-
nation of data and information about the higher education system.
(iv) Investigating constitutions and corporate forms – an analysis of
the changing corporate forms of higher education providers and
the implications of this for the interests of students and the wider
public, and the reputation of the UK’s higher education system.
Afterword
This chapter was presented at the A3ES & CIPES conference, ‘Recent
Trends in Quality Assurance’ in Porto, Portugal on 12 October 2012.
Subsequently, in late October 2012, HEFCE published the outcomes
of its consultation with the higher education sector in A Risk-Based
Approach to Quality Assurance: outcomes of consultation and next steps
(Higher Education Funding Council for England, 2012b).
There were 130 responses to the consultation, which showed wide
support on a range of key issues. In particular, respondents supported
the proposal to build on the existing method of institutional review
as the basis for a more risk-based approach to quality assurance, with
its clearer judgements, focus on risk and reduced bureaucratic burden
compared with previous methods. Alongside this was an emphasis from
respondents on ensuring that enhancement remains a core dimension
of English quality assurance, and continuing to involve students fully
in the quality assurance process as partners in assessing and improving
the quality of their own higher education.
There was also broad support for reducing unnecessary burden and
achieving better regulation by targeting QAA’s efforts where they are
most needed, and for increasing transparency about reviews and the
rolling review programme. Respondents also welcomed the proposal to
tailor external review to the individual circumstances of providers (as
opposed to a ‘one size fits all’ approach).
In summary, this would be a transparent, proportionate and more
risk-based approach to quality assurance that ensures that the interests
of students continue to be promoted and protected.
Some of the main outcomes of the consultation were:
(ii) The HEFCE report also proposes greater transparency through the
publication of a rolling programme of reviews on the QAA website.
This would clearly indicate when a provider’s next review is due to
take place.
(iii) Reviews will be more tailored to suit the circumstances of indivi-
dual providers (for instance, by adjusting the frequency, nature
and intensity of reviews). This will enable QAA to focus efforts
where they will have the most impact.
(iv) Under the new approach, there will be a single review visit and no
separate reviews of different types of provision at a single institu-
tion. For instance, there will no longer be a separate review of
collaborative provision. QAA will tailor the review to the institu-
tion’s provision, varying the number of days of the review visit and
number of reviewers as appropriate.
(v) There will be an end to mid-cycle review. Quality and standards
will be effectively safeguarded between reviews through QAA’s
Concerns Scheme and QAA will focus on further raising awareness
of the scheme, in particular through student organisations. In addi-
tion, there are other mechanisms (for example, action plans) which
follow up any action required by an institution after a review.
(vi) Students will continue to be at the heart of the process, in part by
keeping the review cycle to a maximum of six years, enabling their
input to be considered at least as frequently as it is in the current
cycle. The Student Written Submission will also continue to be a
central part of the review process. QAA will also continue to pro-
mote the role of students in quality assurance and enhancement
activities – in addition to its wider work of student engagement.
(vii) Many respondents supported streamlining the review activity of
QAA and professional, statutory and regulatory bodies (PSRBs).
However, it was acknowledged that PSRBs’ review processes and
those of the QAA do not produce comparable information, as
PSRBs focus on subject-level accreditation whilst QAA focuses on
institution-wide management of standards and quality. There is
also a lower level of student engagement with PSRB accreditation.
HEFCE has asked QAA to make further progress in this area, in
particular, through the further development of individual agree-
ments with PSRBs.
2013–14. It has been confirmed that this will also operate in Northern
Ireland. Higher Education Review will succeed two existing methods:
Institutional Review in England and Northern Ireland (IRENI) and
Review of College Higher Education (RCHE).
The overall aim of Higher Education Review remains to inform students
and the wider public whether a provider meets the expectations of the
higher education sector for:
References
BIS, Department for Business, Innovation & Skills (2011a) Students at the Heart
of the System (London: The Stationery Office). http://bis.gov.uk/assets/biscore/
higher-education/docs/h/11-944-higher-education-students-at-heart-of-
system.pdf (accessed 22 September 2013).
BIS, Department for Business, Innovation & Skills (2011b) A Fit-for-Purpose
Regulatory Framework for Higher Education: Technical Consultation (London:
The Stationery Office). http://www.bis.gov.uk/assets/biscore/higher-education/
docs/n/11-1114-new-regulatory-framework-for-higher-education-consultation.pdf
(accessed 22 September 2013).
BIS, Department for Business, Innovation & Skills (2012) Government Response
to Consultations on: Students at the Heart of the System: A New Fit-for-Purpose
Regulatory Framework for Higher Education (London: The Stationery Office).
http://www.bis.gov.uk/assets/biscore/higher-education/docs/g/12-890-government-response-students-and-regulatory-framework-higher-education.pdf
(accessed 22 September 2013).
Higher Education Funding Council for England (HEFCE) (2012a) A Risk-Based
Approach to Quality Assurance: Consultation. http://www.hefce.ac.uk/pubs/
year/2012/201211/ (accessed 22 September 2013).
Higher Education Funding Council for England (HEFCE) (2012b) A Risk-Based
Approach to Quality Assurance: Outcomes of Consultation and Next Steps. http://
www.hefce.ac.uk/pubs/year/2012/201227/ (accessed 22 September 2013).
QAA (2013) QAA consultation on Higher Education Review. http://www.
qaa.ac.uk/Newsroom/Consultations/Pages/Higher-Education-Review.aspx
(accessed 22 September 2013).
8
Quality Enhancement:
An Overview of Lessons
from the Scottish Experience
Murray Saunders
Introduction
Distinctiveness
The QEF aspired to make a clear break with the emphasis of previous
(assurance-based) quality approaches within the Scottish system and
still prevalent in other parts of the UK, and associated, in the eyes of
the HE sector at least, with the role of the Quality Assurance Agency
(QAA1). It would be a mistake, however, to imply an oppositional
The Scottish approach had traces of all three dimensions across and
within the institutions; thus the evaluations of the mechanisms suggest
they have had uneven effects. Overall, however, the combination of a
more developmental approach to institutional review, greater student
involvement, a focus on teaching and learning themes and responsive-
ness to feedback and experience has resulted in a step-change in the
way quality processes are understood and practised within the sector.
However, the significance of the step-change differs according to the
stake-holding group, as this overview of the evaluation of the policy
will show. Despite this caveat, and given the traditional and sometimes
fierce resistance to central initiatives in higher education within the UK,
particularly in the teaching and learning domain, the trajectory of the
QEF has broad legitimacy in the sector as a whole.
In terms of the critical differences between an enhancement- as
opposed to an assurance-led approach to quality processes, we have in
the Scottish case an interesting attempt to integrate legitimate sectorial
concerns with standards and cross institutional comparisons (via the
periodic external reviews ELIR)2 and the initiation of processes designed
to provide frameworks for action and resources for improvement and
development. It is the integrative approach, with an emphasis on deve-
lopment, which sets the case apart.
The evaluative research was conducted in two waves, the first from 2003
to 2006 and the second from 2007 to 2011. The focus was collabora-
tively derived with the SFC and involved eight national quantitative
surveys (students, student representatives, institutional student repre-
sentatives, middle managers, managers with a ‘quality brief’ and front
line lecturers). The database also included two waves of in-depth case
studies, structured interviews (of approximately 800 individuals includ-
ing national key informants) and the analysis of secondary data from all
20 universities in Scotland. The output from the research took the form
of eight reports to the SFC.3
When we look at the strategies the QEF has embodied, we can see
that there have been some ‘change theory’ themes running through the
approach. These themes are based on an understanding of the higher
education sector in Scotland, of the kind of ‘entity’ it was, and how
it might respond to the thrust of the broad approach to quality being
promoted.
Most importantly, unlike many policies or programmes, the QEF in
Scotland has had a built-in implementation reality that set it apart from
its international neighbours. The policy was an interesting hybrid of ideas
of and from the sector itself, from analogous experience elsewhere and a
good knowledge of the current research and evaluative literature. That is
to say, it drew on ideas and influences from far and wide, but there were
strong local influences that gave it a distinctive ‘Scottishness’. This means
that in any turf war over legitimation or credibility, the promoters of
policy could draw (and have drawn) attention to the fact that the main
architects were from the sector itself. This is not to say that there is such a
thing as a homogeneous higher education ‘sector’ in Scotland; there are
many and – as in any national university sector – rather contrasting expe-
riences and priorities, but the aspirations and interests in the approach
were, in an important sense, known and shared.
The most obvious strategy or change theory theme within the QEF
was to use existing expertise in Scotland, informed by international
experience, to create a Scottish solution. This characteristic has been an
important part of the ‘uniqueness’ of the QEF as a policy ‘instrument’
and has been a core dimension of the way in which the approach has
been ‘home-grown’, managed and developed. This enabled a familiarity,
an ownership and a legitimation that other forms of implementation
strategy might find hard to emulate. We term this a theory of ‘consen-
sual development’.
From the start of the QEF there was an awareness that disgruntlement
with quality assurance processes, which was quite common in
the UK (see Saunders, 2009, p. 93), and the wish to do something
different, were no guarantee that a feasible and better approach could be
created. However, in Scotland there was the priceless advantage that the
self-governing system comprised just 20 higher education institutions.
This made it possible to assemble a distinctively Scottish alternative to
current quality assurance practices. Since control of higher education
was located with the Scottish Executive (now the Scottish Government)
and since there was considerable interest amongst officials and agencies
in the creation of a distinctively Scottish approach to quality, the scene
was set for new thinking.
QEF brought to the fore the simple and powerful idea that the
purpose of quality systems in higher education is to improve student
experiences and, consequently, their learning. Distinctively, the QEF
has had a commitment to:
Shared visions
Futures
Notes
1. The QAA asserts on its website that ‘our job is to safeguard quality and
standards in UK universities and colleges, so that students have the best
possible learning experience’, http://www.qaa.ac.uk/Pages/default.aspx.
2. The indicators for which have been derived consensually by Scottish
Universities (through the Scottish Higher Education Enhancement Committee)
via a partnership with the QAA (Quality Assurance Agency) Scotland.
3. See http://www.sfc.ac.uk/reports_publications/reports_publications.aspx?
Search=QEF%20evaluation&Type=Reports%20and%20publications&
Sector=-1&From=dd/mm/yyyy&To=dd/mm/yyyy.
References
Fisher, G. (2009) ‘Exchange and Art: interdisciplinary learning’, in V. Bamber,
P. Trowler, M. Saunders and P. Knight (eds), Enhancing Learning, Teaching,
Introduction
While external quality assurance (EQA) can be seen as one of the most
visible results of European integration through the Bologna process in
the last few decades, new developments might question whether the
field of quality assurance is actually driven by the search for conver-
gence at the European level. This chapter identifies some current trends
in EQA, indicates possible implications, and discusses whether EQA is
at a critical stage in its developmental phase.
In the last few decades EQA has fulfilled various functions in higher
education (see Westerheijden et al., 1994). It has played an important
role in guarding quality when new providers enter higher education,
has provided useful information about quality to different stakehold-
ers in the sector, including governments and students, and not least
has played an important role in stimulating quality improvement in
education and training in general (Brennan and Shah, 2000). Of course,
the function of EQA in various European regions and countries differs
considerably (Rosa et al., 2006). In some, EQA has played an impor-
tant role as a regulative tool ensuring quality in deregulated and more
market-driven systems (Dill and Beerkens, 2010). In other regions and
countries, perhaps where institutions have already established their
own systems of quality assurance, EQA has played a role more related
to the development of these systems.
At the European level, a key ambition has still been that EQA should be
conducted in a way that would make regional and national differences
of lesser importance, and where the degree of convergence between
different EQA systems was sufficient to foster trust and mutual recogni-
tion within the European higher education area (Westerheijden, 2001).
developing of the ESG, for paving the way for the establishment of
systems for EQA in all European countries, and for the
establishment of new agencies with a specific responsibility for run-
ning such systems. With the later establishment of EQAR – a register
for quality assurance agencies operating in Europe – one could argue
that the European higher education sector is becoming more similar
to other sectors in society that have experienced increasing regulatory
attempts from the European level (see, for example, Levi-Faur, 2011). As
part of this development, one could argue that national governments
have lost power and influence domestically as new agencies have had
European standards to subscribe to and where they, as a consequence,
have become more autonomous from the governments that created
them. One could also argue that national governments have lost power
at a European level as increasing professionalisation and bureaucratisa-
tion of the whole field of EQA (see, for example, Stensaker, 2003), has
driven the ‘politics’ out of quality assurance discussions for the sake of
routines, checks and balances. The question is whether this trend will
continue, or whether we are witnessing signs of a changing context
surrounding EQA within Europe.
QA in an efficiency perspective
For national governments EQA is also an issue that can be analysed
from an efficiency perspective. Any governmental measures imply using
resources, time and energy to deal with various political issues (Dill and
Beerkens, 2010). From a governmental point of view the costs associated
with the development of new instruments must – perhaps especially
when resources are scarce – be related to the potential benefits the
instruments create. From this perspective, the relevant questions are
European Trends in Quality Assurance 139
QA in an accountability perspective
National governments do have interests beyond solving effectiveness
and efficiency issues, especially when such issues are tricky to solve,
and where there is a need for the government to demonstrate that it is
still on top of the situation (Fisher, 2004). Hence, in the audit society it
is not only those that are exposed to reform that must be accountable;
the same also goes for governments (Stensaker and Harvey, 2011). From
this perspective, reform can in itself be seen as a form of accountability.
For example, governmental changes in EQA could be caused by the
need to copy the practices and systems that are seen as innovative or
popular – regardless of whether these are effective or efficient (see also
Westerheijden et al., 2014). Change becomes a sign that something is
being done, and that those in power are taking their responsibilities
seriously.
The three perspectives are not mutually exclusive. One can imagine
combinations of policy initiatives in EQA that may improve
effectiveness, efficiency and even accountability at the
same time. What the three perspectives do have in common is a strong
link to change, and that absence of change would suggest that none of
the three perspectives are very relevant to explaining the development
of EQA in Europe. With this in mind, we now turn to the current
realities surrounding EQA in Europe.
the changes noted above (see, for example, ENQA, 2008, 2012). The
question one may ask is whether the governments that once established
the agencies are also holding improvement and enhancement high on
their agenda. Examples can be given that show a very diverse picture.
Perhaps the picture is even so diverse that one could argue that the cur-
rent changes in European EQA are threatening the level of convergence
needed to maintain the European Higher Education Area. At least,
one could argue that national agendas in quality assurance currently
are showing greater dynamism than those found at the European level,
and that the policy discussions on EQA are becoming more domestically
focused. The implications of the current developments are not easy to
identify. The various agendas and purposes driving the changes noted
will most likely imply much more diversity in European EQA. Within
this diversity, some scenarios can still be predicted concerning the
future roles of agencies.
Scenario one may describe a situation where agencies can address
current dynamics within the existing EQA paradigm, implying less
radical change mainly concerning methodology and on-going activi-
ties. The argument is that those agencies that are located in higher
education systems that can be characterised as ‘mature’ with respect to
their experience in quality assurance, but also those that are located in
countries where new policy demands are raised, are in a situation where
they need to demonstrate creativity and innovation as a response to
effectiveness-, efficiency- and accountability-driven agendas. The big
challenge for them is undoubtedly that they were originally designed
as a counterforce to the creativeness and dynamism of institutions,
programmes and educational offers. They were designed to create order,
system and trust through processes of standardisation. The Bologna
process and the existence of the ESG, the increasing networking within
quality assurance, and the growing collaboration across national
borders have in addition established so much consensus, norms and
‘tacit’ agreement as to how quality assurance should be conducted
and organised that a change toward innovation and creativity can be
difficult to achieve. There is a risk that too much consensus in the field
of EQA could hinder creativity and innovation. To succeed, the agencies
need to find a delicate balance between standardisation and innovation
where they must maintain the drive towards professionalisation and
standards in the area of quality assurance, but where they must be open
to more experimentation in how EQA is undertaken.
In practice, this means that agencies need to be more reflective in
their understanding and applications of some of the basic concepts
Conclusion
References
Brunsson, N., Jacobsson, B. and associates (2000) A World of Standards (Oxford:
Oxford University Press).
Brennan, J. and Shah, T. (2000) Managing Quality in Higher Education: An
International Perspective on Institutional Assessment and Change (Buckingham:
Open University Press).
D’Andrea, V.M. and Gosling, D. (2005) Improving Teaching and Learning in Higher
Education: A Whole Institutional Approach (Maidenhead: Open University Press).
Dill, D.D. and Beerkens, M. (2010) Public Policy for Academic Quality (Dordrecht:
Springer).
ENQA (2008) Quality Procedures in the European Higher Education Area and Beyond –
Second ENQA Survey (Helsinki: ENQA).
ENQA (2012) Quality Procedures in the European Higher Education Area and Beyond –
Third ENQA Survey (Brussels: ENQA).
Ewell, J. (2008) US Accreditation and the Future of Quality Assurance (Washington
DC: The Council for Higher Education Accreditation).
Fisher, E. (2004) ‘The European Union in the age of accountability’, Oxford
Journal of Legal Studies, 24(4), 495–515.
Greenwood, R., Raynard, M., Kodeih, F., Micelotta, E.R. and Lounsbury,
M. (2011) ‘Institutional complexity and organizational responses’, Academy of
Management Annals, 5(1), 317–71.
Six trends
Advocacy for liberal education for its intrinsic value, education for the
life of the mind or intellectual development, is less often part of the
national dialogue.
For accreditation, this utilitarian emphasis has meant that assur-
ing and improving quality now involves additional attention to job
placement rates and the relationship between the debt that students
incur to pay tuition and fees and their subsequent earnings. It means
that accreditors are taking a closer look at the number of credits that
students earn and the length of time to gain a degree in relation to
employment. This means more accreditor attention to the economic
development role of a college or university rather than its intellectual
development role.
The likely is not the inevitable. There are steps that accreditation and
higher education can take to achieve a balance between the trends and
the historically effective features and values of accreditation.
Recent Trends in US Accreditation 157
whether or not these newer approaches to higher education can and will
focus on intellectual development.
Internationalisation of higher education and quality assurance is
a development in which the US enthusiastically participates. Over
time, colleges, universities and accrediting organisations will develop
means to assure effective communication about quality, whether or
not structures such as qualifications frameworks or rankings emerge
in the US.
A more desirable future is one of balance that sustains the valuable features and core values of accreditation and quality review while
addressing the expectations of greater accountability and flexibility
inherent in the recent trends.
11
Quality Assurance in Latin America
María José Lemaitre
South and West Asia. North America and Western Europe are the only
regions of the world where growth is below average, but this can be
explained by the high coverage already achieved in those regions. Most
OECD countries show participation rates of more than 50 per cent for
a single age cohort, and participation rates are also increasing in other
countries, although at a slower pace. Enrolment in Latin America and
the Caribbean grew from 8.4 million students in 1990 to 21.8 million
in 2008 (Brunner and Ferrada, 2011).
Diversification of provision
Diversification has different faces: the emergence of new institution
types, the multiplication of educational offerings within institutions,
the expansion of private provision and the introduction of new modes
of delivery. Among these, the growth of non-university sectors is recognised by the OECD as one of the most significant structural changes in
recent times. This in part is the result of a more innovative response to
the increasingly diverse needs of the labour market, but is also the result
of regional development strategies for increasing access to tertiary edu-
cation, or for educating a larger proportion of students at a lower cost
through the introduction of short programmes. However, not all these
new programmes are offered in different institutions. In many cases,
provision is diversified within institutions; thus, traditional universi-
ties are expanding their range of programmes, including short cycle or
vocational programmes.
Private provision has also expanded, and some countries (such as Korea, Japan or Chile) have more than 70 per cent of their students enrolled in private institutions.
Finally, more flexible modes of delivery are emerging everywhere.
Distance learning, online delivery of standard courses in face-to-face
programmes, small seminars and interactive discussions, part-time
courses and module-based curricula, continuing education, and non-
degree courses are all new means for addressing the new needs and
demands of students and the labour market.
of these students are the first generation in their families to reach tertiary
education, and the lack of social networks to support them also poses
new challenges for tertiary education institutions. These students have
different learning needs, which means new curricular and pedagogical
requirements, as well as different learning environments, which must
take into account the different perspective these students bring to their
educational experience.
These and other challenges have led governments to see the develop-
ment of quality assurance processes as a good solution. The question
is whether they will do it mostly through the establishment of strong
measures of regulation and control (in what has been called hard power)
or whether they will work with quality assurance agencies to promote
processes that while providing an adequate measure of accountability,
also promote institutional responsibility and self-regulation (using a soft power approach).
quality assurance systems, the search for sub-regional arrangements and
the establishment of a regional network.
Sub-regional arrangements
It is interesting to note that there are two significant sub-regional
arrangements that seek to harmonise standards and procedures for qual-
ity assurance: that of the countries grouped under MERCOSUR – the
common market of the South – and the initiatives in Central America.
[Table omitted. Note: C: Compulsory; V: Voluntary; V*: Voluntary, except for teacher training and medicine. Source: Lemaitre and Mena, 2012.]
Central America
The Council for Higher Education in Central America (CSUCA), which
encompasses the public universities of the region, began work on the
development of a region-wide assessment process for higher education
with the support of German cooperation agencies in 1995. This process
developed basic standards for university programmes, trained hundreds
of academic staff and external reviewers, and is currently operating
mostly for the public universities in the region. Its main contribution
has been the introduction of a continuing concern for quality and its
regular assessment, and has been the basis for many further develop-
ments in the area of quality assurance.
Following this experience, in 2003 the Central American Council for
Accreditation (CCA) was established, with the dual role of assessing
[Figure omitted: diagram relating internal consistency, external consistency, academic field (disciplinary/professional), institutional management and decision-making processes, academic staff, socio-economic environment, and resources.]
with public quality assurance policies. They do not see any significant
improvements in teaching and learning, and in many cases they link
quality assurance practices to restrictions on innovation and complain
about a lack of consideration of institutional differentiation.
Leaders at the faculty and programme level, academic staff and
students, on the other hand, value highly the norms and practices of
quality assurance. They associate them with increased achievement of
stated goals and improvement in the quality of the service rendered by
the institution. In striking contrast with the views of academic vice
rectors, they report significant improvements in the teaching and learn-
ing process.
Analysis by dimension
Impact on the higher education system
All respondents recognised formal quality assurance processes as significant regulatory mechanisms;
although in several countries, participation in accreditation is volun-
tary, it is encouraged through the use of different incentives (access to
public resources, restrictions on public employment of graduates from
non accredited programmes). At the same time, many respondents
emphasised that incentives must be followed closely to reduce the risk
of unanticipated effects.
There is a clear consensus on the overall positive impact of quality assurance on the development of higher education: for instance, an increased concern with the development of valid and reliable information systems,
which still need to be fine-tuned in order to take into account the actual
needs of different stakeholders. The provision of public information
is seen by all respondents as the duty of the government, with little
consideration of the institutional responsibilities associated with this.
In part this may be related to the perceived risk of marketing and
publicity being presented as information, and to the need to regulate
the information provided to the public.
There is strong criticism of the application of homogeneous
standards to different types of institutions, thus not considering the
importance of different goals and objectives, functions, target groups or
general operational conditions. It is interesting to note that this criti-
cism is voiced simultaneously by public and private institutions.
Among the results of the study are a number of insights that can be
useful in the future development of quality assurance arrangements, for
governments, higher education institutions and quality assurance agen-
cies. These are summarised in the following paragraphs.
For universities
Information about institutional operation is important for improved
management. At the same time, there is a significant amount of work
involved in gathering, processing and updating data. Therefore, it seems
important to determine the kind of information needed to support
decision-making at the different institutional levels, and the cost-benefit
of its provision.
Improved management also includes the development of internal
quality management mechanisms: these involve linking assessment
results with programme and institutional planning, as well as embed-
ding the assessment of inputs, processes and outcomes into regular
institutional practice. In doing this, it becomes easier to recognise that
quality is a shared responsibility, and to involve internal stakeholders in
quality improvement processes.
Final comments
Latin American quality assurance schemes have been in place for two
decades, and have developed to respond to the needs of national higher
education systems, with different and relevant models. The experience
gathered during this time has been shared through the work of the
regional quality assurance network, RIACES, which has provided very
important opportunities for shared learning, for capacity building, and
most of all, for the development of a quality assurance community in
the region.
The overall view of universities in those countries with longer experi-
ence and more consolidated quality assurance processes is that these
processes have been effective, and have contributed significantly to the
recognition and improvement of increasingly complex and diversified
higher education systems.
At the same time, it is clear that the growth and development of
higher education, increased enrolment and institutional differentiation
pose new challenges that must be addressed by institutions and taken
into account in the revision of quality assurance processes.
The study that has been briefly reported in this chapter points to
significant lessons that can contribute to improved policy-making at
the national level; to changes in higher education institutions, both in
terms of new managerial arrangements and in teaching and learning
practices; and, most of all, to the need for updated and revised stand-
ards, procedures and practices of quality assurance agencies.
Higher education is a dynamic system – it cannot be served well
by quality assurance processes which are not ready to learn (and to
unlearn), to adapt and adjust to the changing needs of students, institu-
tions and society.
Notes
1. ARCU-Sur added Veterinary Medicine, Dentistry, Nursing and Architecture to
the three initial programmes.
2. In addition to the original six countries, Venezuela, Peru, Ecuador and
Colombia were also included in ARCU-Sur.
3. The full report (in Spanish) can be found at www.cinda.cl: Aseguramiento de
la Calidad en Iberoamerica, Educacion Superior Informe 2012. A summary in
English can be obtained from cinda@cinda.cl.
4. The sample included four universities in each country, except Mexico, where
six universities were included.
References
Brunner, J.J. and Ferrada, R. (2011) Educación Superior en Iberoamérica. Informe
2011 (Santiago de Chile: CINDA).
CINDA (2007) Educación Superior en Iberoamérica. Informe 2007 (Santiago de Chile:
CINDA).
Ewell, P.T. (2008) U.S. Accreditation and the Future of Quality Assurance: A CHEA
Tenth Anniversary Report (Washington, DC: Council for Higher Education
Accreditation).
Lemaitre, M.J. and Mena, R. (2012) ‘Aseguramiento de la calidad en América
Latina: tendencias y desafíos’, in M.J. Lemaitre and M.E. Zenteno (eds),
Aseguramiento de la Calidad en Iberoamérica. Informe 2012 (Santiago de Chile:
CINDA).
Lemaitre, M.J. and Zenteno, M.E. (eds) (2012) Aseguramiento de la Calidad en
Iberoamérica. Informe 2012 (Santiago de Chile: CINDA).
OECD (2008) Tertiary Education for the Knowledge Society, Vol. 1 (Paris: OECD).
Part IV
Quality Assurance: The Actors’
Perspectives on Recent Trends
12
The Academic Constituency
Maria João Rosa
Introduction1
This loss of trust has had obvious consequences for quality assurance.
Government and society no longer seem to trust HEIs’ capacity to
ensure adequate standards of quality, as seen in the movement from less
intrusive forms of quality assurance to accreditation (Amaral and Rosa,
2011). This can be seen as corresponding ‘to a change from a cycle of
trust and confidence in institutions into a cycle of suspicion’ (Rosa and
Amaral, 2012, p. 114).
Two other developments have contributed to the implementation of
‘harder’ forms of quality assurance. On one hand, the increasing use
by governments of market-like mechanisms as instruments of public
regulation (Dill et al., 2004) implies resorting to tools such as licensing,
accreditation and the public disclosure of the results of quality assess-
ment for consumer information (Smith, 2000). And at the level of the
European Commission steps are being taken to promote the develop-
ment of rankings and classification tools that are quite removed from
the academic endeavour. It remains to be seen if the quality enhance-
ment movement, another recent development taking place in European
higher education, which intends to devolve to HEIs the responsibility for
promoting education quality, will be capable of re-establishing societal
trust in institutions (Rosa and Amaral, 2012).
In Portugal all these trends are evident when one analyses the history
of quality assurance, which can be summarised in two major phases
(Rosa and Sarrico, 2012). The first (1993–2005) is marked by study
programmes’ assessment, mainly oriented towards quality improvement,
under the responsibility of entities representing HEIs (public and private
universities and polytechnics). An umbrella organisation – the Higher
Education Assessment National Council – coordinated the national
quality system and cooperated with those entities, being responsible
for the system’s meta-evaluation. The second phase, initiated in 2006
under the influence of European developments (namely, the Bologna
Declaration and compliance with the Standards and Guidelines for
Quality Assurance in the European Higher Education Area – ESG), is
characterised by the establishment of a system of assessing and accred-
iting study programmes and institutions (Law 38/2007) and of a new
and independent body for its coordination – the Higher Education
Assessment and Accreditation Agency.
The new system has been in operation since 2009 and accredita-
tion assumes a leading role within it as a way of ensuring that study
programmes and institutions accomplish the minimum requirements
for their official recognition. The new legal framework for quality
evaluation and accreditation also determines that institutions should
Academics also rail against the way they see quality assurance being
implemented, namely the administrative and cost burden they tend to associate with it, as well as the fact that it is time-consuming
(Laughton, 2003; Lomas, 2007; Newton, 2010; Papadimitriou et al.,
2008; Stensaker, 2008; Stensaker et al., 2011). They complain about the
heavy bureaucracy involved in quality assurance, the lack of time to deal
with its requirements and, inherently, the diversion of their attention
from the really important aspects of academic life, namely teaching and
research (Luke, 1997; Harvey, 2006; Newton, 2002).
Quality assurance’s core values also tend to be resisted by academics,
inducing non-compliance or even ‘sabotage’ of the process. To some
extent this can be explained by academics’ perception of the process as
being based on imposition and prescription and, thus, clashing with the
values that characterise academic culture, namely academic freedom
(Laughton, 2003; Lomas, 2007; Luke, 1997). Quality assurance is seen
as trying to grasp the ‘academic world’ through the language and ideo-
logy of managerialism and its business ethos, undermining academics’
‘privileged position through a new form of regulation’ (Bell and Taylor,
2005; Laughton, 2003, p. 317). It is also seen as altering the traditional
relationship between academics, inducing a situation where they relate
to each other more as ‘managers’ and ‘managed’ than as colleagues
(Bell and Taylor, 2005). These perceptions, especially evident among
academics not performing management tasks, often lead to the adop-
tion of instrumental and ritual strategies (conforming behaviours) to
‘keep the system running’ rather than truly to engage in it (Cartwright,
2007; Newton, 2000, 2002; Watty, 2006).
Furthermore, academics tend to be dissatisfied with quality assurance
procedures. These are seen as not entirely reliable, reductionist and inca-
pable of grasping the ‘essence’ of the educational process (Cartwright,
2007; Laughton, 2003; Westerheijden et al., 2007). Although this nega-
tive perception of quality assurance tends to dissipate whenever there is
the ‘impression that education is valued and rewarded’, ‘few have this
impression’ (Westerheijden et al., 2007, p. 307). Additionally, academics
also tend to show a lack of agreement on quality assurance’s capability
to induce improvements in their immediate working environment
(Watty, 2006).
Finally, academics’ resistance relates to two other issues. On the one
hand is the possibility of quality assessment results not being truthful (Laughton, 2003; Westerheijden et al., 2007), since they tend to be inflated (including by academics), thus giving an artificially favourable picture of the quality of a basic unit or institution; on the other hand is the
possibility that those results lead to an elitist bias within the higher
education system, with a tendency for the richest and oldest universities
to achieve a better position (Laughton, 2003).
But academics also adhere to quality assurance. This is especially true
in the case of processes and procedures directed more at institutions
as a whole, which are seen as less ‘burdensome and intrusive’ than
those directed at academics’ overall performance (Laughton, 2003,
p. 319). Academics, especially those assuming managerial roles, also
tend to agree with accreditation, seeing it as providing the opportunity
for institutions to reflect on their mission and purpose and ‘to join an
elite club’ (Bell and Taylor, 2005, p. 248). This occurs because accredita-
tion results can have an impact on institutions’ social image, playing a
preponderant role in students’ demand for and choice of an institution
or study programme.
Quality assurance is also seen by academics as enabling the develop-
ment of teaching and learning quality (namely educational provision
and curricula), hence benefiting students, as well as academic work
and decision-making processes (Huusko and Ursin, 2010; Kleijnen
et al., 2011). The improvement prompted by quality assurance on these
areas seems to be related to the fact that it enables ‘fair institutions’ as
its procedures ‘can expose flaws’ in institutional practices ‘promoted by
nepotism, patronage and gendered sponsorship’ (Morley, 2005, p. 423).
[Table omitted: academics' views on the goals of assessing the quality of higher education (number and percentage of academics per response, with mean and median). Note: N – number of answers. Answers collected on a five-point scale: (1) totally disagree; (3) neutral; (5) totally agree.]
Stensaker et al., 2011; Veiga et al., 2013). The same results were found
in the present study: it is possible to identify a number of quality
assessment objectives and purposes for which statistically significant
differences emerge between the answers of different groups of academics
(see Tables A.1 and A.2 in the appendix). These groups of academics
were defined according to their gender, type of institution, disciplinary
affiliation and previous experience with quality assurance activities.
Gender determines differences between responses to the goals and
purposes of higher education quality assessment for 2 out of the 5
proposed goals, as well as for 10 out of the 30 possible purposes. Most
of the questions for which differences have been identified relate to an
idea of quality assessment privileging the improvement and innovation
purposes, and in all cases female academics tend to show higher agreement. Possible reasons for the identified differences may lie, as proposed
by Cardoso and colleagues (2013), in an essence of quality assurance
that somehow replicates the social gender of women as caregivers
(Morley, 2005); in the fact that women, who generally have less power
in academia than men, may use quality assurance as a way to enhance
their rights and power (Luke, 1997; Morley, 2005); and in an idea of
quality assurance as a process capable of promoting fairer institutions,
namely by helping solve potentially harmful practices and promoting
equity (Luke, 1997; Morley, 2005).
Although it was not possible to find previous research dealing with exactly
the same type of respondents, both Stensaker and colleagues (2011)
and Rosa and colleagues (2006) highlight that institutional leaders and
managers usually have a more positive view of quality assurance pro-
cesses and mechanisms. Since these are typically the institutional actors
more involved with quality assurance, it seems that our results make
sense: previous experience with quality assurance may contribute to a
more optimistic view of it.
Institutional dynamic
This first group of factors that promote quality relates to aspects of the HEI's internal operation, including governance and management;
facilities, resources and services; and quality culture.
Governance and management includes factors related to the institu-
tion's leadership, such as its legitimacy and the fact of having been democratically elected, the capacity leaders and institutional managers
demonstrate for fulfilling their duties, or the institution’s internal
cohesion. Also referred to by academics were factors related to the insti-
tution’s governance system, namely the knowledge, skills and motiva-
tion towards quality assurance that governance and decision-making
bodies have, as well as the way an institution is internally organised.
Additional factors that promote quality and have been included in this
category include the management strategies and procedures defined for
the institution, including definition of goals, clarity, transparency and
the participation of staff in the institution’s functioning and internal
management decisions. Appreciation and recognition of academic staff
is the final issue put under governance and management; this refers to a
clear emphasis being placed on academics’ recruitment, promotion and
on the autonomy and academic freedom they hold.
The physical facilities and the resources the institution has (library,
scientific databases, technological and information resources), as well as
the internal organisation and the efficiency of the services it provides, are
also factors referred to by Portuguese academics as contributing to quality.
Institution’s mission
A second group of factors promoting quality at the level of basic units
relates to the core aspects and activities of HEIs’ mission, namely teach-
ing and learning and research. Teaching and learning encompasses the
factors referred to by academics that address the pedagogical student–
professor relation, such as the promotion of a good relationship between
teachers and their students or teaching staff commitment to and
support for students; the resources available to teaching and learning;
the way teaching and learning are organised within the institution, with
a special emphasis on the existence of autonomy in the management
of curricular unit programmes; and an emphasis on vocational training
in study programmes, namely in terms of internships.
The promotion of research quality and dissemination at the inter-
national level, the establishment of a strong link between teaching
and research, the integration of students in research activities and the
availability of resources, in terms of time and money, for research, are
all factors considered to promote quality that relate overall to the HEI’s
research mission.
Institution’s actors
The last group of factors that promote quality has to do with the institu-
tion’s actors (especially academic staff and students) and their relation-
ship. Relevance is given to the quality of the institution’s own actors
(academics, students and non-academic staff) and their interaction,
namely in terms of good performance, professionalism, involvement
and communication among all and a willingness to rise to new chal-
lenges. The high quality of incoming students or the actual students’
skills and competences are also referred to as factors that promote
quality.
Finally, reference is made to academics’ high qualifications, skills and
competences, as well as to their individual effort to improve them and
self-motivate; to academics’ teaching performance; and to academics’
Concluding remarks
Another effort that should be made by those responsible for the defini-
tion and development of quality assurance systems (both at system and
Acknowledgements
This research was supported by a grant from FCT – Fundação para a Ciência
e Tecnologia – under the framework of the project Perceptions of Higher
Education Institutions and Academics to Assessment and Accreditation (PTDC/
ESC/68884/2006). The author would like to thank all members of the project
team for their contribution to the results presented in this chapter, namely
Alberto Amaral, Amélia Veiga, Cláudia S. Sarrico, Cristina Sousa Santos, Diana
Dias and Sónia Cardoso.
[Tables A.1 and A.2 omitted; their notes marked results statistically significant at the 0.05 and 0.01 significance levels.]
Notes
1. This chapter is based on work conducted under the research project Perceptions
of Higher Education Institutions and Academics on Assessment and Accreditation
(PTDC/ESC/68884/2006). Part of the material used was already published in
Cardoso, Rosa and Santos (2013) and Rosa, Sarrico and Amaral (2012).
References
Amaral, A., Rosa, M.J. and Tavares, D.A. (2009) ‘Supra-national Accreditation,
Trust and Institutional Autonomy: Contrasting Developments of Accreditation
in Europe and the United States’, Higher Education Management and Policy,
21(3), 23–40.
Amaral, A. and Rosa, M.J. (2011) ‘Trans-national Accountability Initiatives: The
Case of the EUA Audits’, in B. Stensaker and L. Harvey (eds) Accountability
in Higher Education: Global Perspectives on Trust and Power (United Kingdom:
Routledge), pp. 203–20.
Amaral, A., Rosa, M.J. and Fonseca, M. (2013) ‘The Portuguese Case: Can
Institutions Move to Quality Enhancement?’, in R. Land and G. Gordon (eds)
Enhancing Quality in Higher Education: International Perspectives (London and
New York: Routledge), pp. 141–52.
Becher, T. and Trowler, P. (2001) Academic Tribes and Territories: Intellectual Enquiry
and the Cultures of Disciplines (Buckingham: Society for Research into Higher
Education and Open University Press).
Bell, E. and Taylor, S. (2005) ‘Joining the Club: The Ideology of Quality and
Business School Badging’, Studies in Higher Education, 30(3), 239–55.
Cardoso, S., Rosa, M.J. and Santos, C.S. (2013) ‘Different Academics’ Characteristics,
Different Perceptions on Quality Assessment?’, Quality Assurance in Education,
21(1), 96–117.
Cartwright, M. (2007) ‘The Rhetoric and Reality of “Quality” in Higher
Education – an Investigation into Staff Perceptions of Quality in Post-1992
Universities’, Quality Assurance in Education, 15(3), 287–301.
Clark, B. (1983) The Higher Education System: Academic Organisation in Cross-
national Perspective (Berkeley, CA: University of California Press).
Dill, D., Teixeira, P., Jongbloed, B. and Amaral, A. (2004) ‘Conclusion’, in
P. Teixeira, B. Jongbloed, D. Dill and A. Amaral (eds) Markets in Higher Education:
Rhetoric or Reality? (Dordrecht: Kluwer Academic Publishers), pp. 327–52.
Filippakou, O. and Tapper, T. (2008) ‘Quality Assurance and Quality Enhancement
in Higher Education: Contested Territories?’, Higher Education Quarterly, 62(1–2),
84–100.
GPEARI/MCTES (2010) Inquérito ao Registo Biográfico de Docentes do Ensino Superior
(REBIDES), www.pordata.pt (accessed 4 January 2011).
Halsey, A.H. (1992) Decline of Donnish Dominion: The British Academic Professions in the Twentieth Century (Oxford: Clarendon Press).
Harvey, L. (2006) ‘Impact of Quality Assurance: Overview of a Discussion
between Representatives of External Quality Assurance Agencies’, Quality in
Higher Education, 12(3), 287–90.
Harvey, L. (2009) A Critical Analysis of Quality Culture, http://www.inqaahe.org/
admin/files/assets/subsites/1/documenten/1241773373_16-harvey-a-critical-
analysis-of-qualityculture.pdf (accessed September 2010).
equal partners (ESU, 2012; see Figure 13.1). Comparing the data from
2009 and 2012 shows that the number of countries with no student
participation or with very little participation has decreased among
the reported countries; on the other hand the number of countries
where students are equal partners in quality assurance has increased
significantly (ESU, 2007, 2009).
More than seven years have passed since ministers adopted the ESG
in Bergen (Bergen Communiqué, 2005). Bologna with Student Eyes 2012
testifies further to the improvement of the level of awareness about
the ESG among NUS. The share of respondents reporting that they know the ESG in detail rose from 33 per cent in 2007 to 63 per cent in 2009 and 77 per cent in 2012 (ESU, 2007, 2009). All of the unions declare
that they support the ESG in general; nevertheless the ESU MAP-ESG
consultation report (ESU, 2012b) concluded that several improvements
must take place, including in the phrasing and meaning of the QA terminology used in the ESG, which leaves room for ambiguous interpretation
(ESU, 2012b).
[Bar chart: U-Map classification — 4, 8, 10, 7; U-Multirank ranking — 4, 11, 12, 3]
Figure 13.2 Support of the national students' unions for national and European transparency tools (Bologna with Student Eyes, 2012)
Recent Quality Assurance Trends in Students’ Views 213
Despite many efforts and the recognition of the role students have in
QA in Europe, in practice students are not often asked to present their
views on higher education quality. Sometimes this is because students
are not able to articulate their views due to a lack of basic information
about their study programme, expected learning outcomes or existing
QA mechanisms, let alone about future QA developments. That is why
ESU has launched its QUEST project, which aims to identify students' views on quality through research activities. The project focuses on exploring students' essential concerns about the quality of education in Europe and will provide information and means for students themselves to influence quality enhancement and assurance.
Using a survey, the project looks at what students perceive as important
aspects of quality and what they see as effective ways of bringing this
information to them. The survey was preceded by desk research and
national site visits to different countries to look for good and interesting
practice examples of students at the centre of quality enhancement
and assurance. The project seeks to identify what information students think it is important that HEIs provide to them. QUEST will therefore help
Conclusions
References
Bergen Communiqué (2005) The European Higher Education Area: Achieving the
Goals. http://www.ond.vlaanderen.be/hogeronderwijs/bologna/documents/
MDC/050520_Bergen_Communique1.pdf (accessed 22 September 2013).
Introduction
Observations from the Agencies’ Perspective 217
compliance with the ESG has also been a precondition for being listed
in the European Quality Assurance Register for Higher Education. For
many agencies this meant not only that they ‘had to take their own
medicine’, but also that they found themselves in a core position as
drivers to implement the ESG at the institutional level. By applying
ESG part II the agencies implicitly requested the higher education
institutions to set up internal quality assurance procedures or modify
existing procedures in accordance with ESG part I. Having quality assurance agencies as drivers of implementing structural and procedural reforms which the European representation of the HEIs had itself drafted and adopted is obviously not a healthy approach, and agencies
were not happy about being given this role. Another external driver
for the implementation of the ESG was the stocktaking exercise of the
Bologna Follow-up Group in preparation for the ministerial conferences
in Leuven/Louvain-la-Neuve in 2009 and Bucharest in 2012, when the state of development in quality assurance, mainly regarding the ESG, was on the list of items for which countries aimed at 'green traffic lights'
(Rauhvargers et al., 2009; Education, Audiovisual and Culture Executive
Agency, 2012).
In conclusion, the underlying trend in quality assurance in the EHEA
in recent years was the implementation of the ESG at the institutional
and agency levels. This means applying the stakeholder approach to the quality of higher education and implementing shared principles such as the primary responsibility of HEIs for quality assurance; the independence of quality assurance agencies; stakeholder and, in particular, student involvement; and an orientation towards the enhancement of quality assurance.
Identifying the application of the ESG as an underlying trend does
not automatically mean that trends for applying certain approaches to quality assurance, such as accreditation or audit, can also be observed at the programme or institutional level. The ESG do not provide political
decision makers at the national level with a blueprint for designing
quality assurance systems or procedures. On the contrary: the authors
of the ESG deliberately refused to define procedures although they were
asked to do so by the ministers. The restriction to general principles
without prescriptive procedural rules for the design of the procedures
in detail gave leeway to national authorities to design their respective
quality assurance regimes, and several ENQA surveys about the developments in external quality assurance in the last ten years clearly show
that governments make use of this leeway. All the well-known proce-
dures, such as programme and institutional accreditation, programme
Even the ESG seem to give hints that such a development would be
almost natural. Its second part begins with the statement that ‘external
quality assurance procedures should take into account the effectiveness
of the internal quality assurance processes’. The guideline attached to
this standard points to the expected benefit for the higher education
institutions from such an approach:
Based on the findings so far, one can conclude that, apart from the convergence of quality assurance in the EHEA based on the implementation
of the ESG and a greater emphasis on learning outcomes, no clear trends
can be observed at the procedural level, at least not yet. In this chapter we try to give two explanations as to why this 'colourful picture' of quality assurance should not be surprising, although there are good reasons for expecting certain trends, as mentioned previously.
The first explanation is based on the well-known ‘European
Implementation Dilemma’ (Serrano-Velarde and Hopbach, 2007, p. 37).
For quality assurance the same is true as for the other reform procedures
within the frame of the Bologna Process: the ministerial agreements are
not legally binding and neither does the EHEA provide a common legal
framework. Hence all reforms have to be ‘translated’ into national and
institutional policy and procedures. This not only means that national legal frameworks and national political agendas have to be taken into account, but also, more fundamentally, so do national authority and prerogative. The ESG clearly acknowledge this and state: 'The standards . . . do
not attempt to provide detailed guidance about what should be examined or how quality assurance activities should be conducted. Those are
matters of national autonomy, although the exchange of information
amongst agencies and authorities is already leading to the emergence
of convergent elements’ (ENQA, 2009a, p. 14). Hence, national frameworks were and are critical for applying the ESG to external quality assurance (ENQA, 2009b, p. 3). Experience indicates that being in accordance
with national quality assurance policies and the priorities of its main
actors is particularly key for the successful implementation of the ESG;
so too is whether they fit into the national legal setting and cultural
traditions.
At the heart of all quality assurance activities are the twin purposes
of accountability and enhancement. Taken together, these create trust
in the higher education institution’s performance. Quality assurance
and quality enhancement are inter-related and describe a cycle that
allows an HEI to assure itself of the quality of its activities and to take
opportunities for continuous improvement. Such activities support
the development of a quality culture that is embraced by all: from the
students and academic staff to the institutional leadership and management. . . . The term ‘quality assurance’ is used in this document to
describe all activities within the continuous improvement cycle (that
is, assurance and enhancement activities).3
Notes
1. For case studies see Bernhard (2012).
2. See the report on the New Approach to Quality Assurance for 2011–2014:
http://www.hsv.se/download/18.328ff76512e968468bc80004249/1103R-
quality-evaluation-system-2011-2014.pdf (accessed 27 February 2013).
3. Unpublished draft of the revised ESG.
References
Bernhard, A. (2012) Quality Assurance in an International Higher Education Area: A
Case Study Approach and Comparative Analysis (Wiesbaden: Springer).
Brown, R. (2010) ‘The Current Brouhaha about Standards in England’, Quality in Higher Education, 16(2), 129–38.
Costes, N., Crozier, F., Cullen, P., Grifoll, J., Harris, N., Helle, E., Hopbach, A., Kekalainen, H., Knezevic, B., Sits, T. and Sohm, K. (2008) Quality Procedures in the European Higher Education Area and Beyond: Second ENQA Survey (ENQA Occasional Papers 14) (Helsinki: ENQA).
Costes, N., Hopbach, A., Kekalainen, H., van Ijperen, R. and Walsh, P. (2011) Quality Assurance and Transparency Tools, p. 12, http://www.enqa.eu/pubs_workshop.lasso (accessed 27 February 2013).
Department for Business, Innovation and Skills (ed.) (2011) Higher Education: Students at the Heart of the System, http://www.hepi.ac.uk/167-1987/Higher-Education-Students-at-the-Heart-of-the-System-An-Analysis-of-the-Higher-Education-White-Paper-.html (accessed 27 February 2013).
Education, Audiovisual and Culture Executive Agency (2012) The European Higher Education Area in 2012: Bologna Process Implementation Report (Brussels: EACEA).
ENQA (2009a) Standards and Guidelines for Quality Assurance in the European Higher
Education Area, 3rd edition (Helsinki: ENQA).
ENQA (2009b) ENQA position paper on quality assurance in the EHEA in view of the
Leuven meeting of ministers responsible for higher education of 28–29 April 2009,
http://www.enqa.eu/files/ENQA_position_paper%20%283%29.pdf (accessed
27 February 2013).
Harvey, L. (2006) ‘Understanding Quality’, in E. Froment, J. Kohler, L. Purser and
L. Wilson (eds), EUA Bologna Handbook: Making Bologna Work (Berlin: RAABE),
B4.1-1, pp. 2–26.
Hazelkorn, E. (2009) ‘The Emperor Has No Clothes? Rankings and the Shift from
Quality Assurance to World-Class Excellence’, in L. Bollaert, B. Carapinha,
B. Curvale, L. Harvey, E. Helle, H. Toft Jensen, T. Loukkola, B. Maguire,
B. Michalk, O. Oye and A. Sursock (eds), Trends in Quality Assurance:
A Selection of Papers from the 3rd Quality Assurance Forum (Brussels: European
University Association), pp. 10–18.
Hopbach, A. (2012) ‘External Quality Assurance between European Consensus
and National Agendas’, in A. Curaj, P. Scott, L. Vlasceanu and L. Wilson (eds),
European Higher Education at the Crossroads: Between the Bologna Process and
National Reforms, Vol. 1 (Dordrecht: Springer), pp. 267–85.
Kristensen, B. (2010) ‘Has External Quality Assurance Actually Improved Quality
in Higher Education over the Course of 20 Years of the “Quality Revolution”?’
Quality in Higher Education, 16(2), 153–58.
Loukkola, T. (2012) ‘A Snapshot on the Internal Quality Assurance in EHEA’, in
A. Curaj, P. Scott, L. Vlasceanu and L. Wilson (eds), European Higher Education
at the Crossroads: Between the Bologna Process and National Reforms, Vol. 1
(Dordrecht: Springer), pp. 303–16.
Rauhvargers, A., Deane, C. and Pawels, W. (2009) Bologna Process Stocktaking
Report 2009. Report from working groups appointed by the Bologna Follow-up
Group to the Ministerial Conference in Leuven/Louvain-la-Neuve 28–29 April
2009, Brussels.
Reichert, S. (2009) Institutional Diversity in European Higher Education: Tensions
and Challenges for Policy Makers and Institutional Leaders (Brussels: European
University Association).
Serrano-Velarde, K. and Hopbach, A. (2007) ‘From Transnational Co-operation to
National Implementation’, in Hochschulrektorenkonferenz (ed.), The Quality
Assurance System for Higher Education at European and National Level. Bologna
Seminar Berlin, 15–16 February 2007, pp. 29–62.
Sursock, A. and Smidt, H. (2010) Trends 2010: A Decade of Change in European
Higher Education (Brussels: European University Association).
Trow, M. (1996) ‘Trust, Markets and Accountability in Higher Education’, Higher Education Policy, 9(4), 309–24.
Van Vught, F. (1994) ‘Intrinsic and Extrinsic Aspects of Quality Assessment in
Higher Education’, in D. Westerheijden, J. Brennan and P.A.M. Maassen (eds),
Changing Contexts of Quality Assessment: Recent Trends in Western European
Higher Education (Utrecht: Lemma B.V.), pp. 31–50.
Westerheijden, D., Beerkens, E., Cremonini, L., Huisman, J., Kehm, B., Kovac,
A., Lazetic, P., McCoshan, A., Mozuraityte, N., Souto-Otero, M., de Weert, E.,
White, J. and Yagci, Y. (2010) The Bologna Process Independent Assessment: The
First Decade of Working on the European Higher Education Area (Utrecht: Cheps,
Incher, Ecotec).
Part V
Conclusion
15
The Swiftly Moving Frontiers of
Quality Assurance
Alberto Amaral and Maria João Rosa
And in Portugal:
the final reports . . . very seldom offer clear basis for drastic decisions. . . .
the Minister has publicly complained . . . that the conclusions of the
reports of quality evaluation agencies were quite obscure. (Amaral
and Rosa, 2004, pp. 415–16)
The yearnings of the Prince may also endanger what so far has been
a major characteristic of European quality assurance systems, namely
their autonomy from both governmental and institutional interfer-
ence. Stensaker argues in this volume that that independence may be
considered ‘a potential hindrance for effective national policy agendas’
A general conclusion from the debates undertaken in this book and the
conference from which it arose is the observation that quality assurance is
Rankings
Ranking is one of the new instrumentalities being developed with the
blessing of the European Ministers and the European Commission
(Amaral, present volume). In April 2009, at the Leuven ministerial con-
ference on the Bologna process, European ministers in charge of higher
education endorsed the use of ‘multidimensional transparency tools’ to
identify and compare the relative strengths of higher education systems
and their institutions (Leuven communiqué, 2009). ‘Multidimensional
transparency tools’ is just a ‘weasel word’ (Amaral and Neave, 2009)
for rankings, and students present at Leuven strongly opposed their
implementation. As described in Ivanova’s chapter, national unions
of students believe classifications of higher education institutions may
become a double-edged sword, as they are likely to open the way for the Saint Matthew effect (‘For unto every one that hath shall be given, and he shall have abundance: but from him that hath not shall be taken away even that which he hath’). This theme is also approached in
Westerheijden’s chapter.
This chapter is dedicated to describing the new tools U-Map and
U-Multirank, and to explaining their rationalities. These tools aim to
eliminate some of the flaws of traditional ranking systems by comparing only institutions similar in their missions and structures while
addressing a number of their most important dimensions (education,
research, knowledge transfer, and regional and international aspects).
As Westerheijden reports, these tools are apparently already having a positive influence on the current leaders of the ranking business, who have adapted their methodologies to try to eliminate some of the revealed flaws.
Learning outcomes
The OECD has tried to implement a different and innovative approach
based on measuring learning outcomes (Dias and Amaral, present
volume). This ambitious project, very demanding in human and financial resources, was initiated as an attempt to create a PISA-like system
for higher education. Indeed, OECD’s capacity to shape expert opinion
may in part be attributed to the regular publication of cross-national
and comparative educational statistics and indicators, since the organi-
sation lacks both financial clout and legislative capacity to coerce its
member countries.
OECD proposed developing a method ‘to assess learning outcomes
on an international scale by creating measures that would be valid for
Risk management
Risk management has attracted considerable interest and is referred to by a number of contributors to this book (Amaral, Neave, Stensaker and Hopbach), as well as being the subject of two dedicated chapters (those by McClaran and Raban). Neave refers to the Swedish system, where a risk-based approach is used as an ‘alert’ system that allows external examinations to focus only on those institutions that show obvious difficulties (Högskoleverket, 2005). This coincides with Stensaker’s opinion that
the risk-based approach is a procedure for identifying programmes or
institutions at risk. Hopbach criticises risk-based approaches for placing
less emphasis upon the developmental dimension of quality assurance.
Quality enhancement
So far, only the quality enhancement approach seems to be trying to
restore trust in institutions, although it is not clear if it will succeed.
In the UK, Australia, New Zealand and the United States, experiments
with this approach may be seen as universities seeking to regain trust
by reasserting that quality remains their major responsibility whilst
the role of external agencies should be confined to quality audit. From
this perspective, quality enhancement repatriates responsibility for the quality of learning processes to the institution. External oversight may thus come to rely more on institutional audit and less on such intrusive forms of quality assessment as programme-level accreditation,
endorsing a flexible, negotiated model of evaluation that by definition
is non-mandatory, and is shaped by those participating in the acts of
teaching and learning.
The theme of quality enhancement has also attracted a large
number of authors (see the chapters by Amaral, Neave, McClaran,
Raban, Stensaker, Rosa, Ivanova and Hopbach) and the book includes
one chapter dedicated to this subject by Saunders. Neave sees quality
enhancement as a third phase in the evolution of the evaluative state,
increasing accountability to students. Rosa argues that Portuguese
academics show a preference for the characteristics of quality enhancement, rather than for accreditation systems or rankings, an observation
that is corroborated by other authors (Laughton, 2003; Newton, 2000;
Saunders, present volume). Raban holds a rather sceptical view on the
virtuous nature of quality enhancement, questioning whether it reflects
Quality agencies
Quality agencies are an important stakeholder, and in Europe their representative organisation has played a relevant political role, namely in defining the ESG and in setting up EQAR. Hopbach’s chapter refers to
the fact that quality assurance procedures have remained more or less
stable over the last few decades, although a number of different purposes have been added. This raises the question of defining, in the future, what the main purpose of the quality assurance process is and what the most appropriate procedure for it is. This question is also addressed
in Lemaitre’s chapter, which refers to the need for updated and revised
standards, procedures and practices of quality assurance agencies, and is
echoed by that of Stensaker and Eaton, who stress the need for innovation in external quality assurance and accreditation.
Agencies also have to deal with the problem of declining trust in the
positive and effective impact of their activities. This is a problem clearly
raised in Eaton’s chapter, which considers the dangers of increased government control and the threat to the core values that accompany accreditation resulting from the current focus on the utilitarian in higher education. This recalls Cardinal Newman (1996), who was fiercely against a utilitarian concept of higher education that ignored the true and adequate objectives of higher education, including intellectual training and the development of a pure and clear atmosphere of thought.
the lesson that we might draw is that anyone wishing to import into
the academic domain a commercially derived approach to quality
management must respect the sensitivities of staff and the realities of university life if this approach is to have an impact beyond
those parts of our institutions that are responsible for their corporate
functions. (Raban et al., 2005, p. 54)
That is why both ‘trust – building on staff’s professional values and their
aspirations – and dialogic accountability are themselves preconditions
for enhancement, risk assessment and the effective local management
of risk’ (Raban et al., 2005, p. 50).
Martin Trow argues that claims to professional and personal responsibility ‘were the basis on which academics in elite colleges and universities in all countries formerly escaped most formal external accountability for their work as teachers and scholars’ (Trow, 1996).
However, universities have rested too long on their claims for the specialism of their ‘unique’ attributes while their environment has changed dramatically. Today society no longer understands university attributes such as ‘academic freedom, the teaching and modelling of civic communities marked by civil discourse, dispassionate enquiry and community
References
Alexander, L. (2008) United States Senate, press release, 30 January 2008.
Amaral, A. (2008) ‘Quality Assurance: Role, Legitimacy, Responsibilities and Means of Public Authorities’, in L. Weber and Dolgova-Dreyer (eds), The Legitimacy of Quality Assurance in Higher Education (Strasbourg: Council of Europe Higher Education Series No. 9), pp. 31–47.
Amaral, A. and Neave, G. (2009) ‘On Bologna, Weasels and Creeping
Competence’, in A. Amaral, G. Neave, C. Musselin and P.A.M. Maassen (eds),
European Integration and Governance of Higher Education and Research (Dordrecht:
Springer), pp. 281–99.
Amaral, A. and Rosa, M.J. (2004) ‘Portugal: Professional and Academic Accreditation – The Impossible Marriage?’, in S. Schwarz and D. Westerheijden (eds), Accreditation and Evaluation in the European Higher Education Area (Dordrecht: Kluwer Academic Press), pp. 127–57.
Biagi, M. (2000) ‘The Impact of European Employment Strategy on the Role
of Labour Law and Industrial Relations’, International Journal of Comparative
Labour Law and Industrial Relations, 16(2), 155–73.
Birnbaum, R. (2001) Management Fads in Higher Education (San Francisco:
Jossey-Bass).
BIS (2011) Higher Education: Students at the Heart of the System (London: Stationery
Office).
Dehousse, R. (2002) ‘The Open Method of Coordination: A New Policy
Paradigm?’ Paper presented at the First Pan-European Conference on European
Union Politics, The Politics of European Integration: Academic Acquis and Future
Challenges, Bordeaux, 26–28 September 2002.
Eaton, J.S. (2007) ‘Institutions, Accreditors, and the Federal Government: Redefining Their Appropriate Relationship’, Change, 39(5), 16–23.
European Commission (2009) Report on Progress in Quality Assurance in Higher
Education (COM (2009) 487 final) (Brussels: European Commission).
Futures Project (2001) Final Report of ‘Privileges Lost, Responsibilities Gained:
Reconstructing Higher Education’, A Global Symposium in the Future of Higher
Education, New York, Columbia University Teachers College, 14–15 June.
Index
European Student Information Bureau (ESIB), 69, 70, 207
European Students’ Union (ESU), 18, 207–8, 213, 218
European Association of Institutions in Higher Education (EURASHE), 18–9, 69, 70, 218
Evaluative State, 1–4, 13, 32–5, 37–46, 233, 241
feasibility study, 4, 5, 19, 21–2, 27, 66–7, 75–84, 239
  see also AHELO
Framework for Qualifications of the European Higher Education Area (FQ-EHEA), 68
generic skills, 4, 21, 77–9, 81–3
Group of National Experts (GNE), 80
governance, 39, 99, 174, 197, 214, 216, 247
  approaches to, 73, 245
  bodies, 16, 194, 197, 200
  corporate, 24, 94
  institutional, 163, 244
  instrumentality of, 71, 72
  (linked to NPM), 23, 26
  (linked to quality assurance), 137, 141–2
  strategies, 163, 244
Higher Education Funding Council for England (HEFCE), 24, 112–114
  consultation, 96, 108–10
  publications from the, 5, 89, 113–14
improvement, 67, 98–9, 117, 122, 130, 138, 172, 245
  continuous, 8, 109, 168, 183, 224
  in teaching and learning, 172, 193
  of higher education institutions, 66, 189
  of higher education systems, 7, 176
  of student experience, 66, 77, 118
  of student involvement, 8, 109, 207, 210, 214
  purpose of, 8, 14, 22, 27, 111, 142–3, 152, 165, 175, 182–3, 189, 193–5, 199, 237, 243, 246
  quality, 26–7, 135, 156, 172, 175, 182–3, 188, 198–9
  self-, 84, 184
indicators, 19, 28, 37, 40, 42, 56, 58–9, 141, 154, 162, 173–4, 176, 236, 246
  performance, 3, 16, 18, 42, 59, 62
  U-Map, 59, 61, 63
  U-Multirank, 62–3
  educational, 64, 72, 238
  see also accreditation
innovation, 53, 101, 119, 129, 144, 150, 153, 155–8, 246–7
  and creativity, 45, 143, 146
  change and, 6, 7, 24, 241
  EQA, 7, 142–3, 147, 172, 244
  purpose of, 26, 189, 193, 195, 200, 243
  quality enhancement and, 24, 99
Latin America, 3, 7, 25, 160–1, 164, 170, 237, 244
  countries, 165, 167–9
  quality assurance, 165, 176
  US and, 1, 6
leadership, 21, 54, 77
  academic, 7, 150, 156–8
  institutional, 47, 145, 155–6, 163, 197, 224
learning outcomes, 68–70, 83, 140, 144, 173, 213, 219, 236, 245, 248
  assessment of, 4, 21, 72–3, 78, 90, 238–9, 245
  Bologna and, 67–8, 71, 74
  cognitive, 20, 74
  ESG, 22, 69, 223
  implementation of, 70–1
  intended, 22, 70, 219, 223, 245
  measurement of, 1, 4, 21, 27, 67, 72, 74–5, 77–8, 236, 238
  qualification framework, 69, 71
  standardised, 20, 74
  student, 22, 74, 77–9, 81–2
licensing, 14, 16, 165, 182
  see also accreditation
ministerial conference (and communiqué)
  Leuven, 19, 26, 69, 211, 220, 238
  London, 2, 18, 69, 70
mapping, 19, 40, 139, 212
markets, 16, 17, 22, 89, 91, 146, 236
Multiple Choice Question (MCQ), 82
Mercado Común del Sur (MERCOSUR), 165, 167
Massive Open Online Courses (MOOCs), 26, 153–4, 244
motivation, 84, 189, 193–4, 196–7, 227
multidimensional, 4, 19, 27, 53, 58–9, 61, 64, 238
neo liberal, 18, 34
neo liberalism, 38–41, 71
New Public Management, 13, 15, 22–3, 27, 34, 39–41, 138, 181, 233
Organisation for Economic Co-operation and Development (OECD), 1, 4, 20–2, 27–8, 66–7, 71–8, 80–3, 160–1, 163, 238–9, 245
performance, 19, 34, 40, 43, 53, 55, 73–4, 79, 100, 162, 186, 225, 227, 244
  (performance-based) funding, 162, 212, 226–7, 229
  expected, 40, 42
  institutional, 4, 46, 94, 150, 157, 194, 224, 228
  research, 199
  teaching, 198
  see also performances
performances, 57–9, 61–2
Performance Indicators of School Achievement (PISA), 28, 72–3, 76, 213, 238–9
Quality Assurance Agency (QAA), 6, 22–5, 88–90, 92–7, 100–2, 106–15, 119, 140, 222, 240
  quality code, 95
Quality Enhancement Framework (QEF), 24, 117–23, 126–30, 242
qualification framework, 69, 70, 140, 245
quality
  assessment, 1, 3, 8, 13–18, 23, 25, 27, 38–9, 42, 90, 93, 181–9, 193–6, 199, 200, 222, 234, 241, 243, 245–247
  assurance, 1–9, 13–15, 22–3, 25–8, 41–2, 44–5, 54, 66–7, 70, 89–97, 99, 100, 107, 109–11, 113–14, 121–2, 135–7, 143, 147, 154–5, 162, 164–5, 167–76, 183–6, 195, 197–200, 207, 211–12, 214, 216–18, 220–29, 234–37, 239–48
    agencies, 3, 15, 136–7, 164, 168, 174–76, 208, 217–20, 224, 228, 244–5
    external, 6, 66, 101, 111, 135, 198, 217–29, 236, 241, 244–46
    internal, 6, 100, 140, 147, 183, 198, 218, 220, 222, 240, 245
    risk-based, 5, 101, 107–8, 110–11, 113, 222
  enhancement, 1, 3–6, 9, 22–5, 27–8, 32–3, 35, 37–8, 41–6, 182, 199, 200, 213–14, 224, 226, 228, 240–3, 248
quasi-markets, 17
rankings, 4, 18–19, 21, 27, 53–6, 58, 61–2
  field-based, 19, 27, 62–3
  system, 19, 27, 55, 151, 154, 199, 238–9, 243
Iberoamerican Network for Quality Assurance in Higher Education (RIACES), 168, 176
risk, 6, 24–5, 46–7, 83, 88–9, 94, 95–7, 99–02, 129–30, 141, 145–6, 172–3, 240–1, 245–6
  (-based), 5, 25, 88–9, 92–95, 100–02, 107–08, 110–11, 113, 115, 141, 222, 236, 239–40
  analysis, 3, 32
  assessment, 25, 246
  factoring, 4, 33
  management, 1, 3–6, 24–5, 27–8, 41, 89, 94–8, 100–02, 106, 236, 239, 240–41, 243, 248
Russell Group, 93, 94
skills, 21, 26, 34, 43, 66–8, 145, 157, 164, 193, 197–9, 208
  discipline-related, 4, 21, 77
  evaluating, 4, 20, 67
  generic, 4, 21, 77–9, 81–3
  intellectual, 155
  outcomes, 20, 74
  problem-solving, 20, 74
  usable, 54
stakeholder(s), 21, 26, 54–5, 58, 74–5, 77, 82–3, 91, 98, 118, 128, 172, 218–20, 222, 227–8
  external, 16, 144, 163, 169
  internal, 169, 174–5
  involvement, 9, 144, 214, 219, 228
  multi, 59
  model, 220, 242–3
students, 1–3, 17, 25–6, 57, 75, 83–4, 89–91, 99, 100, 108–11, 113–15, 118, 122–23, 126–27, 140–41, 152–55, 157–58, 161–63, 198, 207–11, 213–14, 224–25, 238–43
  capabilities, 4, 20, 67
  clients, 15, 17
  learning experience, 27, 152, 242
  learning outcomes, 72, 77, 79
  needs, 8, 248
  prospective, 54, 58, 61, 63
  skills, 4, 68, 157, 198
  unions, 8, 9, 26, 109, 207, 209–10, 238–9
  views, 8, 213, 214, 243
Technical Advisory Group (TAG), 78, 80–4
transparency, 4, 19, 53–7, 68–9, 73, 90, 113–14, 145, 151, 193, 197, 209, 224, 228, 237–8, 246–7
  tools, 1, 18, 27, 55, 57, 211, 214, 224, 226, 238, 246
trust, 3, 13–15, 27–8, 90–1, 93, 98, 100–2, 135, 142–3, 163–4, 193, 208–9, 224, 233–6, 244, 246–7
  loss of, 3, 13, 22, 25, 27, 89, 181–2, 233
  in institutions, 14, 15, 22, 25, 89, 182, 199, 236, 240–1
  trust-based, 150, 237
Tuning, 21, 77
U-Map, 1, 3, 4, 19, 27, 53, 55–9, 61–4, 212, 238, 246
U-Multirank, 1, 3, 4, 19, 53, 55–58, 61–4, 212, 238, 246
US, 3, 21, 149, 151–5, 162, 235, 237, 241, 244
  accreditation, 6, 7, 149, 151, 154–5, 158
  accreditation agencies, 26
value-added, 21, 77, 80
West European Student Information Bureau (WESIB), 207
White paper, 5, 25, 88–90, 92–4, 96, 107, 110, 222, 240