
Quality Assurance in Higher Education

Issues in Higher Education

Titles include:
Jürgen Enders and Egbert de Weert (editors)
THE CHANGING FACE OF ACADEMIC LIFE
Analytical and Comparative Perspectives
John Harpur
INNOVATION, PROFIT AND THE COMMON GOOD IN HIGHER EDUCATION
The New Alchemy
Tamsin Hinton-Smith
WIDENING PARTICIPATION IN HIGHER EDUCATION
Casting the Net Wide?
V. Lynn Meek
HIGHER EDUCATION, RESEARCH, AND KNOWLEDGE IN THE ASIA-PACIFIC
REGION
Guy Neave
THE EUROPEAN RESEARCH UNIVERSITY
Guy Neave
THE EVALUATIVE STATE, INSTITUTIONAL AUTONOMY AND RE-ENGINEERING
HIGHER EDUCATION IN WESTERN EUROPE
The Prince and His Pleasure
Maria João Rosa and Alberto Amaral (editors)
QUALITY ASSURANCE IN HIGHER EDUCATION
Contemporary Debates
Mary Ann Danowitz Sagaria
WOMEN, UNIVERSITIES, AND CHANGE
Snejana Slantcheva
PRIVATE HIGHER EDUCATION IN POST-COMMUNIST EUROPE
Sverker Sörlin
KNOWLEDGE SOCIETY VS. KNOWLEDGE ECONOMY
Bjørn Stensaker, Jussi Välimaa, Cláudia Sarrico (editors)
MANAGING REFORM IN UNIVERSITIES
The Dynamics of Culture, Identity and Organisational Change
Voldemar Tomusk
THE OPEN WORLD AND CLOSED SOCIETIES



Series Standing Order ISBN 978–0–230–57816–6 (hardback)
(outside North America only)

You can receive future titles in this series as they are published by placing a
standing order. Please contact your bookseller or, in case of difficulty, write to us
at the address below with your name and address, the title of the series and the
ISBN quoted above.
Customer Services Department, Macmillan Distribution Ltd, Houndmills,
Basingstoke, Hampshire RG21 6XS, England
Quality Assurance
in Higher Education
Contemporary Debates

Edited by

Maria João Rosa


Assistant Professor, CIPES and University of Aveiro, Portugal

and

Alberto Amaral
Full Professor, A3ES, CIPES and University of Porto, Portugal
Selection, introduction and editorial matter © Maria João Rosa and
Alberto Amaral 2014
Individual chapters © Respective authors 2014
Softcover reprint of the hardcover 1st edition 2014 978-1-137-37462-2
All rights reserved. No reproduction, copy or transmission of this publication
may be made without written permission.
No portion of this publication may be reproduced, copied or transmitted
save with written permission or in accordance with the provisions of the
Copyright, Designs and Patents Act 1988, or under the terms of any licence
permitting limited copying issued by the Copyright Licensing Agency,
Saffron House, 6–10 Kirby Street, London EC1N 8TS.
Any person who does any unauthorized act in relation to this publication
may be liable to criminal prosecution and civil claims for damages.
The authors have asserted their rights to be identified as the authors of this
work in accordance with the Copyright, Designs and Patents Act 1988.
First published 2014 by
PALGRAVE MACMILLAN
Palgrave Macmillan in the UK is an imprint of Macmillan Publishers Limited,
registered in England, company number 785998, of Houndmills, Basingstoke,
Hampshire RG21 6XS.
Palgrave Macmillan in the US is a division of St Martin’s Press LLC,
175 Fifth Avenue, New York, NY 10010.
Palgrave Macmillan is the global academic imprint of the above companies
and has companies and representatives throughout the world.
Palgrave® and Macmillan® are registered trademarks in the United States,
the United Kingdom, Europe and other countries.
ISBN 978-1-349-47702-9 ISBN 978-1-137-37463-9 (eBook)
DOI 10.1057/9781137374639
This book is printed on paper suitable for recycling and made from fully
managed and sustained forest sources. Logging, pulping and manufacturing
processes are expected to conform to the environmental regulations of the
country of origin.
A catalogue record for this book is available from the British Library.
A catalog record for this book is available from the Library of Congress.

Typeset by MPS Limited, Chennai, India.


Contents

List of Figures and Tables vii
Notes on Contributors viii

1 Introduction 1
Maria João Rosa and Alberto Amaral
Part I The Frontier and Its Shifts
2 Where Are Quality Frontiers Moving to? 13
Alberto Amaral
3 Quality Enhancement: A New Step in a Risky Business?
A Few Adumbrations on Its Prospect for Higher Education
in Europe 32
Guy Neave
Part II New Challenges, New Instrumentalities
4 Transparency about Multidimensional Activities
and Performance: What Can U-Map and
U-Multirank Contribute? 53
Don F. Westerheijden
5 Assessment of Higher Education Learning Outcomes
(AHELO): An OECD Feasibility Study 66
Diana Dias and Alberto Amaral
6 Risk, Trust and Accountability 88
Colin Raban
7 Risk Management: Implementation 106
Anthony McClaran
8 Quality Enhancement: An Overview of Lessons
from the Scottish Experience 117
Murray Saunders
Part III Regional Setting
9 European Trends in Quality Assurance:
New Agendas beyond the Search for Convergence? 135
Bjørn Stensaker

10 Recent Trends in US Accreditation 149
Judith S. Eaton
11 Quality Assurance in Latin America 160
María José Lemaitre
Part IV Quality Assurance: The Actors’ Perspectives
on Recent Trends
12 The Academic Constituency 181
Maria João Rosa
13 Students’ Views on the Recent Developments in
Quality Assurance of Higher Education in Europe 207
Liliya Ivanova
14 Recent Trends in Quality Assurance? Observations
from the Agencies’ Perspective 216
Achim Hopbach
Part V Conclusion
15 The Swiftly Moving Frontiers of Quality Assurance 233
Alberto Amaral and Maria João Rosa

Index 251
List of Figures and Tables

Figures

4.1 U-Map ‘sunburst charts’ comparing two higher
education institutions 60
11.1 An operational definition for quality in
higher education 170
12.1 Academics’ overall perceptions of quality
assurance purposes 195
13.1 Overview of student participation in quality assurance
processes (ESU, 2012) 210
13.2 Support of the national students’ unions for
national and European transparency tools
(Bologna with Student Eyes, 2012) 212

Tables

4.1 U-Map’s dimensions and indicators 60
7.1 Provisional timetable for implementation
of risk-based quality assurance in England 112
11.1 Quality assurance mechanisms in Latin
American countries 166
11.2 Respondents and data collection mechanisms 171
12.1 Sample characterisation 188
12.2 Academics’ perceptions of different higher
education quality assessment goals 189
12.3 Academics’ perceptions of different higher
education quality assessment purposes 190
A.1 Statistically significant differences identified between
different groups of respondents regarding higher
education quality assessment goals 201
A.2 Statistically significant differences identified between
different groups of respondents regarding higher
education quality assessment purposes 202
Notes on Contributors

Alberto Amaral is a professor at the University of Porto and a researcher
at CIPES. He was the rector of Porto University from 1985 to 1998 and
is a former chair of CHER. He is a life-member of IAUP and member of
the Board of IMHE/OECD. At present he is the chair of the administra-
tion council of the Portuguese Assessment and Accreditation Agency for
Higher Education (A3ES).

Diana Dias is an associate professor at Universidade Europeia – Laureate
International Universities and a researcher at the Center for Research on
Higher Education Policies (CIPES). With an undergraduate degree in
Psychology, she holds a PhD in Educational Sciences. She is the author
of books on higher education and other scientific publications in
European and American journals on higher education and psychology.

Judith S. Eaton is president of the Council for Higher Education
Accreditation (CHEA). She has also served as chancellor of the
Minnesota State Colleges and Universities, as president of the Council
for Aid to Education, Community College of Philadelphia and
the Community College of Southern Nevada, and as vice president
of the American Council on Education. Dr Eaton has held full- and
part-time teaching positions at Columbia University, the University of
Michigan and Wayne State University.

Achim Hopbach obtained his PhD in History at the University of
Tübingen and afterwards held several positions in the field of higher
education politics and quality assurance in Germany before taking up
his current post as Managing Director of the Austrian Quality Assurance
Agency AQ Austria in 2012. He has been a member of the Hong Kong
Council for Accreditation of Academic and Vocational Qualifications
since 2005, and President of ENQA since 2009. He has published nume-
rous articles on the Bologna Process, quality assurance and qualifications
frameworks.

Liliya Ivanova completed a Master’s degree in International Economics
in 2013 at the University of National and World Economy in Sofia,
Bulgaria. She has been a member of the European Students’ Union
expert pool on quality assurance since 2010 and has worked as a quality
evaluator in Bulgarian higher education for the National Evaluation and
Accreditation Agency since the same year, as well as for the Institutional
Evaluation Programme of the European University Association since
2012. Liliya Ivanova was a member of the Executive Committee of the
European Students’ Union in 2012–2013.

María José Lemaitre is Executive Director of CINDA (a network of
universities in Latin America and Europe), and is the past president of
INQAAHE, a board member of the Iberoamerican Network for Quality
Assurance, RIACES, and currently serves on the Advisory Committee
for the CHEA International Quality Group. She was in charge of quality
assurance processes in Chile between 1990 and 2007, and has published
extensively and provided consultancy services in South and Central
America, the Caribbean, the Middle East, Africa, Eastern Europe and
Southeast Asia.

Anthony McClaran has been Chief Executive of QAA since October
2009 and was previously Chief Executive of UCAS. His career has
included senior academic administration and management posts at the
Universities of Warwick and Hull. Anthony has held numerous govern-
ance positions across the school and university sectors. He is a Freeman
of the Company of Educators, a Member and Trustee of the Honourable
Company of Gloucestershire, a Trustee of the Summerfield Trust and
Chair of All Saints’ Academy in Cheltenham.

Guy Neave is Director of Research at CIPES, Portugal, Professor Emeritus
of CHEPS, Twente University, the Netherlands, and Foreign Associate of
the US National Academy of Education. He has been Joint Editor-in-Chief
with the late Bob Clark of The Encyclopedia of Higher Education (1992)
and The Complete Encyclopedia of Education (CD ROM 1998), and with
Alberto Amaral Higher Education in Portugal 1974–2009 (2011). He was
Founder Editor of Higher Education Policy (1988–2006).

Colin Raban has held senior management positions in several UK
universities, with responsibility for academic development, quality
management and the enhancement of academic practice. He led a
national project on risk-based approaches to academic quality assurance
and he has served since 1997 as a reviewer for the UK Quality Assurance
Agency. He is an emeritus professor of the University of Derby and
he provides consultancy services to higher education institutions and
national agencies within the UK and overseas.

Maria João Rosa is an assistant professor at the Department of
Economics, Management and Industrial Engineering at the University
of Aveiro, Portugal. She is also a researcher at CIPES. Her main research
topics are quality management and quality assessment in higher educa-
tion institutions. She is a member of CHER and of the executive com-
mittee of EAIR.
Murray Saunders is Co-Director of HERE@lancaster, and Professor of
Evaluation in Education and Work. He has acted as a consultant to, and
undertaken a wide range of evaluation projects for, many national and
international agencies in a variety of international contexts. He is vice
president of the IOCE (International Organisation for Cooperation in
Evaluation) and associate editor of Evaluation. He is currently on the
executive committee of EvalPartners, which aims to enhance the role of
evaluation associations’ participation in civil society worldwide.

Bjørn Stensaker is Professor of Higher Education at the University of
Oslo. He is also a research professor at NIFU, the Nordic Institute for
Studies in Innovation, Research and Education. He has a special interest
in issues concerning quality assurance and has written extensively on
the topic in various international journals and books.

Don F. Westerheijden is a senior research associate at the Center for
Higher Education Policy Studies (CHEPS) of the University of Twente,
where he coordinates research on quality management and is involved
in the coordination of PhD students. Don publishes on quality assur-
ance in higher education in the Netherlands and Europe, its impacts,
and on transparency tools (U-Map, U-Multirank). He co-designed the
CRE/EUA Institutional Evaluation Programme, the first international
institutional quality review, and led the independent assessment of the
Bologna Process in 2009/2010.
1
Introduction
Maria João Rosa and Alberto Amaral

The main objective of this collection of studies is to open up a critical
debate on recent changes and trends in quality assessment systems that
will be useful both for those responsible for quality agencies and those
who periodically come under the scrutiny of quality agencies.
This book presents a critical analysis of contemporary developments
in the quality assurance of higher education, their advantages, their
benefits and their possible unintended consequences. Special emphasis
is given to new instrumentalities such as the U-Map and U-Multirank
transparency tools, an initiative supported by the European Commission
and the Ministers of Education; the AHELO (Assessment of Higher
Education Learning Outcomes) project led by the OECD (Organisation
for Economic Co-operation and Development) for the measurement of
learning outcomes; the risk management theoretical framework and
the current state of its implementation in England; and the quality
enhancement approach. By doing so we have highlighted the emerg-
ing and most recent trends in quality assurance, providing as well an
opportunity to compare trends in Europe with those in the US and
Latin America while weighing the views and accounts of different actors
and interests: academics, students and quality agencies.
Discussing and analysing contemporary debates on this topic poses
an obvious problem of timeliness by the time the book finally reaches
its readers. The field of quality assurance has indeed been quite active
since the emergence of the ‘Evaluative State’ in the late 1980s (Neave,
1988). From that moment on, the development of quality assurance in
Europe was fast, as Schwarz and Westerheijden (2004) have reported:
while at the beginning of the 1990s only about 50 per cent of European
countries had initiated quality assessment activities, by 2003 all
countries except Greece had entered some form of supra-institutional
assessment. Furthermore, the Bologna Declaration (1999) and the
emphasis it gave to the need for developing comparable criteria and
methodologies for assuring quality in higher education also contrib-
uted to new developments in the field, the more prominent prob-
ably being the establishment of the European Association for Quality
Assurance in Higher Education (ENQA); the adoption, at the 2005
Bergen Ministerial Conference, by the European Ministers Responsible
for Higher Education of the European Standards and Guidelines for Quality
Assurance in the European Higher Education Area (ESG) and the establish-
ment of the European Quality Assurance Register for Higher Education
(EQAR) endorsed by the Ministers at the 2007 London Conference.
In the aftermath of these developments new ones have come to the
fore. These are analysed and discussed in this book with the aim of build-
ing some common understanding about what the future of this area may
well be. The authors of the book examine the views of the main actors
in the quality nexus: national agency executives, policy analysts, and
academics and students currently and actively engaged in exploring and
laying down the future tasks to be taken up by quality agencies as well as
those in higher education institutions responsible for developing quality
procedures. As Neave rightly states in his chapter in this book, we intend
to step aside from ‘policy as action’ and instead provide an opportunity
for the reader to spend some time examining what the main construct
for operationalising and developing quality, efficiency and enterprise
(Neave, 1988) – namely, the Evaluative State – has achieved, weighing up
the moving frontiers of quality assurance in higher education.
Despite the fact that the recent developments in the area of quality
presented in this book seem to pull quality assurance in different
directions, some common elements have emerged from the different
perspectives outlined in individual chapters. In the final chapter we
review the main aspects of the contributions of the different authors in
order to collect and further develop those emergent common elements.
We call the reader’s attention to some of the most interesting ideas
expressed during the 2012 A3ES and Cipes Conference held in Porto,
while at the same time analysing some challenging questions that con-
stitute interesting issues for the future development of quality assurance
in higher education.

Brief summary of the chapters

This volume comprises four parts. The first two chapters (by Alberto
Amaral and Guy Neave) provide a broad panorama of recent developments
of quality assurance and of the emergence of the Evaluative State. The
second part of the book (chapters by Don Westerheijden, Alberto
Amaral and Diana Dias, Anthony McClaran, Colin Raban and Murray
Saunders) focuses on new challenges and instrumentalities being
developed – U-Map and U-Multirank, the AHELO project, risk man-
agement and quality enhancement. The third part (chapters by Bjorn
Stensaker, Judith Eaton and Maria-José Lemaitre) analyses recent
developments in three different regional settings (Europe, the US and
Latin America). The fourth part (chapters by Maria João Rosa, Liliya
Ivanova and Achim Hopbach) presents a view of quality assurance
based on the perspectives of three different constituencies (academics,
students and quality assurance agencies). A final chapter presents the
main findings and conclusions.
In Part I, Alberto Amaral’s chapter, puts forward the framework on
which the remaining chapters are based. The chapter describes the rise
of quality concerns in higher education and the emergence of quality
as a public issue. It then discusses the consequences of the loss of trust
between higher education institutions and the State and society, which
played an important role in determining the major characteristics of
current quality assessment systems, namely the movement towards
accreditation schemes. The increasing use of market-like mechanisms as
instruments of public policy is also referred as a trend giving legitimacy
to State intervention – under the guise of consumer protection – through
the introduction of an increasing number of compliance mechanisms,
including performance indicators and measures of academic qual-
ity, which are transforming quality assurance into a compliance tool.
Finally the chapter introduces developments – U-Map and U-Multirank,
AHELO, Risk Management and Quality Enhancement – that will be
dealt with in the second part of the book, establishing the grounds for
the debates currently taking place around quality assurance in higher
education.
Guy Neave’s chapter discusses more thoroughly what quality enhance-
ment and the advent of risk analysis may both bring as new and signifi-
cant additions to the instrumentality of the Evaluative State. Using the
Evaluative State as the basic analytical framework opens an alternative
interpretation to the usual technical and metrical perspective that tends
to predominate in the main literature of quality assurance and accredi-
tation. By setting out the development of the Evaluative State in Europe
in four broad chronological stages – origins, scope and purpose; refining
procedures, re-defining ownership; quality enhancement: an evolution-
ary dimension; and higher education as a risky business – the author
interprets both quality enhancement and risk factoring as successive
stages that place new interpretations and open the way to new insights
on the Evaluative State. However, national systems display considerable
variations in timing, rationale and in the purpose they assign to quality
assurance mechanisms (in this respect particular attention is paid to
developments in UK, France and Portugal, which present significant
variations). Finally the author briefly explores possible implications
that may arise from moving forward on quality enhancement and risk
management.
The second part of the book discusses the new challenges faced by
quality assurance by introducing the new instrumentalities that are
emerging to deal with them. Don Westerheijden’s chapter discusses the
new ‘instruments of transparency’, U-Map and U-Multirank, currently
under development in Europe as multiple tools for different users, pack-
aged within single ‘engines’. To understand their difference from current
rankings, the chapter investigates the basic concept of diversity of higher
education before reflecting on some basics of process analysis. These
conceptual considerations show that the activities and performance of
higher education institutions are multidimensional, and that current
rankings largely favour a single dimension. In effect, they do not take
account of the ensuing institutional horizontal diversity. Finally, U-Map
and U-Multirank are critically analysed and their contribution to a more
inclusive conceptualisation of quality of higher education is discussed.
The chapter by Alberto Amaral and Diana Dias describes the AHELO
project, implemented by the OECD, and discusses the results of its
recently disclosed feasibility study. Douglas Bennett (2001) argues that a
feasible strategy for measuring the quality of HE will consist of assess-
ing outcomes, by evaluating the skills and capabilities students have
acquired as they graduate (or shortly after) or the recognition they
gain in further competition. This is the road the OECD has apparently
been trying to follow and that is discussed by Amaral and Dias. For
the OECD, the ‘Assessment of Higher Education Learning Outcomes
(AHELO) is a ground-breaking initiative to assess learning outcomes
on an international scale by creating measures that would be valid
for all cultures and languages’ (OECD, 2009). OECD has launched a
‘feasibility study’ of AHELO that includes measuring learning outcomes
in terms of generic skills and discipline-related skills (in engineering and
economics). The final report of this exercise was recently published and
a public debate of its conclusions took place in March 2013. In Amaral
and Dias’s chapter the methodology of the feasibility study and its main
conclusions are presented and critically analysed.
The next two chapters, by Anthony McClaran and Colin Raban,
analyse emerging developments related to the use of risk management
in quality assurance processes. Risk management is a technique imported
from business. It identifies, assesses and assigns priorities to risks the
better to minimise, if not to eliminate, the impact of untoward, threat-
ening or negative factors in institutional development. The chapter
by McClaran describes and analyses the implementation in England
of a more risk-based approach to quality assurance, following the issue
of the 2011 White Paper on Higher Education, Students at the Heart of
the System. This government document was followed by another, the
technical consultation, A Fit-for-Purpose Regulatory Framework for the
Higher Education Sector, which examined what changes in procedures,
powers and duties were required to implement the reforms proposed
in the earlier White Paper, including risk-based quality assurance. The
government response indicated a risk-based approach as the most
desirable means of regulating higher education in England. Following
this, the steps to implementation moved forward. The chapter analyses
the implementation process and its timetable in detail, presenting the
aspiration that these changes should provide a clearer understanding of
the changing landscape of higher education provision in England and
the wider UK, safeguarding its reputation and quality for the future.
According to the new approach, the risk each institution faces must
be externally assessed, and the level of risk detected should determine
the frequency of reviews by the Quality Assurance Agency: ‘low risk’
institutions face full institutional reviews less frequently than either
new providers or institutions whose provision is deemed of lower quality.
How far it is feasible – and desirable – to combine a risk-based quality
assurance system with a genuine quality enhancement approach remains
debatable. This is an issue addressed from different perspec-
tives by many of the contributors to this book. Raban discusses the
theme thoroughly in his chapter, arguing that in discharging their
responsibilities for assuring the quality and standards of their provision,
universities should employ the ideas of risk and risk management in
ways that are very different from the proposals set out in the White
Paper and in recent publications from the HEFCE (Higher Education
Funding Council for England) and QAA (Quality Assurance Agency).
A risk management approach, possibly any form of internal quality
assurance, will not work unless there is a culture that accepts risk,
encourages staff to disclose risks and is open to the frank exchange of
information and ideas on the management of risk. It would be difficult
to sustain such a culture if staff were to believe that frank disclosure
would leave their institutions exposed in the face of external scrutiny.
In this respect, recent developments in external review methods do not
augur well.
The Scottish approach stands apart from its counterpart in England.
In this case the predominant view in higher education is to associ-
ate risk management with quality enhancement on the grounds that
enhancement is the outcome of change and innovation. Both change
and innovation, however, involve elements of risk. Accordingly, in
Scotland, the individual institution is expected to manage risk and
thus provide reasonable safeguards against it in the interest of students.
In his chapter Murray Saunders provides an overview of the Scottish
experience with quality enhancement, based on an analysis, carried out
using an evaluative research approach, of the policy intervention that
set it up. The Quality Enhancement Framework had its
inception in 2003 and has been coordinated by the Scottish Funding
Council with the participation of all Scottish universities. It aims to
improve university learning, teaching and assessment, establishing
an integrated approach that emphasises ‘enhancement’ rather than
‘assurance’. According to the author, the approach was understood as
a positive departure from assurance-based engagement between the
Scottish universities and the UK-based national framework. It is also
considered to be owned by the higher education community, or at least
by senior academic managers. Nevertheless the change process it entails
was complex, involving several areas of tension and implementation
challenges.
With the goal of offering the reader an opportunity to compare trends
in Europe with those in the US and Latin America, Part III of the book
provides a setting for the most recent developments taking place in
these three regions of the world. Starting with Europe, the main argu-
ment of Bjorn Stensaker’s chapter is that European external quality
assurance (EQA) is developing in a complex way, and that we may cur-
rently be at a crossroads concerning how this activity is to be organised
in the future. Due to developments taking place at the national level,
different scenarios can be developed as to the future of EQA. While
some of the scenarios may be interpreted as quite gloomy – at least
from an agency point of view – the situation should not be interpreted
in a deterministic way. There are many possibilities for developing
EQA beyond the trends and scenarios laid out in the chapter. However,
alternative routes require joint actions in which national authorities,
agencies and higher education institutions all have a role to play.
At the national level, authorities need to develop a more nuanced view
on the use and purpose of EQA. The European Standards and Guidelines
should not be seen as a hindrance for national policy-making,
although one can suspect that this is the case in some countries. While
standardisation indeed has brought European EQA forward in many
respects, there is currently a need for some innovation to take EQA to
the next level.
In the next chapter, Judith Eaton examines three dimensions of US
accreditation: what it has been, recent trends affecting its operation and
what its future is likely to be. She concludes by contrasting this likely
future with a more desirable scenario. She urges both higher education
and accreditation communities to work together in designing a future
of the desirable rather than the likely. The probable future for accredi-
tation, if current trends are not modified, is sobering. Accreditation
is less likely to be the dominant method of judging academic quality,
and rather just one among a number of voices judging quality – voices
devoid of academic expertise and experience. Increasingly, accreditation
will be controlled by government, with implications for the independ-
ence of not only accreditation, but also colleges and universities. Of
paramount significance, the core values accompanying accreditation
may themselves risk being reduced or transformed. This likely future
need not prevail, however. If the accreditation and higher education
communities work together, they may hold government influence and
calls for accountability in balance by sustaining institutional and fac-
ulty academic leadership for accountability as framed by the academy.
They can further engage innovation so that creative change in higher
education is accompanied by a commitment to a vision of intellectual
development of all students.
Finally María-José Lemaître introduces the case of Latin America.
Quality assurance schemes have been in place in this region for two
decades, and have developed around different models to respond to
the needs of national higher education systems. The overall view of
universities in those countries with longer experience and more consoli-
dated quality assurance processes is that they have been effective, and
have contributed significantly to the recognition and improvement of
increasingly complex and diversified higher education systems. Yet it is
clear that the growth and development of higher education, increases in
enrolment, and institutional differentiation pose challenges that higher
education must address and take into account in its revision of quality
assurance processes. The study briefly reported in this chapter points to
significant lessons, which can contribute to improved policy making at
the national level; to changes in higher education institutions, both in
terms of new managerial arrangements and in teaching and learning
practices; and, most of all, to the need for updated and revised standards,
procedures and practices of quality assurance agencies. Higher edu-
cation is a dynamic system – it is not served well by quality assurance
processes that are not prepared to learn (and to un-learn), to adapt and
adjust to the changing needs of students, institutions and society.
The last part of the book offers different actors’ perspectives on recent
trends, presenting the views of the academic and student constituen-
cies, as well as the views of the agencies. The chapter by Maria João
Rosa explores academics’ perspectives on higher education quality
assessment based on the answers given by a sample of Portuguese aca-
demics to an online questionnaire designed to investigate their degree
of support towards quality assessment goals and purposes. Overall the
analysis performed reveals that Portuguese academics tend to support
the majority of goals and purposes of quality assessment, although
more support is given to quality assessment mechanisms that privilege
improvement over control. Portuguese academics indeed seem to prefer
a formative type of quality assessment that promotes self-reflection and
knowledge, and the continuous improvement of teaching and learning.
Additionally the results show that academics’ perspectives are not homo-
geneous, and depend to a certain extent on their own characteristics,
such as gender, type of institution they belong to or scientific affiliation.
The results presented are especially relevant for those working in both
higher education institutions and governmental agencies, since they
may contribute to the design of quality assurance systems that academics
are more likely to support and are therefore more likely to be success-
ful; that is, more likely to contribute effectively to improvements in the
quality of higher education institutions and systems.
Liliya Ivanova’s chapter offers the views of students. The Bologna
Process holds students to be competent and constructive partners in
the joint shaping of the higher education experience. The European
Students’ Union is an active advocate of student participation in QA
processes, and provides expertise in QA. In recent years, QA mechanisms
have been constantly developing. Since 2005, student involvement has
improved at all levels, but there is still room for improvement. While
new approaches to QA have emerged, there is no guarantee or proof
they will lead directly to better quality of higher education. Therefore
the principles of fitness for purpose and genuine involvement of all
stakeholders should be applied and be developed in the framework of
the European Standards and Guidelines. National unions of students
consider that classifications of HEIs and the linking of QA outcomes
directly to funding decisions may become a double-edged sword in the
re-allocation of resources. Instead of increasing the efficiency of HE
funding, some critics argue that such reforms could jeopardise the quality
of some institutions more than others.
To finalise this part, the chapter by Achim Hopbach presents the
perspectives of quality assurance agencies on recent trends in quality
assurance. Quality assurance emerged in a context of massification,
financial constraints, deregulation and accountability. It was an attempt
to resolve quality problems and to serve as a steering mechanism for
HE systems, which led to the traditional purposes of quality enhance-
ment and accountability. However, in the European Higher Education
Area, variety and steady change are key features of quality assurance,
though the emergence of a European unified quality assurance system is
unlikely given the diversity of national agendas. Purpose and design of
quality assurance systems are both highly dependent on national con-
text, irrespective of the European Standards and Guidelines, which is in keeping with
the open coordination character of Bologna. Convergence is unlikely.
Rather, quality assurance is becoming ‘professionalised’ at the same
time as there is a de-coupling of discourses within the quality assurance
community from discourses in the political arena.
We hope that the multiple accounts and perspectives broached in this
volume will offer an opportunity to open a debate that is both enlight-
ening and clarifying in shaping the issues that the shifting frontiers
of quality pose on four fronts both for higher education and for the
nations engaged in its advancement.

References
Bennett, D. (2001) ‘Assessing Quality in Higher Education’, Liberal Education,
87(2), 1–4.
Neave, G. (1988) ‘On the Cultivation of Quality, Efficiency and Enterprise: An
Overview of Recent Trends in Higher Education in Western Europe 1986–1988’,
European Journal of Education, 23(2/3), 7–23.
OECD (2009) Assessment of Higher Education Learning Outcomes (Paris: OECD).
Schwarz, S. and Westerheijden, D. (2004) Accreditation and Evaluation in the
European Higher Education Area (Dordrecht: Kluwer Academic Press).
Part I
The Frontier and Its Shifts
2
Where Are Quality Frontiers Moving to?
Alberto Amaral

Introduction

Neave has argued that ‘quality is not “here to stay” if only for the self-
evident reason that across the centuries of the university’s existence in
Europe, it never departed’ (Neave, 1994, p. 16), and that evaluation is
‘an intrinsic part of policy making’ (Neave, 1998, p. 265). Indeed, quality
has been a permanent concern of universities from the early days of
their foundation.
In the Middle Ages it was already possible to distinguish three
major models of quality assurance. The old universities of Oxford and
Cambridge were self-governing communities of scholars that had the
right to remove unsuitable masters and to co-opt new members using
the equivalent of peer review mechanisms. The University of Paris,
where the chancellor of the cathedral of Notre Dame had the power to
rule on the content of studies, might be seen as the archetype of quality
assessment in terms of accountability. And the model of the University
of Bologna, ruled by students who hired the professors on an annual
basis, controlling their assiduity and the quality of their teaching, might
be seen as an extreme example of the principles presently in vogue of
customer satisfaction.
However, it was after the early 1980s that quality became a public
issue, giving rise to what Neave (1996) termed the emergence
of the Evaluative State. This development can be explained as a conse-
quence of a number of convergent factors such as massification – which
created much more heterogeneous higher education systems in terms
of institutions, students and professors – the increasing role of market
regulation, the emergence of new public management and a loss of trust
in higher education institutions and their professionals.

Initially an almost exclusive concern of the academics, quality
progressively became a matter of public concern in the 1980s and
1990s, with the two main objectives of quality assessment being quality
improvement and accountability. The balance between these two objec-
tives lies more on the side of improvement whenever academics have a
strong voice, and more on the side of accountability when the will of
the government predominates.
Quality systems, albeit in a number of different forms (quality
assurance, accreditation, licensing, and so on), are today an intrusive
reality of every national higher education system and will remain an
important regulation and steering tool for many governments. It is possible to
detect that trust in institutions has not been restored, as there is an
apparent movement from quality assessment as a tool for improvement
to accreditation as a tool for customer protection and accountability.
At the same time a number of new developments are visible, which
use different approaches to quality with diverse consequences both for
agencies and institutions. In this chapter we intend to analyse these
recent developments.

Trust

The level of trust between higher education institutions and state and
society plays an important role in determining the major characteristics
of quality assessment systems.
Neave (1994, 1996) proposed a law of anticipated results to explain
the behaviour of institutions that try to guess what will be required
by government policy and act in anticipation, making it difficult to
determine whether change is actually imposed from the top down. The
conduct of institutions frequently gives ‘the impression of autonomous
institutional action to what is in fact an institutional reaction to actual
or anticipated external forces, directives or events’ (Meek, 2002, p. 250).
However, the success of institutions depends strongly on the level of
trust they enjoy from the government.
In the Netherlands the strong trust between government and insti-
tutions allowed Dutch universities to claim for themselves the major
responsibility for quality, convincing the Ministry that they should con-
trol the quality assurance system through an independent agency, the
VSNU. Neave (1994, p. 127) presents the case of the Flemish universities
as ‘a remarkable example of the Law of Anticipated Results’. Flemish
universities anticipated the government’s movements in quality by
initiating a quality assessment system in collaboration with the Dutch
VSNU. This resulted in entrusting the VLIR (Vlaamse Interuniversitaire
Raad) with the responsibility for quality assessment. Portuguese univer-
sities followed the same road, and the Evaluation of Higher Education
Act (Law 38/94 of 21 November) entrusted the ownership of the quality
agency to ‘representative institutions’, similar to the Dutch VSNU
(Amaral and Rosa, 2004).
On the contrary, in the UK, where the government had largely with-
drawn its trust in institutions (Trow, 1996), the pre-emptive strike of
the British Committee of Vice-Chancellors and Principals in establish-
ing the Academic Audit Unit in 1990 failed, and did not forestall the
implementation of Higher Education Funding Councils granted with
‘primary status’, that is, with powers of financial allocation and regula-
tory enforcement (Neave, 1992).
In Europe at present many signs point to declining trust of govern-
ments and society in higher education systems, their institutions and
their professionals regarding their capacity to ensure adequate standards
of quality. Schwarz and Westerheijden (2004) analysed changes in quality
assurance systems and detected a clear movement towards accreditation
schemes, with all recently implemented quality systems being based on
accreditation rather than on quality assessment. In the Netherlands,
Flanders and Portugal, the national quality assurance agencies were dis-
missed under accusations of excessive dependence on higher education
institutions, being replaced with ‘independent’ accrediting agencies
(Amaral, 2007). The remit of the Danish agency EVA was reduced to
assessments of short and medium cycle programmes and a new Agency –
ACE Denmark – was established with the task of accreditation and
approval of all university programmes. In Finland there was also a shift
towards more detailed programme level accreditation. The specialised
literature also reveals a general decline in the level of trust in public
institutions and professionals. Long regarded as disinterested guardians
of knowledge and producers of new knowledge, academics are facing a
gradual proletarisation of their professional status (Halsey, 1992), and
the academy no longer enjoys the prestige on which higher education
can build a successful claim to political autonomy (Scott, 1989).
Under new public management, students become customers or
clients, and in most higher education systems quality assurance and
accountability measures have been put in place to ensure that academic
provision meets their needs and expectations. The transformation of
students into clients also transformed academics from disinterested
professionals into service providers. As such, academics are no longer
recognised for their almost monastic qualities, instead becoming venal
contractors whose activities should be inspected. When the academic
becomes a contractor, his inherent qualities of altruism and probity
are no longer taken for granted and his capacity for self-regulation is
questioned on the grounds that he has an interest in institutional decisions. This opens
the way for professional managers and a reinforced presence of external
stakeholders in governance bodies.

Markets as instruments of public policy

Governments are increasingly using market-like mechanisms as instru-
ments of public policy (Dill et al., 2004). For a market to work efficiently,
producers and consumers need to have ‘perfect information’. However,
in many cases, the relevant information is not available (imperfect
information) or the producer has much more detailed knowledge than
the consumer (asymmetric information). To make rational choices
consumers need good knowledge of the price and characteristics of
goods and services to be purchased and of the market conditions. To
increase consumer information, governments therefore use tools such
as licensing, accreditation, sets of performance indicators and the public
disclosure of the results of quality assessment (Smith, 2000).
In many countries governments have been experimenting with market-
type mechanisms to force higher education institutions to compete for
students, funds and research money. In Europe, the Bologna Declaration
states that ‘redefining the nature and content of academic programmes
is transforming what were once state monopolies over academic degrees
into competitive international markets’ (Dill et al., 2004, p. 330).
The emergence of the market in higher education gives legitimacy to
state intervention to avoid the negative effects of market competition
and to create conditions for their efficient operation, which includes
the need for consumer information. The information problem is
particularly acute in the case of higher education, which has three
simultaneous characteristics. Firstly, it is an experience good, mean-
ing that its relevant characteristics can only be effectively assessed
by consumption, as it is only after a student starts attending a study
programme that he or she gets a real idea of what has been purchased
in terms of quality, professors and the general value of the educational
experience. Secondly, it is a rare purchase, as in most cases a student
enrols in a single study programme throughout his or her professional
life and cannot derive market experience from frequent purchases.
Finally, opting-out costs are high, as it is in general rather expensive
to change to a different study programme or institution (Dill and Soo,
2004). The simultaneous presence of these three characteristics makes
a strong case for government intervention to protect consumers by
providing information (Smith, 2000), which justifies the increasing role
of quality assessment in market regulation.
Dill argues that from the strict point of view of ‘rational economic
choice’, ‘students lack sufficient information about the quality of aca-
demic institutions or programs to make discriminating choices’ (1997,
p. 180) as what they need is a measure of the prospective future earn-
ings provided by alternative academic programmes and not ‘(…) peer
review evaluation of teaching processes, nor subjective judgements of
the quality of a curriculum’ (ibid.).
However, even if this kind of data were available, many students
(or their families) would not use it, which questions the validity of the
hypothesis of rational economic choice (Tavares et al., 2006). Although
students are free to choose any study programme, choices are made – as
Bourdieu (1989) argued – using criteria learnt and inherited at the social
level. Students usually associate choice with accessibility (Gottfredson,
1981), which relates to obstacles or opportunities in a social or economic
context that affect their chances of gaining a particular job. That is
how Bourdieu and Passeron (1982) claimed that the educational system
reproduces the social structure.
David Dill refers to the problem of immature consumers, which
provides the ground for ‘the implementation of quasi-markets, rather
than consumer-oriented markets, for the distribution of academic
programs’ (Dill, 1997, p. 181). The state or a state agency, acting on
behalf of the final consumers, can get a better bargain from providers
as it has a much stronger purchasing power than any individual client,
a logic that is reinforced when (immature) clients do not make rational
choices. The state is no longer a provider of higher education but
assumes a role as principal, representing the interests of consumers
by making contracts with competing institutions, creating a quasi-
market in which independent providers compete against each other in
an internal market (Le Grand and Bartlett, 1993). When quasi-markets are
implemented, government agencies engaged in approving contracts in
the name of consumers face the classical principal–agent dilemma: ‘How
the principal [government] can best motivate the agent [institutions]
to perform as the principal would prefer, taking into account the
difficulties [the principal faces] in monitoring the agent’s activities’
(Sappington, 1991, p. 45, cited in Dill and Soo, 2004, p. 68).
Delegation problems can be analysed using the principal–agent
theory (Kassim and Menon, 2002). Delegation problems become more
acute when agents have considerable autonomy, as is the case with
universities. Autonomous institutions competing in a market may
decide either to uphold the primacy of public good or to promote their
own ‘private good’, in the latter case not performing as the principal
would prefer.
This may lead to a contradiction in neo-liberal policies. On
the one hand institutions should be allowed to operate freely under the
rules of market competition. On the other hand, governments ensure
institutions behave as governments want them to, by introducing
an increasing number of compliance mechanisms, including perfor-
mance indicators and measures of academic quality, under the guise of
quality assessment or accreditation, transforming quality assurance into
a compliance tool.

Recent developments

I will now focus on a number of recent developments that will also be
taken up in greater detail in other chapters of this volume.

The European Commission, ministers and rankings


The Bologna process has been a very important tool for change. The
Ministers of Education who met in Bergen in 2005 gave their blessing
to the European Standards and Guidelines for Quality Assurance (ESG),
drafted by the ENQA (European Association for Quality Assurance in
Higher Education) (2005), in cooperation and consultation with its
member agencies and the other members of the ‘E4 Group’ (its mem-
bers are ENQA representing European quality assurance agencies, EUA
representing universities, EURASHE representing polytechnics and ESU
representing European student associations). In 2007, the Ministers of
Education met in London to establish the European Quality Assurance
Register for Higher Education (EQAR) based on a proposal drafted by the
E4 (ENQA, 2007). More recently (28 and 29 April 2009) the Ministers of
Education held another conference in Belgium. The final communiqué
from this meeting states:

there are several current initiatives designed to develop mechanisms
for providing more detailed information about higher education insti-
tutions across the EHEA to make their diversity more transparent. …
These transparency tools … should be based on comparable data and
adequate indicators to describe the diverse profiles of higher education
institutions and their programmes. (Leuven communiqué, 2009)
At Leuven, student representatives saw danger in the communiqué as
opening the way to a ranking system and proposed the inclusion of a
phrase that would make rankings unacceptable. However, they failed in
this, being abandoned by the representatives of higher education organisa-
tions such as EUA (European University Association), EURASHE (European
Association of Institutions in Higher Education), the Coimbra group and
other partners such as ENQA. The Commission not only requested a
report on the possibility of establishing a classification of European
universities (van Vught, 2009), but also funded two projects to analyse
the implementation of a multidimensional ranking system (U-Map and
U-Multirank projects). European Ministers and the Community are appar-
ently determined to implement a fast and lean system to classify or rank
universities, having realised that using quality systems will not produce a
quick and clear answer. CHEPS provides additional explanation:

a logical next step for Europe with respect to transparency measures
is the development of a classification of higher education institu-
tions. … In this phase we will evaluate and fine-tune the dimensions
and their indicators and bring them into line with other relevant
indicator initiatives; finalise a working on-line classification tool;
articulate this with the classification tool operated by the Carnegie
Foundation; develop a final organizational model for the implemen-
tation of the classification. (CHEPS, 2011)

The design of the ranking system intends to follow the ‘Berlin Principles
on the ranking of higher education institutions’, which stress the need
to take into account ‘the linguistic, cultural, economic and historical
contexts of the educational systems being ranked’. The approach is
to compare only institutions that are similar in their missions and
structures. The project is linked to the idea of a European classifica-
tion (‘mapping’) of higher education institutions. The feasibility study
includes focused rankings on particular aspects of higher education at
institutional level (e.g., internationalisation and regional engagement),
and two field-based rankings for business and engineering programmes.
As Kaiser and Jongbloed explain:

the classification is an instrument for mapping the European higher
education landscape. … In contrast to the U-Map classification pro-
ject, U-Multirank is a ranking project. … U-Multirank pays attention
mostly to output (performance) and impact (outcomes). (Kaiser and
Jongbloed, 2010, p. 2)
The convergence of the Bologna process and the Lisbon strategy is
giving the European Commission increasing influence over European
higher education (Amaral and Neave, 2009a), despite the fact that
there is a weak legal basis for Community intervention since education
has always been considered an area of national sensitivity (Gornitzka,
2009). The activities and policies of the European Commission are
apparently aimed at building a stratified EHEA against the traditional
view still prevailing in many European countries that national uni-
versities are all equal, which recalls the Legal Homogeneity Principle.
A more complete description and analysis of U-Map and U-Multirank is
presented in Chapter 4.

The student experience and the evaluation of learning outcomes


Douglas Bennett (2001) considers that the only valid approach to
assessing the quality of education is based on the value added, meaning
what is added to students’ capabilities or knowledge as a consequence
of their education at a particular college or university, or more simply,
the difference a higher education institution makes in their education.
However, as Bennett recognises, assessing value added is difficult for a
number of reasons such as its many dimensions, differences between
institutions and time for consequences of education to unfold fully,
and complexity and cost. Alternatively, a second-best and more feasible
strategy is to assess outcomes by evaluating the skills and capabilities
students have acquired as they graduate (or shortly after) or the recogni-
tion they gain in further competition.
OECD (2008) produced a report providing an international perspective
on current practices in the assessment of standardised learning outcomes
in higher education, drawing on examples from a number of countries.
The outcomes assessed include both cognitive and non-cognitive
ones. The report tried to answer four questions: What is being assessed?
How are these outcomes being assessed? Who is each instrument going
to assess? Why is the assessment being applied?
Cognitive learning outcomes ‘range from domain-specific knowledge
to the most general of reasoning and problem-solving skills’ (Shavelson
and Huang, 2003, p. 13). The OECD considers a division of cognitive learn-
ing outcomes into knowledge outcomes involving the ‘remembering,
either by recognition or recall, of ideas, materials or phenomena’
(Bloom and Krathwohl, 1956, p. 62) and skills outcomes, both divided
into generic and domain-specific.
A non-cognitive learning outcome refers to changes in beliefs or the
development of certain values (Ewell, 2005). Studies on non-cognitive
outcomes often focus on the presence of certain theorised stages of
identity development (Pascarella and Terenzini, 2005) and may be
developed through both classroom instruction and out-of-class activi-
ties organised by HEIs to supplement the curriculum. However, the
definition of desirable non-cognitive outcomes is controversial: it depends
on cultural context and is not always shared by all stakeholders.
Some studies suggest that non-cognitive outcomes are related to social
maturation, generational effects (Pascarella and Terenzini, 2005) or
‘significant life events’ (Glenn in Pascarella and Terenzini, 2005, p. 272).
Following discussions at the 2006 OECD Ministerial Conference in
Athens, OECD launched a new programme, ‘Assessment of Higher
Education Learning Outcomes’ (AHELO). In its presentation leaflet
OECD proposes to develop ‘an assessment that compares learning out-
comes in a universally sound manner, regardless of culture, language,
differing educational systems and university missions’ while consider-
ing that ‘current university rankings may do more harm than good
because they largely ignore a key measure of quality, namely what goes
on in the seminar rooms and lecture theatres’.
For OECD, AHELO is a ground-breaking initiative to assess learning
outcomes on an international scale by creating measures valid for all
cultures and languages (OECD, 2009a). OECD initially proposed that a
large number of higher education students in more than ten different
countries take part in a feasibility study to determine the bounds of this
ambitious project, aimed at the possible creation of a full-scale AHELO
upon its completion. The initial plan was that the ‘feasibility study’ would
consist of four ‘strands’: three assessments to measure learning outcomes
in terms of generic skills and discipline-related skills (in engineering and
economics) and a fourth value-added strand, which was research based.
The measurement of generic skills (e.g. analytical reasoning, critical
thinking, problem-solving, the practical application of theory, ease in
written communication, leadership ability, the ability to work in a group
and so on) would be based on an adaptation of the Collegiate Learning
Assessment (CLA) developed in the US. For the discipline-based strands
the study would concentrate on the approach used in the Tuning Process
for Engineering and Economics. The fourth value-added strand would
not be measured, as it would not be compatible with the time frame of
the study. Therefore, ‘the feasibility study would only explore different
methodologies, concepts and tools to identify promising ways of measur-
ing the value-added component of education’ (OECD, 2009a, p. 10).
OECD considers the importance of context, while recognising the
difficulty of context measurement. In the proposed model, student
learning outcomes ‘are a joint product of input conditions and the


environment within which learning takes place’ (OECD, 2009b, p. 4).
Inputs may include student characteristics related to learning, such
as gender and socio-economic status (Pascarella and Terenzini, 1991,
2005), and environmental characteristics, such as the setting in which
learning takes place, curricula and pedagogies, and student learning
behaviours (Pascarella and Terenzini, 1991, 2005; Kuh, 2008).
However, AHELO has faced funding difficulties due to the present eco-
nomic crisis, and the feasibility stage has thus assumed a more modest
scope than initially proposed. Apparently there were also criticisms of
the timeframe, which was considered to be too short. The results of the
feasibility study were recently made available and a public discussion
was held in March 2013 at the OECD. So far no decision has been taken on
whether AHELO will move into a full-scale phase, as it may prove too
complex and expensive to survive its feasibility stage. Further detailed
discussions of AHELO are presented in Chapter 4.
Learning outcomes are also present in the European Standards and
Guidelines (ESG). These state that the quality assurance of programmes and
awards within higher education institutions is expected to include
‘development and publication of explicit
intended learning outcomes’. Student assessment procedures should
‘be designed to measure the achievement of the intended learning
outcomes and other programme objectives’ (ENQA, 2005, p. 17).

Regaining trust and the quality enhancement approach


Massification, the emergence of markets as instruments of public regula-
tion and the influence of new public management have resulted in loss
of trust in institutions and academics, reinforcing accountability over
improvement in quality processes. Within this context a new approach
seems to be emerging: the quality enhancement approach. Quality
enhancement may be seen as an attempt by universities to regain trust
by restating that quality is their major responsibility, the role of out-
side agencies being limited to quality audits. A report from the Higher
Education Academy (2008) considers that the increasing relevance of
quality enhancement is promoted ‘to an extent, by contextual changes
in for example, the concept of “student,” the relationship of the student
to the HE provision and the perception of the role of the HE sector in
society’ (Higher Education Academy, 2008, p. 6).
However, quality enhancement (QE) remains a poorly defined
concept. The HEA report, although presenting QAA’s definition as ‘the
process of taking deliberate steps at institutional level to improve the
quality of learning opportunities’, recognises that institutions are still
looking for their own definition, as emerged from several institutional
replies to the questionnaire used to collect information for the report.
Even without a widely accepted definition of QE, there are a number
of common patterns to institutional approaches. From the HEA report
and a paper by Filippakou and Tapper (2008), some characteristics of QE
emerge from institutional replies. It is accepted that QE will repatriate
responsibility for the quality of the learning process to the institution,
and that external vigilance will rely on institutional audits rather than
on more intrusive forms of quality assessment, such as programme-level
accreditation. Institutions agree with the idea that they have the main
responsibility for the quality of education, and quality enhancement can
only be successfully implemented
‘in the context of a flexible, negotiated evaluative model’ (Filippakou
and Tapper, 2008, p. 92) and should be ‘by definition non-mandatory,
and should be shaped by the actual participants in the teaching and
learning process’ (ibid., p. 94).
Filippakou and Tapper (2008) question whether QE is effectively a
new discourse leading to a different interpretation of the higher educa-
tion quality agenda or if it is merely ‘part of the developing discourse of
quality assurance with its potential for change threatened’ by the way
it may be implemented (ibid., p. 91). For Filippakou and Tapper ‘assur-
ance and enhancement are concepts with distinctive meanings, with
enhancement promising more than assurance, and although apparently
giving greater space to academics, also making more demands of them’
(ibid., p. 92).
Sursock argues, ‘The quality assurance debate … is really about power.
It is a question of how quality is defined and by whom’, which ‘can
induce distortions that are not necessarily in the best interests of stu-
dents, graduates, employers or society at large’ (Sursock, 2002, p. 2).
And Neave states that ‘evaluation systems are not independent of what
a government’s intentions are, nor from what its policy is’ (Neave, 2004,
p. 224). Filippakou and Tapper (2008) argue that QAA is developing a
strategy to reassert its own authority using its own definition of what
quality enhancement means and how it is to be promoted. They ques-
tion ‘who has the power to determine the meaning of key concepts,
how they are put into effect … what the policy outcomes should be’
(Filippakou and Tapper, 2008, p. 93). For them, a reason for concern
lies in the idea that quality enhancement should be promoted using
the model of ‘good practice’, which is considered ‘another function of
the new public management model of governance’ (ibid., p. 94). And
institutions show concern that external intervention, namely under the
guise of QAA-led audits, may damage or destroy quality enhancement
and innovation: ‘External scrutiny could hinder QE, especially when
QE is so rigidly defined’ (Higher Education Academy, 2008, p. 29).
A detailed discussion of the Quality Enhancement approach is presented
in Chapter 7.

Risk management
Risk management is a process imported from business. It aims to iden-
tify, assess and prioritise risks in order to create plans to minimise or
even to eliminate the impact of negative events. Risk management is
widely used by actuarial societies and more recently by government and
the public sector too.
The Quality Enhancement Framework (QEF) was introduced in
Scotland in 2003. This emphasises ‘the simple and powerful idea that
the purpose of quality systems in higher education is to improve stu-
dent experiences and, consequently, their learning’ (QAA Scotland,
2008, p. 1). It is interesting to note that the QEF introduces the notion
of risk: enhancement is the result of change and innovation that will
frequently involve risk. Institutions are expected to manage this risk
in a way that provides reasonable safeguards for current students. The
review process will continue to recognise and support effective risk
management and adopt a supportive and not punitive role in this con-
text (QAA Scotland, 2008, p. 4).
The 2005 Quality Risk Management Report (Raban et al., 2005, p. 5)
states that as early as 1998 there was a reference to academic risk and
its management:

Delivery of higher education programmes is becoming increasingly
diverse and complex, not least through the rapid growth of col-
laborative arrangements … Complexity adds risk, and risk must be
managed. (QAA, 1998)

In 2000 the Higher Education Funding Council for England (HEFCE)
required higher education institutions to demonstrate ‘compliance with
the principles of corporate governance, including the application of the
effective risk management techniques, by July 31st 2003’ (Raban et al.,
2005, p. 4). The HEFCE had already proposed that institutions able to dem-
onstrate they were following best practices would face a lighter touch audit
(Higher Education: Easing the Burden, July 2002, § 6.1) and in 2001 HEFCE
published a good practice guide on risk management (HEFCE, 2001).

More recently the White Paper on Higher Education made public in
the UK (BIS, 2011) introduces the concept of risk management with a
different emphasis:

a genuinely risk-based approach, focusing QAA effort where it will
have most impact and giving students power to hold universities to
account … in which the frequency – and perhaps need – for a full,
scheduled institutional review will depend on an objective assess-
ment of a basket of data, monitored continually but at arm’s length.
(BIS, 2011, p. 37)

Although the White Paper states that ‘all higher education providers
must continue to be part of a single assurance framework’ (BIS, 2011,
p. 37), it proposes that the risk of each institution must be assessed and
that the level of risk will determine the frequency of QAA’s reviews.
Institutions with low risk – with a demonstrable record of high-quality
provision – will be subject to less frequent full institutional reviews than
new providers or institutions offering lower quality of provision. At the
same time, the document proposes a set of ad hoc triggers that will
prompt QAA to conduct an immediate partial or full review whenever
there are concerns about compliance with quality standards.
The White Paper (BIS, 2011) raises serious concerns. On the one hand,
trust in institutions is in danger of being sacrificed to the aim of appeasing
students, who were recently asked to pay a larger contribution to the
costs of education. On the other hand, the risk-based approach raises
concerns that the new system will no longer address
quality enhancement for the whole system. Instead of quality enhance-
ment, robust quality assurance procedures will be focused on detecting
and eliminating those cases where quality standards are at risk. That
is why both ‘trust – building on staff’s professional values and their
aspirations – and dialogic accountability are themselves preconditions
for enhancement, risk assessment and the effective local management
of risk’ (Raban et al., 2005, p. 50). See Chapters 5 and 6 for a more
detailed discussion of risk management.

Other themes for debate

Despite what has been mentioned, the shifting frontier of quality assess-
ment is not exclusive to the UK. Similar dynamics are no less evident
in Europe, Latin America and the United States. They open up a wider
perspective and also offer the opportunity for cross-regional comparison
and for taking stock of the views and opinions of different stakeholders
(agencies, academics and students) about changes taking place in the
quality domain. Developments in the United States deserve particularly
careful scrutiny, not least because of that country’s long history of quality
processes dating from the nineteenth century.
Survey answers from academics regarding their perceptions of the effects
of internal quality management show that they support the idea that
quality systems should promote quality improvement and innovation in
higher education (Rosa, Sarrico and Amaral, 2011). And the promotion
of innovation and flexibility and reliance on internal quality systems is
compatible with the QE approach. However, changes in the governance of
higher education institutions under the influence of NPM have strongly
decreased or even eliminated collegiality and made academics more like
employees and less like professionals. This enforced weakening of academics’
dedication to governance may very well have a negative effect on the
collegial time dedicated to assuring and improving academic standards.
Students also play an important role in the development of European
higher education, namely through the activities of the European
Students’ Union. The courage of students to criticise openly the Leuven
Communiqué while the representatives of higher education institutions
kept silent could be seen as an example of the capacity of the younger
generations to shape and improve European policies.
The fast development of information technology may also be a fac-
tor in quality assurance processes. One example is the emergence of
MOOCs (Massive Open Online Courses), which are a form of ‘direct-to-
students’ education, ‘removing faculty from the heart of the student
experience’ (Eaton, 2012) and relying on the students’ initiative to get
what they can from their learning experience. However, so far MOOCs
only offer students ‘badges’ certifying their mastery of skills in some areas
and there are only very limited cases of awarding credits for MOOCs
(one example is Colorado State University-Global Campus). At present,
none of the US accreditation agencies accredit elements of courses, and
they still consider that faculty have a very important role in students’
educational experiences. CHEA has recently opened a discussion on the
possible accreditation of MOOCs as a tool for judging their quality.

Conclusions

Harvey and Newton (2006) argue that traditional quality assurance
systems do not address the core elements of academic endeavour,
knowledge creation and student learning. Members of the higher
education community consider that quality assurance has nothing to
do with quality enhancement and may even damage quality. At present,
there are several developments taking place in different contexts. This
book sets out to analyse these different developments: multidimen-
sional rankings promoted at the European level, the OECD AHELO
project, the quality enhancement approach and the risk management
approach. Unfortunately, some of these initiatives seem to take us even
further away from the core elements of academic endeavour. What the
future holds is anyone’s guess. Options for the future of quality
systems cannot be separated from considerations about the type of higher
education system the appropriate authorities wish to foster. Recent
developments reveal a trend for replacing quality assessment agencies
owned by universities or by organisations representing universities with
independent accreditation agencies (the Netherlands, Flanders and
Portugal), while agencies based on quality audit have been replaced by
agencies based on accreditation (e.g. Denmark and Norway).
At the European level, Brussels’ objective apparently puts more
emphasis on competition and the creation of a stratified European
Higher Education Area than on cooperation and quality improve-
ment. There is increasing emphasis on market mechanisms, new
public management and competition, accompanied by a loss of trust
in institutions. This reinforces the possibility that a highly stratified
European Higher Education Area will emerge, following developments
of the Bologna process and supported by the Commission (which does
not trust academics and their market aloofness) and with the help of
Ministers (who see the virtues of cost saving and easily digestible infor-
mation). This will produce a ranking system of European universities,
albeit under the more palatable guise of U-Map classifications, multidi-
mensional global university rankings or focused institutional rankings,
field-based rankings or even the official nickname of multidimensional
transparency tools.
An interesting alternative might emerge from OECD’s decision to move
forward with the implementation of a system for measuring learning
outcomes that is much closer to notions of the quality of the students’
learning experience than to ranking or classification systems. However,
the recent results of the AHELO feasibility study do not guarantee that
a full-scale AHELO project will be implemented in the near future, or
if it will be implemented at all. Such a system still faces unresolved
methodological problems and may prove to be both too complex
and too expensive. The AHELO project has an irresistible strategic
value for OECD as it is a very important instrument for reinforcing
the influence of the Château de la Muette over higher education.
Indeed much of OECD’s influence is based on opinion forming, which
is a clear expression of ‘the capacity of an international organization to
initiate and influence national discourses’ (Martens et al., 2004, p. 2).
And evidence suggests that the ability of OECD to shape and influence
opinion on education is at least partly based on the regular publication
of cross-national and comparative educational statistics and indicators,
one of the most important being the Programme for International Student
Assessment (PISA) (Amaral and Neave, 2009b).
A third development is to be seen in the quality enhancement
approach, which is more palatable for academics. This corresponds to the
restoration of public trust in higher education institutions, a most chal-
lenging objective for university leaders. Quality enhancement will offer
academics an alternative that is compatible with academic norms and
values, creating a bridge with quality, provided that intrusive external
interventions under the guise of rigid audit systems are not implemented.
Finally, risk management is being introduced in some quality systems.
It allows for a more flexible, effective and less expensive approach,
although it remains questionable in focusing on detecting and elimi-
nating those cases where quality standards are at risk while ignoring
quality enhancement across the whole system.
We hope that the discussions and detailed analyses provided in the
next chapters will help to form a clearer picture of recent developments
in quality assurance.

References
Amaral, A. and Rosa, M.J. (2004) ‘Portugal: Professional and Academic
Accreditation – The Impossible Marriage?’, in S. Schwarz and D. Westerheijden
(eds), Accreditation and Evaluation in the European Higher Education Area
(Dordrecht: Kluwer Academic Press), pp. 127–57.
Amaral, A. (2007) ‘From Quality Assurance to Accreditation – A Satirical View’,
in J. Enders and F. van Vught (eds), Towards a Cartography of Higher Education
Policy Change (UNITISK, Czech Republic), pp. 79–86.
Amaral, A. and Neave, G. (2009a) ‘On Bologna, Weasels and Creeping
Competence’, in A. Amaral, G. Neave, C. Musselin and P. Maassen (eds),
European Integration and the Governance of Higher Education and Research
(Dordrecht: Springer), pp. 271–89.
Amaral, A. and Neave, G. (2009b) ‘The OECD and Its Influence in Higher
Education: A critical revision’, in A. Maldonado and R. Bassett (eds),
International Organizations and Higher Education Policy: Thinking Globally, Acting
Locally? (London and New York: Routledge), pp. 82–98.
Bennett, D. (2001) ‘Assessing Quality in Higher Education’, Liberal Education,
87(2), 1–4.
BIS – Department for Business Innovation & Skills (2011) Higher Education.
Students at the Heart of the System (London: The Stationery Office Limited).
Bloom, B. and Krathwohl, D. (1956) Taxonomy of Educational Objectives: The
Classification of Educational Goals, by a Committee of College and University
Examiners. Handbook I: Cognitive Domain (New York: Longmans, Green).
Bourdieu, P. (1989) La noblesse d’État – Grandes écoles et esprit de corps (Paris: Les
Éditions de Minuit).
Bourdieu, P. and Passeron, J.C. (1982) A Reprodução (Rio de Janeiro: Francisco
Alves).
CHEPS (2011), http://www.utwente.nl/mb/cheps/research/projects/ceihe/
(accessed 12 January 2012).
Dill, D. (1997) ‘Higher Education Markets and Public Policy’, Higher Education
Policy, 10(3/4), 167–85.
Dill, D. and Soo, M. (2004) ‘Transparency and Quality in Higher Education
Markets’, in P. Teixeira, B. Jongbloed, D. Dill and A. Amaral (eds), Markets in
Higher Education: Rhetoric or Reality? (Dordrecht: Kluwer Academic Publishers),
pp. 61–85.
Dill, D., Teixeira, P., Jongbloed, B. and Amaral, A. (2004) ‘Conclusion’, in
P. Teixeira, B. Jongbloed, D. Dill and A. Amaral (eds), Markets in Higher Education:
Rhetoric or Reality? (Dordrecht: Kluwer Academic Publishers), pp. 327–52.
Eaton, J. (2012) ‘MOOCs and Accreditation: Focus on the Quality of “Direct-
to-Students” Education’, Inside Accreditation with the President of CHEA, 9(1),
November 7.
ENQA (2005) Standards and Guidelines for Quality Assurance in the European Higher
Education Area (Helsinki: ENQA).
ENQA (2007) Report to the London Conference of Ministers on a European Register of
Quality Assurance Agencies (Helsinki: ENQA Occasional Paper 13).
Ewell, P.T. (2005) ‘Applying Learning Outcomes Concepts to Higher Education:
An Overview, prepared for the University Grants Committee’, http://www.khu.
hk/caut/seminar/download/OBA_1st_report.pdf (accessed 22 September 2013).
Filippakou, O. and Tapper, T. (2008) ‘Quality Assurance and Quality Enhancement
in Higher Education: Contested Territories?’, Higher Education Quarterly, 62(1/2),
84–100.
Gornitzka, Å. (2009) ‘Networking Administration in Areas of National Sensitivity:
The Commission and European Higher Education’, in A. Amaral, G. Neave,
C. Musselin and P. Maassen (eds), European Integration and the Governance of
Higher Education and Research (Dordrecht: Springer), pp. 109–131.
Gottfredson, L. (1981) ‘Circumscription and Compromise: A Developmental Theory
of Occupational Aspirations’, Journal of Counseling Psychology, 28(6), 545–79.
Halsey, A.H. (1992) Decline of Donnish Dominion: The British Academic Professions
in the Twentieth Century (Oxford: Clarendon Press).
Harvey, L. and Newton, J. (2006) ‘Transforming Quality Evaluation: Moving
On’, in D. Westerheijden, B. Stensaker and M.J. Rosa (eds), Quality Assurance
in Higher Education: Trends in Regulation, Translation and Transformation
(Dordrecht: Springer), pp. 225–45.
Higher Education Academy (2008) Quality Enhancement and Assurance: A Changing
Picture? (York: Higher Education Academy).
HEFCE (2001) Risk Management: A Guide to Good Practice for Higher Education
Institutions (London: HEFCE).
Kaiser, F. and Jongbloed, B. (2010) ‘New transparency instruments for European
higher education: The U-Map and the U-Multirank projects’, paper presented
to the 2010 ENID Conference, 8–11 September 2010.
Kassim, H. and Menon, A. (2002) ‘The Principal-Agent Approach and the Study
of the European Union: A Provisional Assessment’, Working Paper Series.
Birmingham: European Research Institute, University of Birmingham.
Kuh, G.D. (2008) High-Impact Educational Practices: What They Are, Who Has
Access to Them, and Why They Matter (Washington, DC: Association of
American Colleges and Universities).
Le Grand, J. and Bartlett, W. (1993) Quasi Markets and Social Policy (London:
Macmillan Press).
Leuven Communiqué (2009), http://www.ond.vlaanderen.be/hogeronderwijs/
bologna/conference/documents/leuven_louvain-la-neuve_communiqué_
april_2009.pdf (accessed 22 September 2013).
Martens, K., Balzer, C., Sackmann, R. and Weymann, A. (2004) Comparing
Governance of International Organisations – The EU, the OECD and
Educational Policy, TransState Working Papers No.7, Sfb597 ‘Staatlichkeit im
Wandel (Transformations of the State)’, Bremen.
Meek, L. (2002) ‘On the Road to Mediocrity? Governance and Management of
Australian Higher Education in the Market Place’, in A. Amaral, G. A. Jones and
B. Karseth (eds), Governing Higher Education: National Perspectives on Institutional
Governance (Dordrecht: Kluwer Academic Publishers), pp. 235–60.
Neave, G. (1992) ‘On Bodies Vile and Bodies Beautiful: The Role of “Buffer”
Organisations’, Higher Education Policy, 5(3), 10–11.
Neave, G. (1994) ‘The Policies of Quality: Development in Higher Education in
Western Europe 1992–1994’, European Journal of Education, 29(2), 115–34.
Neave, G. (1996) ‘Homogenization, Integration and Convergence: The Cheshire
Cats of Higher Education Analysis’, in V.L. Meek, L. Goedegebuure, O. Kivinen,
and R. Rinne (eds), The Mockers and the Mocked: Comparative Perspectives
on Differentiation, Convergence and Diversity in Higher Education (London:
Pergamon Press), pp. 26–41.
Neave, G. (1998) ‘The Evaluative State Reconsidered’, European Journal of
Education, 33(3), 265–84.
Neave, G. (2004) ‘The Temple and Its Guardians: An Excursion into the Rhetoric
of Evaluating Higher Education’, The Journal of Finance and Management in
Colleges and Universities, 1, 211–27.
OECD (2008) Assessment of Learning Outcomes in Higher Education: A Comparative
Review of Selected Practices (Paris: OECD).
OECD (2009a) Assessment of Higher Education Learning Outcomes (Paris: OECD).
OECD (2009b) Analytical Framework for the Contextual Dimension of the AHELO
Feasibility Study (Paris: OECD).
Pascarella, E.T. and Terenzini, P.T. (1991) How College Affects Students (San
Francisco: Jossey-Bass).
Pascarella, E.T. and Terenzini, P.T. (2005) How College Affects Students, Volume 2
(San Francisco: Jossey-Bass).
QAA (1998) ‘The Way Ahead’, Higher Quality, 4, October.
QAA Scotland (2008) Enhancement-Led Institutional Review Handbook: Scotland,
2nd edition (Mansfield: Scotland).
QAA (2009) ‘Guidelines for risk management (revised May 2009)’, http://www.
qaa.ac.uk/AboutUs/corporate/Policies/Documents/Risk%20management%20
guidelines.pdf (accessed 22 September 2013).
QAA (2011) ‘Institutional review of higher education institutions in England
and Northern Ireland: Operational description’, http://www.qaa.ac.uk/
Publications/InformationAndGuidance/Documents/ireni-operational-d.pdf
(accessed 22 September 2013).
Raban, C., Gower, B., Martin, J., Stoney, C., Timms, D., Tinsley, R. and
Turner, E. (2005) ‘Risk management report’, http://www.edgehill.ac.uk/aqdu/
files/2012/08/QualityRiskManagementReport.pdf (accessed 22 September
2013).
Rosa, M.J., Sarrico, C.S. and Amaral, A. (2011) ‘The perceptions of Portuguese
academics on the purposes of quality assessment’, paper presented to the
Annual Conference of CHER – Consortium of Higher Education Researchers,
Reykjavik, Iceland, 23–25 June.
Sappington, D.E.M. (1991) ‘Incentives in Principal-Agent Relationship’, Journal of
Economic Perspectives, 5(2), 45–66.
Schwarz, S. and Westerheijden, D. (2004) Accreditation and Evaluation in the
European Higher Education Area (Dordrecht: Kluwer Academic Press).
Scott, P. (1989) ‘The Power of Ideas’, in C. Ball and H. Eggins (eds), Higher
Education into the 1990s: New Dimensions (Buckingham: Society for Research
into Higher Education and Open University Press), pp. 7–16.
Shavelson, R.J. and Huang, L. (2003) ‘Responding Responsibly to the Frenzy to
Assess Learning in Higher Education’, Change, 35(1), 11–19.
Smith, R.L. (2000) ‘When Competition Is Not Enough: Consumer Protection’,
Australian Economic Papers, 39(4), 408–25.
Sursock, A. (2002) ‘Reflection from the higher education institutions’ point of
view: Accreditation and quality culture’, paper presented at Working on the
European Dimension of Quality: International conference on accreditation
and quality assurance, 12–13 March 2002, Amsterdam.
Tavares, D., Lopes, O., Justino, E. and Amaral, A. (2006) ‘Students’ preferences
and needs in Portuguese higher education’, paper presented at the 2006 EAIR
annual conference, Rome.
Trow, M. (1996) ‘Trust, Markets and Accountability in Higher Education:
A Comparative Perspective’, Higher Education Policy, 9(4), 309–24.
Van Vught, F. (2009) Mapping the Higher Education Landscape (Dordrecht:
Springer).
3
Quality Enhancement: A New
Step in a Risky Business? A Few
Adumbrations on Its Prospect for
Higher Education in Europe
Guy Neave

Introduction

What are the prospects and benefits, advantages and promise that the
application of Quality Enhancement and the advent of Risk Analysis
may both bring with them as new and significant additions to the
instrumentality of the Evaluative State? Like most issues that have to
do with the weighing up of Quality, and with the conditions and criteria
associated with valorising knowledge, the implications that follow from
the way the happy descriptor is operationalised and the implications
that in turn, flow from the process of operationalisation are, to say the
least, delicate. They are delicate, given the economic situation most of
our higher education systems currently confront. This situation they
have in varying degrees had to face over the past four years or more.
Even the most unbridled of economists can give no clear statement as
to how long the situation is likely to last.
One of the more salient features of the Evaluative State is the weight it
places on ‘policy as action’ as opposed to ‘policy as reflection’. Absence
of speed, failure to fall in with the expeditive ethic, we have been told
these 20 years past until we are all blue in the face, is manifest evi-
dence of inefficiency, of resistance to change, of obduracy in the face of
the beneficent workings of the Prince and his efforts to harness higher
education to speeding up the transition of our nations towards the
Knowledge Economy. Concentration on other than the immediate and
the short term is not always the essence of our business. And policy as
reflection tends sometimes to be seen as ‘swinging the lead’, as derelic-
tion of duty and an implied unwillingness wholeheartedly to embrace
the responsibilities the Prince wishes us to assume.

Yet, this is precisely our task: to step aside from ‘policy as action’.
Instead, we have the opportunity to examine what the main construct
for operationalising and developing quality, efficiency and enterprise
(Neave, 1988) – namely, the Evaluative State – has achieved. And within
that broader framework, to weigh up the significance that Quality
Enhancement and risk factoring hold out for shaping it further.
There are many ways we can move on this. I, for my part, will move
in from a long-term perspective to these issues by taking an historical
approach. Historians are sometimes useful for holding up such a mirror.
But this can also be a risky business. As with Caliban in The Tempest, the
holding up of mirrors tends to enrage those who see such reflections as
caricatures. Still, if you want to know where you ought to go, it is as well
to know how you came from where you have come. It is sometimes a
consoling experience.

The epidemiological temptation

Precisely because policy in the Evaluative State is increasingly time
enforced, reflection is a necessary step and never more so than today.
Without it, the higher education community sensu lato has little means
of resisting what is sometimes called the ‘Epidemiological Temptation’,
that is to rush to do something because others are doing it. Or to rush
because, if we do not do it, the accusation can be levelled at us by those
who do, that they are at the cutting edge of ‘policy as action’ and there-
fore ‘keeping up with “the competition”’. We, by the same token, are
not. Without reflection, however, the danger is very real that ‘policy
as action’ simply degenerates into ‘policy as psittacisme’ (parrot fever).
I will set out the development of the Evaluative State in Europe in
terms of four broadly chronological stages. These are:

Stage 1 Origins, scope and purpose;
Stage 2 Refining procedures, re-defining ownership;
Stage 3 Quality Enhancement: an evolutive dimension; and
Stage 4 Higher Education as a risky business.

This tour d’horizon interprets both Quality Enhancement and risk


factoring as successive stages that place new interpretations and open
the way to new insights on the Evaluative State. I will argue that Quality
Enhancement and risk factoring represent Stages 3 and 4. I will also
set out the specificities and identifying features of each stage. Prior to
this and as a general background, I will attend to the more significant
driving forces that came together to form the Evaluative State. I will
examine variations in the aims and purposes that different nations laid
upon their edition of the Evaluative State.

The dangers of over-focusing

The Evaluative State is the product of economic crisis. A close scrutiny
of the timing of the early moves towards the Evaluative State shows that
the setting up of agencies and procedures, norms and criteria by which
quality was defined, ascertained and identified was the outcome of
earlier economic difficulty. The press for greater efficiency and squeezed
higher education expenditure were amongst the most powerful driv-
ers at the onset of the Evaluative State in the late 1980s, above all in
the UK and the Netherlands. However, economic crisis was not the
whole picture. Even so, the point may be made that in such systems as
France, Spain and Portugal towards the end of the last century reining
in public expenditure also came eventually to play its part in shaping
the Evaluative State in those countries as well. However, in both the
UK and the Netherlands, New Public Management and the Neo Liberal
construct were both central to re-defining higher education’s purpose –
to respond to ‘the market’. They also shaped the instrumentality and
function of the agencies set up to verify higher education’s course. In
other words, higher education’s strategic purpose henceforth lay in
generating those skills and knowledge held to sustain the nation’s
competitive viability in a region destined for closer integration on
the one hand, and as a condition for successful transition towards a
‘knowledge-based’ economy on the other. In this, higher education had
a dual mandate: first, to put in place inner reform, and second, to sus-
tain that broader transition at nation state level and hopefully uphold
the nation’s well-being within the new conditions of the knowledge
economy (Heitor and Horta, 2011).
Put succinctly, the focus on quality and the verification of
performance – the central purpose of the Evaluative State – served to
‘steer’ higher learning towards what two of our colleagues term the
‘quasi-economic university’ (Teixeira and Dill, 2011) and another ‘the
entrepreneurial university’ (Clark, 1998, 2003).

The scope of our debate

The origins of the Evaluative State were coterminous with economic
crisis. Indeed, steering by quality and performance sought explicitly to
improve both adaptability of higher education to the market and the
speed of response by individual establishments. From this, it is safe to
conclude that, at the very minimum, higher education today, regard-
less of the particular variant of the Evaluative State set up by individual
nations, is more responsive by far than it was to the vagaries and for-
tunes of that market (Neave, 2012a, p. 207). To believe otherwise would
be a dangerous exercise in self-delusion as well as being a thoroughly
pessimistic interpretation of what has so far been achieved.

Exchange, borrowing and their assumptions

When we debate the implications of the procedures that have been
brought together under the rubric of Quality Enhancement, their
phasing and their source of origin are important. They can also serve
to distinguish stages in the dynamic rise of the Evaluative State. As a
descriptor, Quality Enhancement appears so far to be a development
rooted in the English-speaking world. Nor is it a coincidence that what,
from a European setting, may be seen as a third phase in the develop-
ment of the Evaluative State should hail from these parts. Nor is it
surprising that the nations longest wedded to the principle of ‘market
driven’ higher education should also be those with promising solu-
tions. They have had time to develop them, whereas those that want
to weigh up the importance of these measures seek to save time – once
again an example of the ‘expeditive ethic’ that lies at the heart of the
Evaluative State.
Take-over or borrowing from others has long been a feature of
comparative education. In the United States, this was the key con-
tribution made by Abraham Flexner, one of the founding fathers of
Comparative Higher Education, during the 1930s. Today, take-over
and borrowing have acquired a new intensity and a new status. They
fuel both globalisation and, nearer to our own doorsteps, are one of the
more powerful elements in the adventure Europe has been following
this quarter century past – namely, the creation of a continent-
wide, multinational higher education system. This is what both the
European Research Area and the European Higher Education Area are
about (Neave and Amaral, 2011, pp. 2–4; Neave and Veiga, in
press). Take-over and borrowing legitimate the process of globalisation.
They underline the importance of the European agenda by advancing
it. They demonstrate empirically the advantages one system may
derive from closely observing what others who hold themselves as
leaders are doing.

Higher education policy as complementarity

Clearly, what was once a marginal activity in education – the comparative
dimension – now fulfils a central function. Its centrality in turn reflects
a shift in education policy from comparison to complementarity,
from scholarly curiosity to practical issues posed by the movement
of students, researchers and staff across systems and between them.
Complementarity – that is, the capacity through exchange and colla-
boration of the apparent strength of one system of higher education
to assist in remedying the shortcomings of its neighbours and partners
(Neave and Amaral, 2011, p. 6) – makes a number of assumptions about
the nature of what is exchanged, just as certain assumptions are also
made by the designers of the instrument or procedure. As with any
exchange, the value of the ‘gift’ differs for he who contemplates accepting
it just as much as it does for he who offers it, regardless of what Portia
said to Shylock.

The blessings of giving and receiving

In theory, three sets of assumptions are made. The first, often argued
by international agencies, is that individual techniques, procedures and
practices are themselves ‘value neutral’ on the grounds of their objec-
tive or quantifiable nature. A variation on this line of argument holds
that such items must necessarily be introduced because they show a
proven efficiency in fulfilling the successful attainment of objectives in
one system that a second seeks to attain. It is more blessed to receive
than to give. Not surprisingly, there is a negative version to this cal-
culus, namely, that if one is not blessed by the receiving, one is most
certainly cursed if one rejects it. This latter line of persuasion is often
brought to bear in urging individual nations further down the path of
the Bologna and Lisbon agendas. It is known in the trade as ‘naming
and shaming’ (Gornitzka, 2007). Finally, there are the assumptions the
donor implicitly makes. These assumptions are not greatly dissimilar
from those made by the receiver, with one additional and very con-
siderable one taken for granted. Precisely because the ‘donor’ holds
himself to be successful, he presumes that in part such success may be
attributed to the practices, instrumentality and ways of proceeding he
has devised and which are ‘tested and proven’. They have made him
primus – or secundus – inter pares. So the same happy outcome – or, the
avoidance of continued national ignominy – will follow as a result of
others following his example. This is determinism of a very high order.

What the donor tends to play down in this higher education version of
‘la mission civilisatrice’ is that the practices he offers are themselves the
outcome of negotiations that rested on cultural, political, historic and
for that matter economic norms that underlie and permeate his own
higher education system. No less important, it is precisely these norms
and the margin of manoeuvre they permit that shape the way decisions
were reached in the first place (Neave, 2012b, p. 158). They are not
always the same elsewhere.

Quality eternal and the Evaluative State

It is a truism of the most elementary kind to say that the quality of
teaching and learning has always been the European university’s con-
stant concern from the earliest of times. Even in the Middle Ages, who
had the right to found and establish universities, who awarded the
status of ‘recognised teachers’ and who conferred on individual uni-
versities the privilege of awarding recognised degrees was the subject
of bitter acrimony and mutual disregard between Pope, Holy Roman
Emperor, Princes and later nations (Nardi, 1992, pp. 77–105). So it
remains today. Quality lies at the operational heart of the Evaluative
State. For that reason, it shapes a very particular relationship between
higher education, government and society. Technical procedures, indi-
cators and instruments of judgment, objective though they are in the
way they are applied, also serve a broader purpose. Procedures uphold
existing patterns of authority. They may equally set up new ones. Either
way, how a procedure is made to operate and the ends to which it drives
set the outer bounds to the relationship between university and the
collectivity – be it economic or social.
Once we go beyond Quality Assurance and see Quality Enhancement
as a sub-set, or as an ‘add on’, new interpretations emerge. What may
be interpreted in terms of procedure and practice as convergence – the
shibboleth of both the Bologna Process and the Lisbon Agenda – takes
on an unwonted variety. Ex uno plures rather than e pluribus unum. It
does not necessarily follow that taking on similar procedures means we
have similar purposes in mind. Still less does it mean that the political
values or economic priorities that drive the Evaluative State forward
in one country have similar weight in another. We do well to give
some attention to the macro-economic and political circumstances
that accompany the unveiling of Quality Enhancement. This aspect
is no less important in trying to weigh up the possible impact and
consequence in a system of higher education where cultures, whether
political or academic, do not necessarily share the same vision or show
the same degree of consensus that Quality Enhancement apparently
commands in its countries of origin.

Part two
Stage 1: origins, scope and purpose

When we examine the early moves towards the Evaluative State, we are
struck by the marked differences in rationale that drove it
forward as well as the differences in strategic scope and purpose. The
construction of the Evaluative State mirrored the quest for quality, effi-
ciency and enterprise in higher education (Neave, 1988, pp. 7–23). True,
no other European State went as far as Portugal did by nailing the flag
of quality assessment to the mast of higher education and including it
in the Constitution of 1976 under the heading of article 76 paragraph 2
(Amaral and Carvalho, 2008). Thus, arguably Portugal’s drive towards
the Evaluative State, which incidentally began at Porto almost 20 years
ago, built on a degree of formal continuity largely absent in France,
Britain and the Netherlands. Until the promulgation of the Portuguese
Higher Education Guideline Law of 2007, which reorganised Portugal’s
universities and polytechnics around the tenets of Neo-Liberalism – that
is, competitive deflation, market flexibility and de-regulation (Gayon,
2012) – shaping the Evaluative State sought more to improve estab-
lished patterns of authority in higher education. The beginnings of the
Portuguese Evaluative State were far more ‘Root and Branch’ in nature.
Likewise with the earliest example of the drive towards the Evaluative
State – in France. The first step was the creation, in 1984, of the
Comité National d’Evaluation – an independent body reporting not to
the Minister in charge of Higher Education, but to the President of
the Republic. The CNE was the launchpad for the French Evaluative
State and introduced systematic external review of higher education
(Staropoli, 1987, pp. 127–31). Whilst its purpose was very clearly to
‘enhance quality’ its objective did not, as current English and Scottish
initiatives propose, focus on those aspects, which elsewhere fall under
the rubric of Hochschuldidaktik. Rather, the French interpretation of
enhancing quality focused on the ‘delivery’ of new courses to meet
spiralling student numbers and greater diversity of demand in a sys-
tem that was rapidly moving from mass to universal higher education.
Initial priorities sought to enhance the quality of provision by speeding
up the rate of delivery.

Similar procedures, different aims


In their early moves towards the Evaluative State, both France and
Portugal shared a broadly similar concern: to increase knowledge about
higher education, and make it publicly available and up to date.
In Portugal, the Rectors’ Conference, which set the pace, saw quality
assessment from two perspectives: first, to consolidate the pedagogical
autonomy that the government had conferred on public sector universi-
ties in 1988; second, to improve the efficiency of national policy-making
by providing national authorities with that detailed information on
institutional development and achievement hitherto lacking (Neave
and Teixeira, 2012, p. 26). Quality evaluation was thus a lever for
improving ministry efficiency (Neave, 2012a, p. 129).
In France, institutional review and evaluation aimed at a different
target group and had a different purpose: to provide empirical and
grounded examples of how other establishments were currently meet-
ing changes in student demand. Public reviews of quality were more
a gentle prodding of the Academic Estate to be more adventuresome,
an Academic Estate that was hopefully to be encouraged by knowing
what other French universities, grandes écoles and instituts universitaires
de technologie were doing already. The prototype version of the French
Evaluative State sought to encourage greater boldness at the insti-
tutional level to speed up the pace of adjustment at the sector level
(Neave, 2012b, p. 194). The essential purpose was to open a ‘lateral’ flow
of public information between institutes of higher education.
Neither in France nor in Portugal were the first steps towards the
Evaluative State drawn from arguments based on Neo Liberalism or New
Public Management. On the contrary, French legislation that launched
what was to become the Evaluative State dismissed both privatisation
and the notion of higher education as a ‘consumer good’. Rather, higher
education was represented as it had long been, as a ‘public service’ under
the responsibility of the state. The legislator was excruciatingly careful
to retain ‘the established rights’ (droits acquis) of both Academic and
Student Estates, as well as the basic principle that underpinned internal
governance – participant democracy (Neave, 2012a, pp. 70–73). That
neither Neo Liberal doctrine nor New Public Management figured in
the theoretical underpinning of either the Portuguese or French model
of the Evaluative State in its first stage is significant. The Evaluative
State displayed clear national differences and priorities. Nor were
Portugal and France the only examples of alternatives to the Anglo-
Saxon strain of the Evaluative State. Neither Spain (Neave, 2012a,
pp. 105–18) nor Italy (Veiga, Amaral and Mendes, 2008, pp. 53–67) drew
on Neo Liberalism or New Public Management as the central constructs
beneath the Evaluative State, as had been the case in the UK and the
Netherlands.

Stage 2: refining procedures, defining ownership

The second stage in shaping the Evaluative State, both in Europe and
Great Britain, involved two dimensions: the internal refinement of pro-
procedures entailed a detailed and systematic review of individual HEIs,
a painstaking, time-consuming and costly procedure. In France, institu-
tional review was extended to cross-system reviews of disciplinary areas
and higher education’s performance in particular regions. In retrospect,
Stage 1 was an exercise in mapping out, identifying and validating a
limited number of indicators that were both discriminatory – in the
precise meaning of that term – and sensitive. The burden of Stage 2
was to set in place benchmarks or standards of expected performance.
Certain systems, such as Sweden, also saw proposals to ‘lighten the
review cycle’ and convert it into ‘alert system’ for identifying and fully
examining only those establishments that showed obvious difficulty
(Högskolverket, 2005).

Ownership: from ‘honest broker’ to Evaluative State


Although individual cases will certainly show differences in chronology,
the question of ownership or the administrative locus of the Evaluative
State likewise marks the transition from Stage 1 to Stage 2. In both
Britain and Portugal, the first steps on the road towards the formally
organised Evaluative State were made by university leaders. In Britain
this was the Committee of Vice Chancellors and Principals, in the
shape of the Jarratt Report of 1985. In Portugal it was the Conference
of Portuguese Rectors (Neave, 2012a, pp. 131–35). In France, definition
of ownership followed a different route. Ownership was vested in the
Comité National d’Evaluation, and its independence underpinned by the
Comité, which reported on a two-yearly basis, not to the Ministry, but
to the Head of State (Neave, 1996, pp. 61–88). In all three instances,
their initial profile resembled that of an ‘honest broker’ rather than
that of Principal in a Principal/Agent relationship (Neave, 2012a, p. 195).
In Portugal, the honest broker model built explicitly upon pedagogic
autonomy, which five years previously had been conferred on public
sector universities.

Key to Stage 2 in the saga of the Evaluative State was the ‘relocation of
ownership’. This took place relatively speedily in the UK, but was more
protracted in France and Portugal. It saw the placing of responsibility
for refining assessment procedures into Agencies of Public Purpose: the
British Quality Assurance Agency in 1997, and the Portuguese Conselho
Nacional de Avaliação do Ensino Superior in 1998. In France, the gradual
ousting of the Comité National d’Evaluation from its original status of
relative independence was cautious and incremental. Nevertheless, its
merger in 2007 with the Agence d’Evaluation de la Recherche
et de l’Enseignement Supérieur (AERES) effectively moved it back into the
national process of policy formation rather than standing as honest
broker to one side of it (Neave, 2012a, p. 198).
In both France and Portugal, redefinition of ownership and its
administrative location assumed the weight of law: in France with the
Law of 10 August 2007, which reorganised the ‘new university’, and
in Portugal, exactly one month later, with the passing of the Higher
Education Guideline Law. From the standpoint of the adepts of Neo
Liberalism and more explicitly, New Public Management, here were
very satisfactory examples of ‘sinners come to repentence’. Legislation
moved these two systems firmly on to Stage 2 in the development of
the Evaluative State.

The significance of the Evaluative State


I have spent a little time developing the thesis that different nations
attach different priorities and purposes to Quality Assurance. In effect,
the Evaluative State is far from monolithic. On the contrary, the fact
that one nation devises a generically similar procedure to another does
not necessarily determine the way it is perceived or the purpose to which it
is put. There remains one final question before Quality Enhancement
and the proposal to include ‘risk management’ as further procedures to
underpin quality are addressed: what has the Evaluative State achieved
so far? Where does it fit within that central task of government and
national administration that the late Burton R. Clark described as
‘system coordination’ (Clark, 1983) and is today presented in terms of
‘system steering’?
The answer lies in the question, namely the shift of system oversight
from coordination to steering. In Continental Europe – but not in the
United Kingdom – coordination was primarily grounded in elaborate
legal codification, regulation and oversight exercised by a central – or, in
the case of Germany, provincial and federal – ministry. The ‘State control’
model of coordination rested on the ‘principle of legal homogeneity’.
As the term implies, legal homogeneity applied uniformly across a
particular higher education sector or institutional type (Neave and van
Vught, 1991). It rested on a number of assumptions. Prime amongst
them were:

1. that legal intent was reflected in institutional reality;
2. that change and adaptation at institutional level took place as part
of an internal organic process that proceeded from within the twin
‘freedoms’ of teaching and learning (Wissenschaft und ihre Lehre
sind frei. Lehr- und Lernfreiheit);
3. that major system adjustment, which by definition demanded legis-
lation, was both exceptional and worked out over a period of ten to
fifteen years (Neave, 2012a, p. 13).

Few of these assumptions have survived the advent of the Evaluative
State. There is, however, a more nuanced interpretation: namely, that
the Evaluative State, with quality assessment and quality assurance as
its prime operational instruments, did not reject these assumptions,
but converted them into hypotheses that required regular empirical
verification. In short, legal intent was no longer construed as necessarily
eliciting immediate institutional ‘take up’. Nor could the burden of
proof for institutional response be left unattended once higher educa-
tion’s mission had been redefined as meeting the immediate needs of
the market and upholding the nation’s competitive stance.

The Evaluative State: what it has done


As the prime vehicle for measuring quality, what has the Evaluative
State brought about? This is a necessary backdrop to our discussing
Quality Enhancement. From an historical perspective and one focused
on mainland Europe, the impact of the Evaluative State falls across three
domains. First, its procedures have revealed – and in high detail – an
institutional dynamic that, whilst doubtless present before, could not
adequately be taken fully and rapidly into account in systems of higher
education wholly dependent on the workings of legal homogeneity.
Understanding higher education’s dynamics no longer rests on the
presumptions explicit in legal codification. Second, irrespective of the
particular variant that individual systems of higher education have set
in place, the Evaluative State turns around the notion of conditionality
(Chevaillier, 2004). Like the process of evaluation on which conditionality is based, conditionality is bounded by standardised indicators
of expected performance, mediated across different sectors of higher
education (Santiago, Carvalho, Amaral and Meek, 2006, pp. 139–48);
that is, benchmarking. Third, the Evaluative State, as well as standardis-
ing measures of performance also standardises the period over which
they are assessed (Neave and Teixeira, 2012, p. 39) in the form of the
review cycle.

A supplementary instrument: the state evaluative and evolutive


From a long-term perspective, the impact of the Evaluative State is this:
it has supplemented an historic mode of administrative control over
higher education – legal codification – with a second system of over-
sight. This second system turns around time-restricted performance and
its verification in which the overall goals of higher education are driven
primarily by forces external to higher education – competitive demand,
the provision of services to the knowledge economy and the creation of
those ‘skills’ deemed necessary for the well-being of that economy. Last
but not least, is the generation of appropriate knowledge as the basic
capital in that self-same construct of the knowledge economy. To the
historic principle of legal homogeneity, the Evaluative State added an
immensely powerful instrumentality in parallel with legal codification.
It is, moreover, an instrumentality that may be made to have its own
internal dynamic by setting and, if necessary, resetting or adding to the
range of activities for which accounts may be demanded and rendered.
In short, the Evaluative State has created a second form of homogeneity.
This second form is both evaluative and evolutive.

Stage 3: Quality Enhancement: an evolutive dimension

Quality Enhancement may be seen as quintessential to that evolutive process. Like ET, the extra-terrestrial, however, it is not alone. Throughout this tour d’horizon I have given particular weight to the ‘evolutionary
dynamic’, in terms of ownership, the drive towards benchmarking and,
in one or two instances, the development of ‘lighter’ evaluation tech-
niques. It is not coincidental that the concept of Quality Enhancement
should emerge from English-speaking systems. Clearly, Quality
Enhancement – defined as ‘deliberate steps at institutional level to
improve the quality of learning opportunities’ (QAA, 2008, para. 4.4.18, p. 19; emphasis added) – represents a form of accounting to the Student Estate and as such is newly targeted. There is good reason for such a
step. England and Wales have been amongst the earliest to move ‘cost
shifting’ from the public purse to the individual’s pocket, irrespective of
whether it is a case of ‘cash on the barrelhead’ or a life-time levy. Whilst
the same principle is now recognised by most systems of higher education
in mainland Europe today, few if any public sector establishments have
undergone so massive an increase in student fees as has been the case in
England from 2005 onwards, let alone the hike in 2012. This is not to
say that the current crisis, above all in Southern Europe, may not force
governments further in the same direction.

Issues posed by Quality Enhancement comparatively viewed


Precisely because Quality Assurance is firmly embedded in the UK,
Quality Enhancement may be seen as a logical outgrowth and follow
up to it. In Portugal, however, the legitimacy that comes from ‘the
embedded practice’, whilst rapidly acquiring weight and substance, is
less mature. In its present configuration, the Evaluative State in Portugal
has been in place for four years at the most. The potential Quality
Enhancement has as a policy option depends on how far what it sets
out to achieve – or to rectify – has been achieved in its pays d’origine.
It also depends on how far it may be made to serve similar goals by
those envisaging its importation.
Here again, as I have stressed throughout this chapter, it is important
to distinguish clearly between Quality Enhancement as a technical
procedure and the circumstances and ‘domestic values’ into which it
is inserted (Neave and Teixeira, 2012, pp. 49–52). Implementation will
be particularly important for our discussion. So too will be the impact
it is hoped the procedures will have as against those they do have. This
latter aspect is crucial. ‘Time-based’ procedures, which are an identify-
ing characteristic of the Evaluative State, set great store on the rapid
conversion of intent into established practice.
There are cogent reasons for drawing a careful line between procedure
as policy and the immediate context into which it is launched. Such
considerations are, potentially, highly significant in Portugal. They
call for great care to be taken so the changes in structure, process and
responsibility that may follow from taking up Quality Enhancement
are not associated with measures of austerity, above all by the Academic
Estate. Certainly, ‘knowledge management’ figures as l’air du temps in
higher education generally and in science policy very especially (Heitor
and Horta, 2011, pp. 179–226). Whilst knowledge management may
not necessarily invoke the delicate issue of authority re-located, it poses
the issue of the de facto balance of responsibility between Academic
and Administrative Estates. Is knowledge to be managed by ‘knowledge
communities’ or by management ‘professionals’ extending their remit
to manage knowledge as opposed to servicing it?
A recent report from England, which examined how a number of
universities aligned different activities around Quality Enhancement,
noted that no single definition of Quality Enhancement emerged (QAA,
2008, paras 2.9; 4.3.6). Clearly, many paths lead to salvation.

Two theoretical perspectives on Quality Enhancement


From a Portuguese perspective, the current state of Quality Enhancement
in England suggests the blooming of a thousand flowers. It is greatly
encouraging and a clear pointer to what Clark alluded to as ‘the
bottom-heavy nature’ of higher education (Clark, 1983). Such variety
seemingly provides an empirical – though retroactive – justification for
strengthening institutional autonomy which the Higher Education
Guideline Law of August 2007 brought about. There is, however, a fur-
ther interpretation of ‘Quality Enhancement’. That is to see it within
the analytical framework Clark devised in his pioneering study of the
‘entrepreneurial university’ (Clark, 1998). Seen from the perspective of
‘innovative periphery’ and ‘central driving core’, one would be justified
in drawing the conclusion that the ‘innovative periphery’ in British
universities today shows great variety in the ways it displays innovation
and creativity.

From product to process


Quality Enhancement, seen as Stage 3 in the evolution of the Evaluative
State, marks a further step in that broader thrust of bringing into the
public arena that ‘private, implicit knowledge’ that higher learning has
of itself. This step is noteworthy. It appears to enlarge the remit of the
Evaluative State from the identification and verification of product –
or outcome – into the domain of process. The Evaluative State is today
concerned not just with what is done but also with how it is done.
Quality Enhancement is, then, a further and excellent illustration of
the Evaluative State’s evolutionary dynamic. From the standpoint of an
earlier paradigm, by adding Quality Enhancement to Quality Assurance,
the Evaluative State moves firmly into what was once deemed ‘the
private life’ of higher education (Trow, 1976, pp. 113–27), namely teach-
ing, regardless of whether it was routine or innovative, inspiring or
tedious. From the perspective of Principal–Agent theory, the Evaluative
State qua Principal is clearly engaged in redefining both its remit and,
in so doing, extending the information it requires from higher learning
qua its Agent (Dill, 1997).
Others will give a more focused account of this development. The
question I want to pose, however, has to do with ‘the inherent dynamic’
that has driven the Evaluative State forward over the past two decades.
What light does this sustained dynamic shed on Quality Enhancement?
Is the way Quality Enhancement is currently construed necessarily the
last word to be had on it? This is highly unlikely, above all in times of
unprecedented economic crisis. There are two very good reasons for tak-
ing this view. The first stems from the intelligence the Evaluative State
now possesses about the immediate and present condition of higher
education. The second is a derivative function of the first: namely, the
intelligence available about the state of higher education may also be
used to weigh up and assess the appropriateness of national policies that
have brought it to this condition (Neave, 2012a, pp. 138–39). Whether
Quality Enhancement can be seen as a remedy to previous oversights,
others are better placed than I to give an answer.

Stage 4: Higher Education as a risky business

In Stages 2 and 3 the Evaluative State forged new instruments for plot-
ting performance, output, institutional achievement and cost, and in
certain instances tracked student transition from higher education to
work. Such a battery of instruments serves various agendas: accountability, checking ‘the reaction time’ to national priorities, and providing a back-channel for indirect ‘steering’.
New instruments do not just bring new insights and new norms of
institutional performance. They also bring with them new perceptions
of higher education as well as embedding them in higher education’s
discourses. They provide a new account and thus new explanations for
institutional behaviour. As a potential instrumentality and as a new
lease of life – or prospect of death – in the groves of academe, risk taking
opens a new and hitherto unbroached possibility: that of institutional
failure.
It is unkind to point out that in their intent to open up higher edu-
cation completely to ‘the market’, government and its advisers also
admit that failure is the price that may have to be paid. Unkind though
it is, higher education, like the Scout, must ‘be prepared’. The cynic
will point out that it is less devastating for institutions to fail in a fully
market driven system, than to have them fail in a system partially sup-
ported by public finance. Institutional failure is after all proof of the
purgative effects of competition. But whether the government can be
made responsible for the débâcle is far less evident in a higher education
system fully driven by market forces than when higher education is
financed from public pennies.
The salient feature of risk taking is not that it stands as yet another
example of grafting techniques and a dead vocabulary, forged in the
corporate sector, onto higher learning and research. Once risk taking as
a technique and as an instrument is injected into higher education,
institutional failure no longer reflects the inadequacies of public policy. It
reflects, rather, the incompetence of the individual university, its leader-
ship, its teaching staff, its ‘goods and chattels, ox, ass, man servant, maid
servant and all that in it is’. If the English government’s avowed intent
to proceed to a fully market driven system is taken in conjunction with
risk taking as an institutional responsibility extended to the academic
domain, from a broader strategic perspective the juxtaposition takes on
all the dimensions of a ‘damage limitation exercise’. If some institutions
fail – and it would be exceedingly good to know what the operational
definitions of failure are – others will nevertheless succeed. Responsibility
for failure falls on the individual institution, not on the consequences
of national policy. Thus, risk taking fences off institutional failure from
policy failure. Instead, the responsibility for the situation national policy
creates is ‘offloaded’ onto precisely those individual institutions least
able to deal with the situation that policy has created.
Still, from an historical perspective, risk calculus has immense symbolic
importance. This lies in the final evaporation of that vital optimism that
has driven higher education forward over the past 50 years. Optimism is
now in cold storage for at least the next evaluatory cycle. With the sober
contemplation of failure, we have also to contemplate what in France is
known as ‘la fin des trente glorieuses’. The three ‘golden decades’, from 1950
to 1980, are definitely over. How long the blizzard will last, not even the
most canny economist or hedge fund director will hazard an opinion.
Risk calculus, I would suggest, is redolent with technocratic pessi-
mism. Realistic it might be; unavoidable, even. But by admitting the
possibility of institutional failure, we turn our backs on the 50-year
adventure that drove higher education onward and upward. Whether
risk calculation is another way of using the market to ration higher
education, only time will tell. From now on, higher education is indeed
a risky business.

References
Amaral, A. and Carvalho, T. (2008) Autonomy and Change in Portuguese Higher
Education (Matosinhos: CIPES).
Becher, T. (1989) Academic Tribes and Territories: Intellectual Enquiry and the
Cultures of Disciplines (Milton Keynes: Open University Press).
Chevaillier, T. (2004) ‘The Changing Rôle of the State in French higher educa-
tion: from Curriculum Control to Accreditation’, in S. Schwartz-Hahn and
D. Westerheijden (eds), Accreditation and Evaluation in the European Higher
Education Area (Dordrecht: Kluwer Academic Publishers), pp. 159–74.
Clark, B.R. (1983) The Higher Education System: Academic Organization in Cross-
National Perspective (Berkeley, Los Angeles and London: University of California
Press).
Clark, B.R. (1998) Creating Entrepreneurial Universities: Organizational Pathways of
Transformation (Oxford: Elsevier for IAU Press).
Clark, B.R. (2003) Sustaining Change in Universities: Continuities in Case Studies and
Concepts (Milton Keynes: Open University Press for SRHE).
Dill, D.D. (1997) ‘Focusing Institutional Mission to Provide Coherence
and Integration’, in M. Peterson, D.D. Dill and L. Mets (eds), Planning
and Management for a Changing Environment (San Francisco: Jossey-Bass),
pp. 171–90.
Gayon, V. (2012) ‘Le château de La Muette: enquête sur une citadelle du con-
formisme intellectuel’, Le Monde Diplomatique, July.
Gornitzka, Å. (2007) ‘The Lisbon Process: A Supra National Policy Perspective’,
in P. A.M. Maassen and J.P. Olsen (eds), University Dynamics and European
Integration (Dordrecht: Springer Books), pp. 55–178.
Heitor, M. and Horta, H. (2011) ‘Science and Technology in Portugal: From
late Awakening to the Challenge of Knowledge-Integrated Communities’, in
G. Neave and A. Amaral (eds), Higher Education in Portugal: A Nation, a Generation
1974–2009 (Dordrecht and Heidelberg: Springer Books), pp. 179–226.
Högskolverket (2005) The Evaluation Activities of the National Agency for Higher
Education in Sweden. Final Report by the International Advisory Board (Stockholm:
Högskolverket).
Jarratt Report (1985) Steering Committee for Efficiency Studies in Universities
(Chairman Sir Alex Jarratt) (London: CVCP).
Moodie, G. and Eustace, R. (1985) Power and Authority in British Universities. The
Development of Higher Education into the 1990s (London: HMSO).
Nardi, P. (1992) ‘Relations with Authority’, in H. de Ridder-Symoens (ed.),
A History of the University in Europe, Volume 1, Universities in the Middle Ages
(Cambridge: Cambridge University Press), pp. 280–306.
Neave, G. (1988) ‘On the cultivation of quality, efficiency and enterprise: An
overview of recent trends in higher education in Western Europe 1986–1988’,
European Journal of Education, 23(2/3), 7–23.
Neave, G. (1996a) ‘The Evaluation of the Higher Education System in France’, in
R. Cowen (ed.), World Yearbook of Education 1996: The Evaluation of Systems of
Higher Education (London: Kogan Page), pp. 66–81.
Neave, G. (1996b) ‘Homogenization, Integration and Convergence: The Cheshire
Cats of Higher Education analysis’, in V. Lynn Meek, L. Goedegebuure,
O. Kivinen and R. Rinne (eds), The Mockers and Mocked: Comparative Perspectives
on Differentiation, Convergence and Diversity in Higher Education (Oxford:
Pergamon), pp. 26–41.
Neave, G. (2012a) The Evaluative State, Institutional Autonomy and Re-engineering
Higher Education in Western Europe: The Prince and His Pleasure (Basingstoke and
New York: Palgrave Macmillan).
Neave, G. (2012b) ‘Contrary Imaginations: France, Reform and the California
Master Plan’, in S. Rothblatt (ed.), Clark Kerr’s World of Higher Education Reaches
the 21st Century: Chapters in a Special History (Dordrecht: Springer), pp. 129–61.
Neave, G. and Amaral, A. (2011) ‘On Exceptionalism: The Nation, a Generation
and Higher Education, Portugal 1974–2009’, in G. Neave and A. Amaral (eds),
Higher Education in Portugal: A Nation, a Generation 1974–2009 (Dordrecht:
Springer), pp. 1–48.
Neave, G. and Teixeira, P. (2012) ‘Driving Forward: Alberto Amaral and the
Comparative Dimensions in Portugal’s Higher Education Saga’, in P. Teixeira
and G. Neave (eds), Alberto Amaral. Um Cientista entre a Academica e a Agora
(Porto: University of Porto Press).
Neave, G. and van Vught, F. (1991) Prometheus Bound: The Changing Relationship
between Government and Higher Education in Western Europe (Oxford: Pergamon
Press).
Neave, G. and Veiga, A. (2013) ‘The Bologna Process: Inception, “take up” and
familiarity’, Higher Education, 66, 59–77.
Quality Assurance Agency (2008) Quality Enhancement and Assurance: A Changing
Picture (London: QAA, HEFC England).
Staropoli, A. (1987) ‘The Comité National d’Evaluation: Preliminary results of a
French experiment’, European Journal of Education, 22, 123–131.
Teixeira, P. and Dill, D.D. (2011) ‘The Many Faces of Marketization in Higher
Education’, in P. Teixeira and D.D. Dill (eds), Public Vices, Private Virtues?
Assessing the Effects of Marketization in Higher Education (Rotterdam: Sense
Publishers), pp. vii–xxi.
Trow, M. (1976) ‘The Public and Private Lives of Higher Education’, Daedalus,
104(1), 113–27.
Trow, M. (1996) Trust, Markets and Accountability in Higher Education:
A Comparative Perspective, Research and Occasional Papers, 1.96, Berkeley,
Centre for the Study of Higher Education.
Trow, M. (1998) ‘American Perspectives on British Higher Education under
Thatcher and Major’, Oxford Review of Education, 24(1), 111–29.
Veiga, A., Amaral, A. and Mendes, A. (2008) ‘Implementing Bologna in Southern
European countries: Comparative analysis of some research findings’, Education
for Chemical Engineers, 3(1), 47–56. doi: 10.1016/j.ece.2008.01.004.
Veiga, A., Rosa, M.J. and Amaral, A. (2011) ‘Understanding the Impacts of
Quality Assessment: An Exploratory Use of the Cultural Theory’, Quality in
Higher Education, 17(1), 53–67.
Part II: New Challenges, New Instrumentalities

4 Transparency about Multidimensional Activities and Performance: What Can U-Map and U-Multirank Contribute?
Don F. Westerheijden

Introduction

The question of whether higher education should actively strive for transparency is all but rhetorical. Of course it should. But why is it
nevertheless such a debatable issue? In this chapter I intend to show the
principles and preliminary contributions of two transparency instru-
ments that respond to much of the criticism of conventional league
table rankings. To do so, I first need to make a brief excursion into the
character of higher education and how that affects ideas of diversity,
process and performance.
All organisations need support, or income, from their environment.
How can they convince their environment to give them the money and
other resources they need for survival? Economics textbooks, based on
archetypical businesses producing goods, have an easy answer to the
question of transparency: businesses can let their products speak for
themselves and do not have an immediate need to be transparent about their production processes. The Dutch say that one should
not watch a butcher making sausages. Higher education, on the other
hand, if it is an ethical ‘business’ and even more if it is a public good,
must undertake efforts to be transparent about what it does in and for
society. Efforts are especially needed because education cannot let its
‘products’ speak for themselves. Higher education’s ‘products’ include
for instance discoveries, inventions and innovations, most of which can
be made visible – mostly in the sciences and in applied fields, but some-
times even in the humanities (Bod, 2010). Some of these ‘products’, for
example publications, can only be enjoyed or fully valued by academic
peers or higher education graduates, which shows the existence of private benefits to achieving a higher education (next to other obvious
private benefits such as a higher income).
However, some of the ‘products’ only become visible in the long
term; for instance, the competencies of graduates that go beyond
immediately usable skills will only become apparent during their later
lives (for example, their ability to learn, or their leadership qualities or
civic attitudes). Especially with those competencies that appear in the
long term, it may be questionable whether they were ‘caused’ by higher
education or whether they were characteristics of the graduates that
were more or less independent from their attending higher education.
Accordingly, higher education is not a standard, economics textbook
‘inspection good’ whose quality can be determined in advance. Rather it
is an ‘experience good’, the quality of which can only be determined by
the individual user after the fact, or even a ‘credence good’, the quality
of which cannot be assessed even after the fact, because benefits may
also be caused by other factors (Dulleck and Kerschbamer, 2006).
In any case, higher education must be transparent about what it
does, be it for direct survival (resources) or for legitimacy, yet it is not
an easy task to make audiences aware of what higher education does
without falling into the trap of gross oversimplification. Without addi-
tional argumentation, readers will hopefully believe that tools made for
other purposes, such as quality assurance and accreditation, were not
designed primarily to inform multiple audiences. Quality assurance is
first of all a tool for accountability towards governments and secondly
meant to inform higher education institutions about possible quality
enhancements. Accreditation is an information tool, but it is rather
simplistic, as it usually only distinguishes sufficient quality from insuffi-
cient (accredited vs non-accredited status) – without giving easily acces-
sible information about what is actually understood by ‘quality’ in the
accreditation process. Rankings and quality assurance mechanisms are
complementary instruments. Rankings represent an external, quantita-
tive view of institutions from a transparency perspective.
The term ‘audiences’ is used in the plural for a reason: different
stakeholders, like the blind men meeting the elephant, encounter
different parts – different ‘products’ – of higher education. For instance,
prospective students engage with the education function of higher edu-
cation institutions’ separate programmes of study; employers encounter
individual graduates; policy-makers see separate institutions but also
see higher education landscapes in regions and countries. Accordingly,
these categories of stakeholders have different experiences and different
information needs. As a result, the real questions for higher education
are: how much transparency, about what, to whom, and for what
stakeholder purposes? It seems tautological that with different stake-
holders, there will not be a single answer to these questions. In practice,
that means that there cannot be a single transparency tool that satisfies
all stakeholders’ information needs.
The transparency tools that we discuss in this chapter, U-Map and
U-Multirank, aim to be multiple tools to different users, packaged in
single ‘engines’. To understand their difference from current ranking
systems, we need to investigate briefly the basic concept of diversity of
higher education first, and remind ourselves of some basics of process
analysis next.

Excursion to concepts

Diversity
Usually, if no further explanation is given, diversity in higher educa-
tion is understood in a vertical sense. ‘Better’ or ‘worse’ is emphasised,
leaving in the dark – more or less – whether this is about prestige,
activities or performance, or a mix of all three at the same time. As
a result of vertical differentiation, rankings are likely to contribute to
wealth inequality and expanding performance gaps among institutions
(Van Vught, 2008). On the one hand, rankings and especially league
tables purport to show inequality among institutions that would be
hard to distinguish otherwise; universities are created equal (legally)
and regulation as well as funding formulae often aim to maintain this
‘legal homogeneity’ (Neave, 1995). On the other hand, rankings try to
create artificial lines, showing that like is not alike, which implies the
danger of becoming institutionalised and thus creating real differences
(Espeland and Sauder, 2007). Similarly, rankings have exacerbated com-
petition for the leading researchers and best younger talent, and are
likely to drive up the price of high-performing researchers and research
groups (Marginson, 2006), making them financially affordable only for
the richest institutions. The conceptual framework focusing on vertical
differentiation therefore creates a ‘Matthew effect’ (Matthew 13:12); that
is, a situation where already strong institutions are able to attract more
resources from students (for example, increase tuition fees), government
agencies (for example, research funding) and third parties, and thereby
strengthen their market position even further. Hazelkorn has shown
that policy-makers and institutional managers react to rankings –
whatever their merits and demerits – in ways conducive to creating a
‘Matthew effect’ (Hazelkorn, 2011).
The lure of league tables as the most common form of ranking is that
they promise a simple way to show which institutions are the best. For
consumers of information this is enticing indeed, because this form of
information is highly efficient (1 is better than 2, which is better than 3,
and so on) and does not demand a high investment of time and effort
on the users’ side to understand how higher education institutions are
working.
In the way current rankings are created, indicators of research
productivity are expressed in terms of journal articles registered in international
databases (largely ignoring books and other products of research, as well
as most articles not written in English). As a consequence, the existing
‘[g]lobal rankings suggest that there is in fact only one model that can
have global standing: the large comprehensive research university’ (Van
der Wende and Westerheijden, 2009), focused on hard science fields that
adhere to the communication model that focuses on English-language
journal articles. This leads me to the concept of horizontal diversity,
which stresses similarities and differences in institutional missions and
profiles, expressed in, for example, different mixes of disciplines and
study programmes.
Transparency about horizontal diversity aims to group higher educa-
tion institutions by developing nominal distributions among a number
of classes or characteristics without any (intended) order of prefer-
ence. Classifications give descriptive categorisations of characteristics,
intending to focus on the efforts and activities of higher education and
research institutions, according to the criterion of similarity. After all, a
society needs nursing schools as much as medical university faculties for
an operation room team to function successfully. The worldwide model
of classifications, the Carnegie classification of higher education institu-
tions in the United States (www.carnegiefoundation.org/classifications),
was introduced in 1973 as a tool for researchers; over the years, it
turned into a major, authoritative concept for all of the United States
and beyond (McCormick and Zhao, 2005). The success of the Carnegie
classification is due to the fact that the Carnegie Foundation has a
generally accepted authority as an ‘objective’, that is disinterested,
think tank on higher education. This success means that the Carnegie
classification has become understood by the general public even more
as a ranking of vertical diversity, thereby driving American higher edu-
cation institutions to become ‘doctoral granting’ universities if they
wanted to maintain public (and political) prominence. To counter
this perverse effect of its success, the 2005 version of the Carnegie
classification was radically changed to reflect (again) that it wanted
to display horizontal diversity, that is, different missions and profiles.
The new classification has also multiplied: six classifications are found
on the foundation’s website, the five new ones being: Undergraduate
and Graduate Instructional Programme classifications, Enrolment
Profile and Undergraduate Profile classifications, and Size and Setting
classifications. These are organised around three fundamental questions:
what is taught (Undergraduate and Graduate Instructional Programme
classifications), who the students are (Enrolment and Undergradu-
ate profiles), and what the setting is (Size and Setting). The original
Carnegie classification framework – now called the Basic classification –
has also been substantially revised.
The creation of transparency tools that make diversity (vertical and
horizontal) and different forms of excellence transparent rather
than obscured may be a first step towards creating a more diversified
incentive structure and thus contributing to maintenance of the diver-
sity in higher education worldwide.

Process and performance


Since the introduction of systems theory in social sciences in the 1950s,
it has become commonplace to think of processes as an arrow, link-
ing inputs to a transformation (throughput or process stricto sensu),
resulting in outputs which will have further effects called outcomes.
Feedback loops should assure that learning takes place and that the
process becomes increasingly effective and efficient. The only distinction
needed for our purpose is a rough grouping of organising inputs (for
example, staffing, building labs and lecture halls) and the actual processes
(teaching, research) under the heading of activities, and outputs and
outcomes under the heading of performances.
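Purely as an illustration of this grouping, the short Python sketch below collects example items under the two headings used in the text; the item labels are my own assumptions, invented for illustration rather than taken from U-Map or U-Multirank.

```python
# Minimal sketch of the input-process-output-outcome chain described above.
# All example items are invented for illustration only.
systems_model = {
    "inputs": ["staffing", "building labs and lecture halls"],
    "process": ["teaching", "research"],
    "outputs": ["graduates", "publications"],
    "outcomes": ["graduates' later careers", "wider societal effects"],
}

# The grouping used in the text: inputs and processes count as 'activities',
# outputs and outcomes count as 'performances'.
activities = systems_model["inputs"] + systems_model["process"]
performances = systems_model["outputs"] + systems_model["outcomes"]
print("Activities:", activities)
print("Performances:", performances)
```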
As a further refinement, let us not forget that the broad process of
education covers a variety of processes: teaching in different disciplines
may be quite different processes (from chemistry to philosophy), or cross
different finalities (professionally oriented vs academically oriented), or
cross different levels (from first-cycle short programmes to postgraduate
seminars), or be for different target groups (masses of young students
vs small groups of post-experience professionals). Similar distinctions
could be made for the research process across disciplines or on the con-
tinuum between blue-skies research and applied fact-finding.
Finally, higher education institutions at present are not only engaged
to different extents in the two primary processes of education and
research, but are also pursuing their ‘third mission’, a term which may
cover different ideas, from contract teaching to developing licensable
products, to research with a regional impact, and to outreach to the
local community.

Not another critique of current rankings


These conceptual considerations show that the activities and perfor-
mances of higher education institutions are multidimensional, and that
current rankings, which largely favour a single dimension (in
fact, classical research), do not value the ensuing institutional horizontal
and vertical diversity.
Rankings have been criticised at length by many authors, including in
the current volume. It may suffice here to point to a publication where
my colleagues and I have analysed and summarised an important part
of that literature (Van Vught and Ziegele, 2012).
Two points of critique that are needed for the following presentation
of U-Map and U-Multirank must be mentioned, however. One concerns
the composition of an overall index, as most current rankings do. They
add together, possibly with different weights, the scores on several indi-
cators. This results in a single score per higher education institution.
Our criticism is that there is no theoretical or conceptual underpinning
of adding up, for instance, research publications and student-to-staff
ratios. The other point concerns intra-institutional diversity. Giving
total scores for a whole higher education institution hides the fact that
education or research across different faculties or schools within the
institution may have very different characteristics and qualities. If an
institution is very good at biology, does that automatically make it good
at languages and literature? We contend that for important stakehold-
ers, such intra-institutional diversity is important – think of prospective
students seeking a study programme fitting their needs in a certain area
of knowledge. In developing U-Map and U-Multirank we endeavoured
to avoid these two methodological flaws.
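To make the first point concrete, here is a minimal Python sketch, with invented institutions, indicator values and weights, of how a weighted composite index collapses incommensurable indicators into a single score, the practice criticised above, whereas keeping the indicators separate preserves the multidimensional picture.

```python
# Hypothetical illustration of the composite-index problem; the institutions,
# indicator values and weights below are invented, not taken from any ranking.
institutions = {
    "University A": {"publications_per_staff": 3.2, "student_staff_ratio": 25.0},
    "University B": {"publications_per_staff": 1.1, "student_staff_ratio": 12.0},
}
weights = {"publications_per_staff": 0.7, "student_staff_ratio": -0.3}

def composite_score(indicators, weights):
    """Weighted sum of heterogeneous indicators: one number per institution."""
    return sum(weights[name] * value for name, value in indicators.items())

# The single score hides that A and B excel on different dimensions;
# printing the indicators alongside it keeps that difference visible.
for name, indicators in institutions.items():
    print(name, round(composite_score(indicators, weights), 2), indicators)
```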

U-Map
The European U-Map classification has been developed since 2005.
U-Map is a user-driven, multidimensional European classification
instrument that allows all higher education (and research) institutions
to be characterised across six dimensions. By doing so, U-Map allows
for the creation and analysis of specific activity ‘institutional profiles’,
offering ‘pictures’ of the activities of an institution on the various
indicators of all six dimensions. U-Map can be accessed through two
interconnected online tools (a Profile Finder and a Profile Viewer) that
allow stakeholders to analyse the institutional profiles, for example for
benchmarking, comparative analysis or institutional strategic profiling.
U-Map has been developed in close cooperation with the designers of
the most recent Carnegie classification.
U-Map’s development was guided by the following design principles:

• Based on empirical data, rather than taking national legal distinctions for granted;
• Informed by a multi-stakeholder and multidimensional perspective;
• User-driven, in a process including frequent interaction with
multiple stakeholders with a view to developing valid, reliable and
relevant indicators;
• Non-hierarchical, to emphasise horizontal diversity;
• Applicable to all European higher education institutions;
• Based on reliable and verifiable data;
• Parsimonious regarding additional data collection, which neverthe-
less proved necessary in addition to using existing statistics wherever
available and comparable.

U-Map dimensions and indicators


A major consequence of these principles is that U-Map focuses on insti-
tutions’ activities, that is, what they are actually doing, rather than on
official mission statements on the one hand and performances on the
other hand. Yet it must be conceded that in the first rounds, we have
not found enough ‘performance-free’ indicators to keep the distinction
between activities and performances 100% clear.
As we explained in the report on U-Map (Van Vught et al., 2011), the
‘sunburst charts’ (Figure 4.1) give a snapshot of the extent to which the
institutions are engaged in six key dimensions of university activity:
the primary processes of education, research and knowledge exchange,
with indications of focus besides the national level (that is, regional
and/or international engagement), and for which types of students.
Institutional involvement in these dimensions of activity is measured
using a set of 29 indicators, each depicted separately to avoid adding up
incommensurable units.
When pictured side by side, the different aspects of the two institu-
tions’ activity profiles can be compared. U-Map’s online database allows
users to select the institutions to be compared and the activities to be
explored in more depth. The diversity of each institution’s activity is
pictured in its sunburst chart, with its six colours representing the six
dimensions of U-Map (see Table 4.1). Each ‘ray’ represents an indicator;
the length of the ray indicates the extent to which the institution is
Figure 4.1 U-Map ‘sunburst charts’ comparing two higher education institutions (dimensions shown: regional engagement, teaching and learning, research involvement, knowledge exchange, international orientation, student profile)

Table 4.1 U-Map’s dimensions and indicators

Teaching and learning profile
• Expenditure on teaching
• Range of subjects
• Orientation of degrees
• Degree level focus

Student profile
• Size of student body
• Distance learning students
• Mature students
• Part-time students

Research involvement
• Peer reviewed other research products
• Doctorate production
• Peer reviewed academic publications
• Professional publications
• Expenditure on research

Involvement in knowledge exchange
• Start-up firms
• Patent applications filed
• Cultural activities
• Income from knowledge exchange activities

Regional engagement
• Graduates working in the region
• Importance of local/regional income sources
• First year bachelor students from the region

International orientation
• Students sent out in international exchange programmes
• Foreign degree seeking students
• Incoming students in international exchange programmes
• International academic staff
• Importance of international sources of income
engaged in this activity. For a definition of the indicators, readers are
referred to the report mentioned above. This is also available online at
www.u-map.org.
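As a rough sketch of the kind of side-by-side comparison the sunburst charts support, the snippet below stores an activity profile per institution under a few of U-Map’s dimensions; the indicator selection and the 0–3 engagement scores are invented for illustration, and no indicators are aggregated.

```python
# Illustrative only: dimension and indicator labels loosely follow Table 4.1,
# and the 0-3 engagement scores are invented for the example.
profiles = {
    "Institution 1": {
        "Teaching and learning": {"Expenditure on teaching": 2, "Range of subjects": 3},
        "Research involvement": {"Doctorate production": 1, "Expenditure on research": 1},
    },
    "Institution 2": {
        "Teaching and learning": {"Expenditure on teaching": 1, "Range of subjects": 1},
        "Research involvement": {"Doctorate production": 3, "Expenditure on research": 3},
    },
}

def compare(profiles, left, right):
    """Print each indicator for two institutions side by side, per dimension."""
    for dimension in profiles[left]:
        print(dimension)
        for indicator, value in profiles[left][dimension].items():
            print(f"  {indicator}: {value} vs {profiles[right][dimension][indicator]}")

compare(profiles, "Institution 1", "Institution 2")
```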
U-Map has great value in its own right in helping analyse actual
commonalities and differences among higher education institutions.
The tool has reached a level of satisfactory development and stability,
enabling roll out to a larger group of institutions across more countries
in Europe and to a small extent the rest of the world. It is expected that
the U-Map database will become publicly accessible around the time of
publication of the current book.
In addition, U-Map helps find comparable higher education insti-
tutions, which may be used in a ranking exercise to select groups of
comparable higher education institutions, among which a ranking
may be meaningful, in contrast to the hypothetical ranking that tries
to compare a community college with Harvard. From that perspective,
U-Map is a necessary first phase before a ranking like U-Multirank can
be performed: first we must know which higher education institutions
are ‘apples’ rather than ‘oranges’, so that we can compare apples with
apples.

U-Multirank
In a leap beyond U-Map, development has begun on U-Multirank, a
multidimensional ranking of higher education institutions, meant to
be able to service institutions from around the world. A first project, a
proof of principle, ran between 2009 and 2011. The field study included
responses from 115 higher education institutions from around the
world, 29% of which were also represented in the top 500 of the ARWU
ranking (Academic Ranking of World Universities, commonly known as
the ‘Shanghai ranking’) (Van Vught and Ziegele, 2012, p. 137). The sec-
ond two-year phase, which upscaled U-Multirank to around 500 higher
education institutions, began at the end of 2012. If the second phase
proves successful, U-Multirank must stand on its own feet; the European
Commission, which supports the first two phases, does not intend to
get involved in continuous rankings of higher education worldwide.
The main type of question U-Multirank is designed to investigate is
how well higher education institutions are performing their different
tasks. From the activities portrayed in U-Map, we are moving here
to performances, that is, output and impact indicators. Again, in the
current state of development, we have also had to include some pro-
cess indicators. Then again, one person’s process is another person’s
output: for prospective students, for instance, the process of teaching
is the major service they want from higher education institutions.
Accordingly, from their perspective, teaching is an output and the
quality of teaching is perhaps the most-needed information for them.
However, indicators of performance in terms of student satisfaction –
together with more objective indicators of scale and expenditure on
teaching – are only proxy variables for quality, but they may be the
best that can be done (Van den Broek, de Jong, Hampsink and Sand,
2006; Elton, 2004; Gaberscik, 2010; Vlasceanu and Barrows, 2004;
Westerheijden, 2005).
Proceeding on design principles very similar to those of U-Map,
U-Multirank also results – for the moment – in similar visualisations of
higher education institutions. Performances across five dimensions of
activity (symbolised by different colours) are depicted along separate
indicators.
The visualisation of U-Multirank is under development at the
moment of writing this chapter. In any case it has, as mentioned,
five dimensions: performances are depicted with regard to education,
research, transfer of knowledge, and regional and international aspects.
Missing in comparison with U-Map is the dimension of the student
body; that characteristic could not be turned into indicators of per-
formance. Although the dimensions are largely the same as in U-Map,
the indicators are different, as they are focused on performance rather
than activity. For instance, in the education dimension, employment
information is indicated in U-Multirank instead of the mix of study
programmes offered that is shown in U-Map. In the research dimension,
not only publications and citations but also art exhibitions are among
the indicators, rather than expenditure for research (as a proportion of
the total institutional budget), which is a U-Map indicator.

Two levels of information


The information in U-Multirank allows for comparison of the perfor-
mance of, for instance, the research dimension of higher education
institutions that according to U-Map are similar in the sense of devoting
resources to research – again, for instance – in the top 25% of all higher
education institutions. This is an example of the focused institutional
rankings that U-Multirank aims to produce. It is not intended, and
indeed is not easy, to read in it whether University X is ‘better’ than University Y – U-Multirank does not aim to lead to such overall, rarely realistic, statements.1
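A minimal sketch of such a focused institutional ranking, under assumptions of my own (a fixed expenditure cut-off standing in for the U-Map-style selection of research-intensive institutions, and invented figures), might look as follows.

```python
# Hypothetical data: research expenditure share (a U-Map-style activity measure)
# and a citation rate (a U-Multirank-style performance indicator) per institution.
data = {
    "Institution A": {"research_expenditure_share": 0.42, "citation_rate": 1.6},
    "Institution B": {"research_expenditure_share": 0.40, "citation_rate": 1.1},
    "Institution C": {"research_expenditure_share": 0.08, "citation_rate": 0.4},
    "Institution D": {"research_expenditure_share": 0.39, "citation_rate": 1.3},
}

# Step 1: keep only institutions that are comparable in activity terms
# (the 0.25 cut-off is an invented stand-in for the U-Map selection).
comparable = {name: d for name, d in data.items()
              if d["research_expenditure_share"] >= 0.25}

# Step 2: rank only within that comparable group, on one performance indicator.
for name, d in sorted(comparable.items(),
                      key=lambda item: item[1]["citation_rate"], reverse=True):
    print(name, d["citation_rate"])
```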
The other type of ranking that U-Multirank aims to make is field-based
rankings. Adding this more detailed level is the way in which U-Multirank
responds to the criticism made above about current rankings glossing
over intra-institutional diversity. In the first phase project, study pro-
grammes in two areas were studied in detail: engineering and business
studies. In groups of comparable higher education institutions (chosen
with the aid of U-Map technologies), additional information was collected
about either engineering (50 electrical engineering, 58 mechanical
engineering programmes) or business studies (57 programmes). In sub-
sequent phases, more fields will be added.
The main target group for field-based rankings, next to the managers
and academics of the schools concerned, is prospective students who want to make an informed decision about the place for them to study their
field of choice. Accordingly, partly different indicators were used than
at the institutional level, and additional information was sought that
would be of interest to prospective students. In particular, student sur-
veys (n = 5,901) focusing on their satisfaction with education were added.
The researchers were pleased to see that there did not seem to be major
biases in students’ responses to the questionnaires across the countries
involved, which ranged from EU countries to (for ca. 30% of the institu-
tions) countries outside Europe (Van Vught and Ziegele, 2012, p. 138).

Conclusion and state of affairs of U-Map and U-Multirank in 2012

By 2012, U-Map had become operational and the roll out phase had
started. There was sufficient stability in the methodology and indica-
tors to focus on adding higher education institutions from different
(European) countries to the database. In 2013 the Profile Finder and
Profile Viewer tools became publicly operational, with over 300 higher
education institutions in the database.
Regarding U-Multirank, as mentioned, its first project was concluded
in 2011. That project was a ‘proof of concept’. Given the character of
this first project, ranking results have not been published. Moreover,
ranking a good 100 higher education institutions, or around 50 study
programmes in three fields of study would not make much sense.
The publication resulting from the project focused on the feasibility
of the indicators, data collection methods and so on (Van Vught and
Ziegele, 2012).
At the moment of writing, the second project, the first large-scale
implementation of U-Multirank, has started. By 2014 it is scheduled to
lead to a ranking that includes around 500 higher education institu-
tions from around the world.
Whatever the outcome of that project, and whatever the viability
of U-Multirank as an independent ranking tool afterwards, the project
has given new impetus to the discussion on current global rankings of
higher education institutions. Showing that a more multidimensional
approach is conceptually valid and in principle even feasible seems to
have influenced the current leaders of the ranking field, the ARWU and
the Times Higher Education (THE) rankings. In recent years, the THE has
reviewed its methodology radically and has tried to broaden its range
of indicators on educational quality. Besides this, both THE and ARWU
have expanded their tools for users to become somewhat more indi-
vidualised: it is now possible on their websites to re-rank institutions
according to separate dimensions (for example, research, education,
reputation) or to view rankings of different fields of knowledge within
higher education institutions.
Seeing that development, it seems warranted to say that the contribution
of U-Map and U-Multirank to a more encompassing conceptualisation
of quality of higher education is already visible.

Note
1. University X is better than University Y if and only if it has a higher score
on at least one indicator and not a single worse score (that is, there is weak
dominance of X over Y, in mathematical terms).
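The weak-dominance criterion stated in note 1 can be written out in a few lines; the indicator vectors below are invented for illustration, and higher scores are assumed to be better.

```python
def weakly_dominates(x, y):
    """True if x scores at least as well as y on every indicator and strictly
    better on at least one (the sense of 'better' used in note 1)."""
    at_least_as_good = all(xi >= yi for xi, yi in zip(x, y))
    strictly_better_somewhere = any(xi > yi for xi, yi in zip(x, y))
    return at_least_as_good and strictly_better_somewhere

# Invented indicator vectors for two institutions, in the same indicator order.
university_x = [3, 2, 5]
university_y = [3, 1, 4]
print(weakly_dominates(university_x, university_y))  # True
print(weakly_dominates(university_y, university_x))  # False
```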

References
Bod, R. (2010) De vergeten wetenschappen: Een geschiedenis van de humaniora. [The
forgotten sciences: A history of the humanities] (Amsterdam: Bert Bakker).
Dulleck, U. and Kerschbamer, R. (2006) ‘On Doctors, Mechanics, and Computer
Specialists: The Economics of Credence Goods’, Journal of Economic Literature,
44(1), 5–42.
Elton, L. (2004) ‘Goodhart’s Law and Performance Indicators in Higher
Education’, Evaluation and Research in Education, 18(1–2), 120–28.
Espeland, W.N. and Sauder, M. (2007) ‘Rankings and Reactivity: How Public
Measures Recreate Social Worlds’, American Journal of Sociology, 113(1), 1–40.
Gaberscik, G. (2010) ‘Überlegungen zum Thema Qualität in Lehre und Studium
sowie Forschung und Technologie’, Qualität in der Wissenschaft, 4(2), 37–47.
Hazelkorn, E. (2011) Rankings and the Reshaping of Higher Education: The Battle for
World-Class Excellence (London: Palgrave Macmillan).
Marginson, S. (2006) Global University Rankings: Private and Public Goods. Paper
presented at the 19th Annual CHER conference, 7–9 September, Kassel.
McCormick, A. and Zhao, C.-M. (2005) ‘Rethinking and Reframing the Carnegie
Classification’, Change (September/October), 51–57.
Neave, G. (1995) ‘Homogenization, integration and convergence: The Cheshire
cats of higher education analysis’, in V. Lynn Meek, L. Goedegebuure,
O. Kivinen and R. Rinne (eds), The Mockers and Mocked: Comparative Perspectives
on Differentiation, Convergence and Diversity in Higher Education (Oxford:
Pergamon), pp. 26–41.
Van den Broek, A., de Jong, R., Hampsink, S. and Sand, A. (2006) Topkwaliteit
in het hoger onderwijs: Een verkennend onderzoek naar kenmerken van topkwaliteit
in het hoger onderwijs (The Hague: Ministerie van Onderwijs, Cultuur en
Wetenschap).
Van der Wende, M. and Westerheijden, D.F. (2009) ‘Rankings and Classifications:
The Need for a Multidimensional Approach’, in F. van Vught (ed.), Mapping the
Higher Education Landscape: Towards a European Classification of Higher Education
(Dordrecht: Springer), pp. 71–86.
Van Vught, F.A. (2008) ‘Mission diversity and reputation in higher education’, Higher Education Policy, 21(2), 151–74.
Van Vught, F.A. and Ziegele, F. (eds) (2012) Multidimensional Ranking: The Design
and Development of U-Multirank (Dordrecht: Springer).
Van Vught, F., File, J., Kaiser, F., Jongbloed, B. and Faber, M. (2011) U-Map:
A University Profiling Tool – 2011 Update Report (Enschede: CHEPS, University
of Twente).
Vlasceanu, L. and Barrows, L. (eds) (2004) Indicators for Institutional and Programme
Accreditation in Higher/Tertiary Education (Bucharest: UNESCO-CEPES).
Westerheijden, D.F. (2005) ‘Pieken op de kaart: Excellente opleidingen zichtbaar
maken. Een haalbaarheidsonderzoek’, IHEM Thematische Rapporten (Enschede:
CHEPS).
5 Assessment of Higher Education Learning Outcomes (AHELO): An OECD Feasibility Study
Diana Dias and Alberto Amaral

Introduction

Higher education is a social and historical phenomenon that contributes to triggering global educational processes and personal development.
Social demands and contemporary socio-political scenarios lead to the
need to build global professional profiles. On the one hand, techni-
cal, scientific, conceptual and methodological skills are valued. On the
other hand, the experiences and observations from practice, shared and
re-signified by relational processes, are not ignored. As a result, research
and public policy increasingly value the continuous development of
skills, such as autonomy, creativity and critical thinking.
The development of the professional profile that is expected of higher
education graduates, regardless of their scientific field, is a long and
complex process that results from the dynamic interweaving of the life
history of the subject, his or her educational trajectory, experiences and
labour relations, and the social recognition of the profession related
to the peculiarities of the cultural and historical moment. It is, thus,
a multi-determined process, anchored in the development of several
skills, in which a set of elements will be integrated, added and reconfig-
ured throughout the life of the subjects.
An increasing number of authors criticise quality assurance processes
for not addressing the core business of higher education institutions,
namely teaching and research. Harvey and Newton (2006, p. 236) argue that transforming ‘quality assurance in the direction of the improvement of the student experience requires . . . creating conditions for bringing about sustained change and improvement in institutions’.
For them, the preponderant forms of external quality assurance pro-
cesses ‘hijack and mystify quality as a politically motivated, ideological, compliance structure . . . “quality” no longer has anything to do with
academic endeavour: knowledge creation and student learning. Even
improvement-led approaches remain imbued with an ideology that
distrusts the academy’ (ibid., p. 237). Harvey and Newton also consider
that in general, at present, the quality assurance process ‘is a bureau-
cratic process quite removed from either the student learning or the
creative research processes, which, it is argued, lies at the heart of quality
in higher education’ (ibid., p. 226).
Douglas Bennett (2001, p. 1) argues that the only valid approach for
assessing the quality of education offered by a higher education insti-
tution is based on the value added, meaning ‘what is improved about
students’ capabilities or knowledge as a consequence of their education
at a particular college or university’, or more simply, the difference a
higher education institution makes in their education. However, as he
(2001) also recognises, assessing value added is difficult for a number of
reasons, such as its many dimensions, differences between institutions,
time needed for the consequences of education to fully unfold, and
complexity and cost. Therefore, he considers that a second-best and
more feasible strategy will consist of assessing outcomes, by evaluat-
ing the skills and capabilities students have acquired as they graduate
(or shortly after), or the recognition they gain in further competition.
The OECD has launched an ambitious project for measuring learn-
ing outcomes. As stated in the presentation leaflet of this project,
Assessment of Higher Education Learning Outcomes (AHELO), OECD
(2009a) claims it is necessary to develop ‘an assessment that compares
learning outcomes in an universally sound manner, regardless of culture,
language, differing educational systems and university missions’ while
considering that ‘current university rankings may do more harm than
good because they largely ignore a key measure of quality, namely what
goes on in the seminar rooms and lecture theatres’.
The initial phase of the AHELO project was the development of a
feasibility study that is now completed, and the results of which were
made public quite recently. In this chapter we discuss the developments
of initiatives using learning outcomes, as well as the OECD AHELO
project and the results of the feasibility study.

The European context: Bologna and learning outcomes

The signing of the Bologna Declaration has influenced the development of quality assurance processes in European countries. Adams describes
learning outcomes as ‘a fundamental building block of the Bologna
educational reforms’ (Adams, 2006, p. 3). In particular, the implementation
of the Framework for Qualifications of the European Higher Education Area
(FQ-EHEA) has stimulated intense discussion about the use of generic
descriptors for each of the three cycles of study that are based on learn-
ing outcomes. In fact, the descriptors of FQ-EHEA (based on the Dublin
Descriptors, which afford general statements of the typical student
achievements on conclusion of each Bologna cycle) set out generic
learning outcomes for the three cycles, functioning as references and
promoting cross-national transparency, recognition and comparability.
Adopting the definition proposed by Adams (2008), learning out-
comes are statements of what a student should know, understand and/or
be able to demonstrate at the end of a period of learning. Kennedy, Hyland
and Ryan (2006) refer to numerous definitions of learning outcomes, all
of them quite similar. The ECTS Users’ Guide (European Commission,
2009) defines learning outcomes as ‘verifiable statements of what
learners who have obtained a particular qualification, or completed
a programme or its components, are expected to know, understand and
be able to do’.
Learning outcomes are assumed, in short, to be an important tool
for describing and defining not only the learning products, but also
their evaluation methodology. Thus, the emphasis is placed on the
consistency of educational goals, in terms of knowledge and skills that
students are expected to attain, which depends significantly on their
study area and on the specific requirements for each cycle of studies.
This approach to teaching-learning processes allows students to know
in advance what they are expected to be familiar with and understand
in a given study programme, and how this will be evaluated. In fact,
learning outcomes explicitly define not only what it is expected that
students will be able to do, but also the criteria that will be used to
evaluate them. Extensive literature describes the importance of self-
regulatory learning skills of students in higher education, suggesting
their positive impact on the quality of learning (Almeida, 2002; Pintrich,
2004; Ribeiro and Smith, 2007; Schunk, 1994; Zimmerman, 2000).
There is no explicit mention of the concept of learning outcomes,
either in the Bologna Declaration (1999), or in the Prague Communiqué
(Prague Communiqué, 2001). However, the implementation of European
directives led to their emergence as a key tool for achieving the goals
of such documents. Learning outcomes are widely referred to in vari-
ous documents related to Bologna, including the Berlin Communiqué
(Berlin Communiqué, 2003), in which member states were encouraged
to develop a framework of comparable and compatible qualifications
for their higher education systems, which should seek to describe
qualifications in terms of workload, level, learning outcomes, compe-
tences, profile and professional output. The ultimate goal would be to
create a comprehensive framework of qualifications for the European
Higher Education Area.
The other important development consists of the definition of the
national qualification frameworks. These, as qualification descriptors
associated with a given education system, function as compelling
instruments in achieving the desired comparability and transparency.
Qualification frameworks are therefore intended to be reliable tools for
describing, clearly and operationally, not only what students are expected
to know, understand and do on the basis of a particular qualification
(learning outcomes), but also how they can move from one qualification
to another within a given education system and across different
countries (mobility). Qualifications frameworks
should therefore especially emphasise learning outcomes and the
processes that lead to them. Learning outcomes are referred to in the
London Communiqué:

We commit ourselves to fully implementing such national quali-
fications frameworks, certified against the overarching Framework
for Qualifications of the EHEA, by 2010. We see the overarching
Framework for Qualifications of the EHEA, which we agreed in
Bergen, as a central element of the promotion of European higher
education in a global context. (London Communiqué, 2007)

This theme was reiterated in the more recent Leuven Communiqué:

We aim at having them [national qualification frameworks] imple-
mented and prepared for self-certification against the overarching
Qualifications Framework for the European Higher Education Area
by 2012. This will require continued coordination at the level of the
EHEA and with the European Qualifications Framework for Lifelong
Learning. (Leuven Communiqué, 2009)

Learning outcomes are also present in the European Standards and
Guidelines (ESG) drafted by the European Association for Quality
Assurance in Higher Education (ENQA) in consultation and co-
operation with the EUA, ESIB and EURASHE, and in discussion with
various relevant networks. The Guidelines are the response to the twin
mandates given by the European Education ministers to ENQA in the
Berlin Communiqué of September 2003, to develop ‘an agreed set of
standards, procedures and guidelines on quality assurance’ and ‘to
explore ways of ensuring an adequate peer review system for quality
assurance and/or accreditation agencies or bodies’. The Standards and
Guidelines for Quality Assurance in the European Higher Education
Area, as proposed by ENQA, were adopted by the European ministers in
the 2005 Bergen Ministerial Bologna Conference Communiqué, and in
2007 the European Ministers, in the London Bologna Conference
Communiqué, endorsed the proposal of the E4 group (ENQA, EUA,
EURASHE and ESIB) to create a European register of accredited quality
agencies that will be ‘voluntary, self-financing, independent and
transparent. Applications for inclusion on the register should be evalu-
ated on the basis of substantial compliance with the ESG, evidenced
through an independent review process endorsed by national authori-
ties, where this endorsement is required by those authorities’ (London
Communiqué, 2007).
The ESG clearly state that, as part of internal quality assurance within
higher education institutions, programmes and awards are
expected to include ‘development and publication of explicit intended
learning outcomes’ and student assessment procedures are expected
‘to be designed to measure the achievement of the intended learning
outcomes and other programme objectives’ (ENQA, 2005, p. 17).
Understanding the process of introduction and effective implementa-
tion of learning outcomes in the 46 signatory countries to the Bologna
Process is difficult, because information is both scarce and in most cases
unreliable. In fact, setting workable learning outcomes appears to be
rather problematic in practice (Sin, 2013). However, it is clear that there
is a Europe-wide movement that is aimed at the effective implementa-
tion of learning outcomes, despite recognising it is a slow process, at both
national and institutional levels (Bologna Seminar, 2008). Nevertheless,
Adams (2008) did not view this slowness as a negative situation.
Since learning outcomes are part of a comprehensive reform package
involving huge structural changes (from macro to micro levels), and
covering not only qualification frameworks, but also the institutional
quality assurance systems, as well as an extensive curriculum reform, he
considers that such changes require careful and slow implementation.
However, Adams criticises the ‘poor level of understanding associated
with them and their relatively rare practical implementation, at least
in any explicit manner, across Europe, despite their acknowledged
importance’ (Adams, 2006, p. 3). He highlights that few countries have
detailed experience of learning outcomes at both the institutional
and national levels, therefore presenting a significant challenge to the
Bologna Process implementation.
Although the definition of learning outcomes proposed under the
Bologna Declaration relies on the general agreement of the signatory
countries, how to implement them does not seem to command the
same consensus. There is little detailed information on the level of
implementation at both the national and institutional levels. In this
scenario, the Scottish and Irish systems seem to stand out positively,
as they use learning outcomes as a basis for the construction of their
own qualifications frameworks and the conceptualisation of diffe-
rent descriptors (either of degree, generic qualification, theme and/or
courses).
In the final report of the Bologna Seminar (2008) it is stated that,
apart from Scotland and Ireland, England, Wales and Northern Ireland
are seen as pioneers in the use of learning outcomes in higher edu-
cation. Countries such as Belgium, Croatia, Estonia, Hungary, Italy,
Moldova, Portugal, Romania, Spain and Switzerland have also made
rapid progress toward a more comprehensive implementation of learn-
ing outcomes. In fact, Belgium (Flemish Community), Hungary, Ireland,
Italy, Slovakia, Spain and the UK have developed (or are in advanced
stages of implementing) integrated systems in which learning outcomes
are present at all levels of their education systems. In contrast, Estonia,
Greece, Lithuania and Latvia still present little development in terms of
the implementation of learning outcomes.

The influence of OECD

There is no doubt that the Organisation for Economic Cooperation
and Development (OECD) and a number of other international agen-
cies, such as the United Nations’ Educational, Scientific and Cultural
Organization (UNESCO), the World Bank (WB) and the International
Monetary Fund (IMF) exercise a pervasive influence over nation states
and over education policies within the framework of neo-liberalism and
its more diffuse expression, globalisation (Amaral and Neave, 2009).
However,

The OECD has not been able to develop a strong instrumentality of
governance. It lacks the financial clout of the IMF or the World Bank.
Nor, unlike the European Union, does it have a legislative capacity.
In short it has no legal instrument to force decisions on its member
countries to implement policies. (Martens et al., 2004, p. 159)

Martens and colleagues (2004) suggest that the OECD has acquired a
strong capacity for coordination by means of organising procedures
and handling the treatment of their outcome, which in turn shapes the
initiatives and options that may be entertained in a particular field
of policy (Amaral and Neave, ibid.). Henry et al. (2001) assert that the
OECD, through its work on educational indicators, has gained ‘a climate
of support among policy-makers and analysts across member countries
and even beyond’ (Henry et al., 2001, p. 88). This ability of the OECD
to shape expert opinion, without having developed a strong governance
instrumentality – it lacks both financial clout and legislative capacity
(Amaral and Neave, ibid.) – is strongly supported by cross national and
comparative reports and educational statistics and indicators, such as
Education at a Glance and the Programme for International Student Assessment
(PISA). Indeed, it is well known that much of the power exhibited by
the OECD in acting as a powerful agent in the convergence of higher
education national policies has to do with its technical capacity, namely
its capacity to provide reliable education statistics (Amaral and Neave,
2009) using very sophisticated quantitative tools, such as:

regular, up-to-date and exceedingly high quality data and informa-
tion systems, functioning cross nationally, and what we have termed
an ‘indirect strategy’ of development, based on peer review, high-
level networking and on the recourse to what is sometimes alluded
to as ‘soft’ law . . . (Amaral and Neave, 2009, p. 95)

This power has been clearly reinforced by the success of successive PISA
exercises at the level of primary and secondary education. The Program
for International Student Assessment (PISA) is a standardised OECD
test given to 15-year-olds in OECD countries in order to judge the
effectiveness of the school system by assessing the learning outcomes
of students.
More recently, the OECD has decided to extend its influence over
higher education by creating a new PISA – the AHELO project – for
this very specific sector of education. The 2006 OECD Ministerial
Conference in Athens, concerning quality, equity and efficiency, offered
a golden opportunity that the OECD eagerly seized to strengthen its
influence. In Athens, the Ministers discussed at length how to move
from quantity to quality of higher education, and the OECD Secretary-
General offered the assistance of the organisation in developing new
measures of learning outcomes in higher education, drawing upon its
experience with the PISA survey. In the summary of the Conference
presented by the Greek Minister, it was acknowledged that the Ministers
had accepted the OECD offer. Therefore, AHELO was born from this
idea of extending a PISA-like approach to higher education.
In a very interesting paper appropriately entitled Boomerangs and
Trojan Horses: the Unintended Consequences of Internationalizing Education
Policy through the EU and the OECD, Martens and Wolff (2009) explain
how nation-states enlist the support of international agencies to sup-
port national policies, aiming to resort strategically to an intergovern-
mental policy arena in order to ‘manipulate the existing distribution
of formal institutional competencies in their domestic policy systems’
(Martens and Wolff, 2009, p. 77). However, nation-states will frequently
lose control of the process, thus promoting an internationally trig-
gered institutional dynamic ‘which backfired on its protagonists and
led to the opposite of what was originally intended, namely, a general
weakening of the state’s role in education policy’ (Martens and Wolff,
2009, pp. 77–8). Two of the examples presented by Martens and Wolff
are precisely the Bologna process (a ‘boomerang’ initiated by the
Sorbonne declaration signed by France, Italy, Germany and the UK)
and the PISA process (a ‘Trojan horse’ supported by the pressure exerted
by the United States and France over the OECD to produce better and
more comparable data on education). It remains to be seen how far the
AHELO project may follow a similar trajectory.

The AHELO project

Following the Athens Ministerial Conference the OECD started to
develop what initially was seen as a PISA for higher education. The
OECD invested considerable resources in this new project as it opened
the way for an increased role for the organisation in the higher educa-
tion sector. In 2009 the OECD argued that the ‘Assessment of Higher
Education Learning Outcomes (AHELO) is a ground-breaking initiative
to assess learning outcomes on an international scale by creating mea-
sures that would be valid for all cultures and languages’ (OECD, 2009a).
More recently, the OECD/IMHE (2013) suggested that the proposal
to explore the development of an international AHELO emerged from
the need to develop better performance metrics in higher education.
Thus, four trends could be identified from the AHELO rationale: (i) a
move beyond collegial approaches to governance (the emergence of
new governance paradigms which combine greater autonomy with
increased transparency and accountability leads to increased demands
for institutions to engage in outcomes assessment); (ii) a growing focus
on student learning outcomes (the Bologna Declaration aimed to
establish a European Higher Education Area and to write all higher edu-
cation programmes in terms of learning outcomes by 2010); (iii) an emphasis
on student centred learning and research on teaching-learning
processes (the shift from an ‘instruction paradigm’ towards a ‘learning
paradigm’ in which the emphasis is no longer on the means but on
the ends, assumes the assessment of outcomes is a crucially important
factor for the evaluation of instructional effectiveness) and (iv) AHELO
within the broader movement towards competencies and learning
outcomes. In fact, while AHELO is the first international endeavour to
measure learning outcomes across borders, languages and cultures, it is
part of a broader context of diverse initiatives converging in their focus
on performance, competencies and learning outcomes.
Deborah Nusche (2008) has produced an interesting working paper for
OECD which aims to provide an international perspective on current
practices in standardised learning outcomes assessment in higher edu-
cation, using examples from a number of countries including Australia,
Brazil, Canada, Mexico, the United Kingdom and the United States. She
acted as consultant to the Indicators and Analysis Division of the OECD
Directorate for Education.
The assessed outcomes that were analysed include both cognitive
learning outcomes and non-cognitive learning outcomes. Cognitive learning
outcomes ‘range from domain-specific knowledge to the most general
of reasoning and problem-solving skills’ (Shavelson and Huang, 2003,
p. 13). The OECD report considers only a division of cognitive learning
outcomes into knowledge outcomes – involving the ‘remembering, either
by recognition or recall, of ideas, materials or phenomena’ (Bloom,
1956, p. 62), and including general content knowledge outcomes and
domain-specific or subject-specific learning outcomes – and skills outcomes,
again divided into generic and domain-specific.
A non-cognitive learning outcome refers to changes in beliefs or the
development of certain values (Ewell and Miller, 2005). Studies on non-
cognitive outcomes often focus on the presence of certain theorised
stages of identity development (Pascarella and Terenzini, 2005) and may
be developed both through classroom instruction and out-of-class activi-
ties organised by HEIs to supplement the curriculum. However, the defi-
nition of desirable non-cognitive learning outcomes is controversial and
subject to cultural contexts and not always shared by all stakeholders.
Some studies suggest that non-cognitive outcomes are rather related to
social maturation, generational effects (Pascarella and Terenzini, 2005) or
‘significant life events’ (Glenn in Pascarella and Terenzini, 2005, p. 272).

The report concentrates its analysis on four themes, aiming to provide
information on existing efforts to measure learning outcomes, thus
providing a basis for the development of AHELO:

(a) What is being assessed, with focus on the outcomes assessed by the
different instruments used?
(b) How are these outcomes being assessed, with focus on processes
for designing, administering and reporting on the test, as well as
on technical details of assessment instruments (format, number of
items and duration of assessment)?
(c) Who is each instrument going to assess, describing the characteris-
tics of target populations (selection, coverage of test application and
incentives to participate)?
(d) Why is the assessment being applied, detailing the nature of the
target (individual students, programmes, institutions or education
systems) and the type of possible results and its use by stakeholders?

To develop AHELO, OECD proposed that a large number of higher
education students in over ten different countries should take part
in a feasibility study to determine the bounds of this ambitious
project which, upon its completion, would inform the possible creation
of a full-scale AHELO. The design and implementation of the feasibility study, its
problems and the analysis of its main results will be presented in the
next section.
However, from its very start many received the AHELO project with
a considerable degree of suspicion and criticism. In November 2007,
Education International, the global federation of teacher unions, issued
a document with a strong position against AHELO based on a num-
ber of considerations, including acute methodological problems (the
extreme difficulty of designing a tool capable of producing any mean-
ingful and comparable measure across the different cultures, languages,
disciplines and institutions within OECD countries); the exceeding
difficulty of determining ‘what’ and ‘who’ to assess, as well as ‘what to
compare’; serious limitations with all standardised measures of learning;
a notable lack of consensus on what should be the appropriate practices
and outcomes of higher education; no particular use unless information
is also provided on the educational context; and the misuse and
misinterpretation of results. Education International also asserted that
the quality of higher education is neither a measurable product nor an
outcome subject to any simple assessment, and that external standardised
assessments raise important issues around professional autonomy for
academic staff. A further concern was that a PISA for higher education
could easily be transformed into a simplistic ranking or league table of
institutions.
Our past experience (see Martens and Wolff, 2009) shows that once
open, Pandora’s box is quite difficult to close, even when powerful
governments are involved. Despite technically well grounded negative
opinions, these international organisations will always push forward
the implementation of their ‘star’ projects. Therefore one could only hope
that some of the OECD’s soothing declarations would come true:

AHELO is not a university ranking like the Shanghai Jiao Tong, the
Times Higher Education or any number of others. The designers of
AHELO reject the idea that higher education can be reduced to a
handful of criteria, which leaves out more than it includes. Instead,
AHELO sets out to identify and measure as many factors as possible
influencing higher education, with the emphasis being always on
teaching and learning. (OECD, 2009a)

The feasibility study

The initial project


Before implementing a full AHELO exercise, IMHE (OECD), in dis-
cussion and with the support of both governments and institutions,
decided to demonstrate its practical validity by embarking:

on a feasibility study to explore the scope for developing an interna-
tional Assessment of Higher Education Learning Outcomes (AHELO).
The purpose is to gauge whether an international assessment of
higher education learning outcomes that would allow comparison
between HEIs across countries is scientifically and practically feasi-
ble. (Yelland, 2008, p. 7)

The OECD’s initiative to assess the feasibility of AHELO maintains a
clear focus on teaching and learning and tries to identify a wide range
of factors influencing higher education. The OECD had very high
expectations about the new project, suggesting it may very well result
in substantial changes to higher education as we know it today:

The AHELO feasibility study is likely to discover much that is
unrelated to learning outcomes. What these findings will reveal
no one can say. But the chance is they may fundamentally change
our thinking about higher education and its role in society.
(OECD, 2009a)

The initial design of the feasibility study of the AHELO programme con-
sisted of four ‘strands’: three assessments to measure learning outcomes
in terms of generic skills and discipline-related skills (in engineering
and economics) and a fourth value-added strand, research based. The
measurement of generic skills (for example, analytical reasoning, criti-
cal thinking, problem-solving, the practical application of theory, ease
in written communication, leadership ability, the ability to work in a
group and so on) was based on an adaptation of the Collegiate Learning
Assessment (CLA) developed in the United States. For the discipline-
based strands, the feasibility study was focused on disciplines with less
variable study outcomes across countries and cultures, such as medi-
cine, the sciences or economics, building on the approach used in the
Tuning Process for Engineering and Economics.
The value-added strand was not supposed to be measured, as this
would not be compatible with the timeframe of the feasibility study.
Therefore, it was decided that ‘the feasibility study will only explore
different methodologies, concepts and tools to identify promising
ways of measuring the value-added component of education’ (OECD,
2009a, p. 10). Actually, some stakeholders considered that quality in
higher education institutions was closely related to the ‘upgrade’ in
student learning, as a good indicator of school effectiveness. That is,
students’ learning improvement was crucial to understanding the
contribution of higher education institutions to student learning. Thus,
not only should students’ learning outcomes be measured at the end of
their studies, but also their growth in learning, so as to portray the net
contribution of the institutions to student learning – or the value added.
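One simple way to make this notion concrete – offered here purely as an
illustrative sketch, not as part of the AHELO design, and assuming that a
single exit outcome score and a set of entry characteristics are available
for each assessed student – is to treat value added as the gap between a
student’s measured exit outcome and the outcome that would be predicted
from entry characteristics alone:

\[
\widehat{VA}_i \;=\; y_i^{\text{exit}} - f\!\left(x_i^{\text{entry}}\right),
\qquad
\widehat{VA}_{\text{institution}} \;=\; \frac{1}{n}\sum_{i=1}^{n}\widehat{VA}_i ,
\]

where \(y_i^{\text{exit}}\) is the learning outcome measured for student \(i\) at
or near graduation, \(x_i^{\text{entry}}\) stands for the incoming abilities and
background characteristics mentioned above, \(f\) is a statistical model (for
instance a regression) fitted across the assessed students, and the
institutional figure is simply the average of the student-level gaps over its
\(n\) assessed students.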
The OECD was also aware of the importance of context, although it
also recognised the difficulty of context measurement. The feasibility
study aimed to define the limits of a contextual inquiry and divided
context into four topical areas: physical and organisational charac-
teristics, education-related behaviours and practices, psycho-social
and cultural attributes, and behavioural and cultural attributes. In the
proposed model, student learning outcomes ‘are a joint product of
input conditions and the environment within which learning takes
place’ (OECD, 2009b, p. 4):

Inputs can include student characteristics such as incoming
abilities and demographic characteristics . . . which have been shown
by research to be related to learning (Pascarella and Terenzini, 1991,
2005). Environment consists of the setting in which learning takes
place, the curricula and pedagogies that constitute the medium of
instruction, and student learning behaviours. All these three have
been similarly related to student learning outcomes through decades
of empirical study (Pascarella and Terenzini, 1991, 2005; Kuh, 2008).
(OECD, 2009b, p. 4)

Implementation
The feasibility study aimed to test the scientific and practical feasibility
of designing instruments for the assessment of learning outcomes across
diverse national, cultural, linguistic and institutional contexts. The pur-
pose of the feasibility study was to see whether it was practically and
scientifically feasible to assess what students in higher education know
and can do upon graduation within and across these diverse contexts
and if tests can indeed be developed and administered to students. The
feasibility study should demonstrate what is feasible and what could
be feasible, what has worked well and what has not, and should provide
lessons and stimulate reflection on how learning outcomes might be
most effectively measured in the future. In fact, the feasibility study
was designed to explore how learning outcomes could be measured
internationally, providing actual data on the quality of learning and its
relevance to the labour market, with results that should be comparable
internationally regardless of language or cultural background.
The implementation of the test was a large-scale exercise with the par-
ticipation of a total of 248 higher education institutions in 17 countries
from different regions of the globe, and the instruments were adminis-
tered to almost 4,900 faculty and 23,000 students chosen from among
those near the end of a Bachelor’s degree. The OECD worked with a
consortium of world experts and teams in the participating countries
to develop and administer the tests. A Technical Advisory Group (TAG)
composed of eight international experts and chaired by Peter Ewell was
responsible for providing advice on matters such as instrument devel-
opment, translation and adaptation procedures, validation activities,
scoring and verification procedures and feasibility evaluations. The TAG
was also asked to review and provide feedback on documents when so
requested.
The feasibility study was implemented in three phases. The first phase
involved the development of the provisional assessment frameworks
and testing instruments appropriate for an international context for
each strand of work (generic skills, economics and engineering), and
their small-scale validation (cognitive labs and small-scale testing with
students) across languages and cultures.
The second phase comprised the practical implementation
and the psychometric analysis of the results. The three assessment
instruments (generic skills, economics and engineering) and contextual
surveys were implemented in a small group of diverse higher educa-
tion institutions to determine the best ways to implicate, involve and
motivate leaders, faculty and students in the initiative. Furthermore,
the data collected as part of this second phase enabled psychometric
analyses such as bias analyses and in-depth examination of the validity
and reliability of the construct performance measures.
The validity of the AHELO feasibility study results depended on the
quality of the instruments used to assess students’ learning outcomes
and to capture the contextual background, which implied setting up a
complex quality control system. Assessment frameworks were developed
to establish the purpose of the assessment and to provide a clear defini-
tion of what was being assessed, a description of the items to be used and
the basis for interpreting the test results. Some assessment instruments
were developed through the creation or selection of items to match
the table of specifications for the frameworks. National translation
and adaptation of assessment instruments and surveys was subject to
quality control to ensure that the small-scale testing would produce
comparable results across countries. Then, small-scale validation of
the instruments through pilot-testing of the items was developed with
students similar to the target population. Lastly, a final review of the
assessment and survey instruments was made, using results from small-
scale validation activities, feedback collected from respondents and
consultations conducted with stakeholders.
However, the generic skills instrumentation did not follow the usual
development process described above. The study design sought to adapt
an existing instrument and did not include the development of an
international version of an assessment framework. Accordingly, work
started with the adaptation of one component of the existing CLA
instrument. This second phase also comprised the actual implementa-
tion, including test administration and scoring of students’ responses,
as well as data analysis, reporting and evaluation of the scientific and
practical feasibility of AHELO (OECD, 2012, p. 91).
Lastly, phase three aimed to explore approaches and methodologies
for identifying added value, meaning the contribution of higher educa-
tion institutions to students’ outcomes, or the students’ learning gain
once their incoming abilities are taken into account. However, the
difficulties of this task are hard to overcome (OECD, 2008), which
circumscribed phase three to a short literature review of value-added
measurement approaches followed by a meeting of a panel of value-
added experts.

Results – the views of the TAG


The outcomes of the feasibility study were presented in a report
published by OECD. A first volume, published in December 2012 (OECD,
2012), describes in detail the design and implementation processes,
and a second volume, published in March 2013 (OECD, 2013a), con-
tains detailed information on data analysis and national experiences.
A conference was held in Paris, on 11–12 March 2013, to discuss the
lessons from the feasibility study and to propose the next steps. A third
and final volume was planned for publication in April 2013, containing
further insights and the results of the conference. However publication
was postponed until September.
The feasibility study was confronted with a number of difficulties due
to insufficient resources and a timeline that was far too short. In 2007
the initial idea was that the feasibility study should be limited in scope,
including ‘at least three countries in three languages’, and that assess-
ments be administered ‘from five to ten institutions in each country’
(OECD, 2012, p. 70). However the global fiscal crisis severely limited the
available budget. Financial problems were reported in detail at several
meetings of the AHELO Group of National Experts (GNE). At the fourth
meeting (March 2010) the GNE endorsed a proposal to initiate the first
phase of work to keep momentum while making the second phase
optional or conditional on funding availability. At the sixth meeting
(March 2011) (OECD/IMHE, 2011a, 2011b) it was reported that the
budget of AHELO was supported mainly by participating countries
(79%), followed by foundations (14%), the OECD Secretary General
special funds (5%) and non-participants (2%). Indeed, in order to raise
funds several countries were allowed to join the study quite late – being
relatively less well-prepared compared to early country participants –
increasing participation to 17 countries and 248 institutions, which
was probably excessive for a feasibility study. Despite this increase in
participation the budget was still too small and the GNE was invited to
discuss possible options for the second phase, including further fund-
raising and cost reductions (OECD/IMHE, 2011b) but was unable to reach an
agreement at that meeting.
Peter Ewell (2013), the chair of the TAG, reports that the financial
situation not only had a negative impact on the activities of the TAG
(OECD, 2013a, p. 155), but also negatively influenced the feasibility
exercise by preventing the implementation of several relevant activi-
ties. However, implementation time was also too short. As recognised
in Volume 1 of the OECD report (OECD, 2012), the testing materials,
online platforms and survey operations procedures had to be delivered,
and National Project Managers (NPMs) and Lead Scorers (LS) had to be
trained within a tight timeframe that left little flexibility in terms of
delivery time. These time constraints also had an impact at the country
level, where NPMs were required to react quickly, without much time
for consulting on and translating the contextual dimension materials
and verifying and validating data files (OECD, 2012, p. 88).
Despite these difficulties, the TAG believes that the AHELO feasibility
study constituted an unprecedented multinational data collection effort
at the level of higher education. Data on student learning outcomes was
collected in three domain strands in 17 different countries or systems
by means of surveys. Although some countries/systems experienced
more difficulties than others, all participating countries reported they
learned something from the experience and most would do it again.
Just as important, the feasibility study generated a range of important
findings about student learning at the higher education level, as well
as dozens of lessons about how such a project should be implemented
in the future. The TAG also believes that overall co-ordination was a
particular strength of the feasibility study. The TAG emphasised some
methodological aspects as positive, such as the assessment adminis-
tration, the technical aspects of the data analysis and the instrument
design for purpose-built instruments.
At the same time, the TAG believes that some aspects of the feasibility
study did not go so well, especially that it was seriously under-resourced
and was implemented on far too short a timeline. More resources and
a broader schedule could have allowed for such important features as
more cognitive interviews and pilots of newly built instruments, full-
scale field trials of administration and scoring arrangements, and more
time for de-briefing and collective discussion of obtained results.
Another weakness recognised by the TAG was related to the difficulty
and contextualisation of constructed-response tasks (CRTs). On the
one hand, although the CRTs used by the engineering and economics
strands were of high technical quality, they were simply too difficult
for many students to engage effectively with and perform well. On the
other hand, the CRTs used in generic skills were based on the CLA and
proved excessively ‘American’ in an international context. The lack
of time also negatively influenced the translation and adaptation of
materials to national contexts. These aspects might explain the low
response rates of students in a number of cases (as little as between 3.5%
and 15%).
Although the TAG considers that the Consortium’s analyses of
the massive amount of data generated by the feasibility study were
exemplary from a technical standpoint, the reporting of these results
through the Consortium’s final report was overly complex, and therefore
difficult to understand, and the report lacked clearly stated conclusions
on which to make policy decisions for the future.
Finally, the contractual arrangements were also seen as a weakness of
the feasibility study. The AHELO feasibility study began with separate
contracts with its two principal contractors – Australian Council for
Educational Research (ACER) and Council for Aid to Education (CAE),
developer of the CLA tool – which resulted in poor communication
among the contractors and occasional duplication of effort. Furthermore,
no tendering process was used to procure or develop instruments for the
generic skills strand, which is rather unusual. By the time this situation
was addressed by re-structuring contractual arrangements so that CAE
was a subcontractor of ACER under the Consortium, a habit of inde-
pendence – exacerbated by commercial rivalry – made it difficult for
both parties to establish a culture of partnership.
The TAG lists several additional lessons that should be taken forward
for any international assessment effort of the same size and scale,
including more opportunities for stakeholder participation in assess-
ment design and in the analysis of assessment results. A full-scale test
of all instruments and administration arrangements could enable stake-
holder participation in a ‘design-build’ process that would both pilot
these designs and enable more stakeholder engagement in making them
better, for example in reporting results and sharing data with countries
and institutions. Finally, TAG further recommends that any such study
should be better located and integrated with the international scholarly
community that is examining student learning outcomes and the poli-
cies and practices that support better learning, creating an opportunity
to better align the emerging scholarly and policy dialogue about quality.
On balance, the TAG believes that the AHELO feasibility study was
soundly executed and provided many lessons that will inform inter-
national assessment efforts for many years to come. Among its most
important contributions were recommendations to ensure consistency
of administration and scoring across contexts, steady reinforcement of
the need for contextual data – especially at the beginning of the study,
recommendations to reinstate an MCQ component in generic skills,
and recommendations to the OECD Secretariat about how to prepare its
final report. However, the TAG also recognises that the lack of ‘clearly
stated conclusions on which to make policy decisions for the future’
(Ewell, 2013, p. 169) does not allow for ‘definitive conclusions about
feasibility’ to be made at this time (ibid., p. 163).

Conclusions

The OECD recognised (OECD, 2012) that the development of an
international AHELO generated much discussion throughout 2006–8
in higher education circles. While some policy makers, experts and
stakeholders welcomed the idea enthusiastically, participating actively
in its development, others were more critical of the approach and vocal
in pointing to its potential risks, namely that AHELO data would be
used as a ranking, or as a basis for reallocating public resources or fund-
ing within HEIs towards teaching, to the detriment of other missions.
Another preoccupation mentioned was the complexity of engaging in
fair comparisons of extremely diverse institutions in terms of their mis-
sions, profiles and student bodies. Detractors highlighted the limited
information that standardised tests could yield for institutions and
faculties, and the risk of simplistic conclusions. Another problem was
the potential impact on institutional autonomy and academic freedom:
fears that AHELO might be forced on institutions, and could over
time yield homogenisation and constrain academic freedom. Still
another risk was related to the merits and applicability of AHELO’s focus
on generic skills, given the different academic traditions in different
parts of the world and the fundamental debates on the relevance of
assessing generic skills independently from the specific disciplines.
Unfortunately the feasibility study had a number of implementation
problems, including under-funding and an inadequate timeline, while
the final results presented in the report lacked clearly stated conclusions
on which to make policy decisions for the future. Despite these recognised
weaknesses, AHELO was an important exercise that increased under-
standing of the difficulties of implementing an outcomes measure-
ment system that is valid across different languages and cultures. The
feasibility study also elevated the importance of learning outcomes
in the minds of students, academic staff and institutions themselves.
Moreover, there were some country-specific benefits, which have been
reflected in the level of conversations around learning outcomes, and
in the way the assessment methodology has helped to drive pedagogical
reflection.

However, the future of AHELO remains uncertain. The meeting of the
governing board of IMHE, convened just after the March Conference,
concluded that there remain deep concerns regarding methodological
aspects and, in the view of a number of members, intractable challenges
associated with developing a common set of standards (OECD, 2013b).
The concerns regarding methodology reflect wider concerns over the
whole purpose of the exercise. These include:

(a) the tensions between whether it is a high-stakes/accountability tool
versus whether it is/can be/should be a low-stakes developmental/
self-improvement mechanism;1
(b) the fact that there are other instruments around, some at the disci-
pline level and others developed at national system level; and
(c) related to the above, significant concerns regarding the number of
instruments available, and the impact of survey activity on students
and their preparedness to engage.

The Board also argued that different countries have different motiva-
tions to engage and are at different stages of development, which results
in different levels of engagement. The report has not provided accurate
data on the costs and benefits of participating in a full-scale exercise –
it can only be estimated from the experience of the feasibility study
that costs will be substantial. Finally, there was a further short discus-
sion on the issue of low-stakes versus high-stakes approaches. There
was a strong sense that it would not be possible to pursue a low-stakes
approach and contain it as such: inevitably, it was felt, it would become
or be used for high-stakes purposes, especially rankings. There was
therefore no doubt about the strongly negative overall sentiment of the
Governing Board towards the low-stakes approach suggested by the
Education Policy Committee (EDPC) in its roadmap for AHELO’s
longer-term development.
The considerations of the Governing Board of the IMHE were con-
veyed to the EDPC but so far no reactions are known. The publication of
the third volume of the AHELO feasibility report, including the results
of the March Conference, was postponed to September 2013. It is prob-
able that a full-scale AHELO will not be possible in the near future, at
least not before the analysis of the results of the feasibility
study is completed, the financial situation is fully clarified and an
agreement on the purposes of AHELO is reached.

Note
1. A low-stakes approach means the results of the exercise will not have conse-
quences for those students and institutions participating in it. This implies
that governments would not receive data in a form that would allow them to
identify the results by higher education institution.

References
Adams, S. (2008) Learning Outcomes Current Developments in Europe: Update on the
Issues and Applications of Learning Outcomes Associated with the Bologna Process.
General conference presented in the Bologna seminar on learning outcomes
based in higher education: The Scottish experiences, 21–22 February 2008, at
Heriot-Watt University, Edinburgh, Scotland.
Adams, S. (2006) ‘An introduction to learning outcomes: A consideration of
the nature, function and position of learning outcomes in the creation of the
European Higher Education Area’, in E. Froment (ed.), EUA Bologna Handbook:
Making Bologna Work, Volume 4 (Berlin: RAABE).
Almeida, L.S. (2002) ‘Facilitar a aprendizagem: ajudar os alunos a aprender e a
pensar’, Psicologia Escolar e Educacional, 6, 155–165.
Amaral, A. and Neave, G. (2009) ‘The OECD and Its Influence in Higher
Education: A Critical Review’, in R.M. Bassett and A. Maldonado-Maldonado
(eds), International Organizations and Higher Education Policy. Thinking Globally,
Acting Locally? (New York and London: Routledge), pp. 82–98.
Bennett, D. (2001) ‘Assessing Quality in Higher Education’, Liberal Education,
87(2), 1–4.
Bergen Communiqué (2005) The European Higher Education Area: Achieving the
Goals. http://www.ond.vlaanderen.be/hogeronderwijs/bologna/documents/
MDC/050520_Bergen_Communique1.pdf (accessed 22 September 2013).
Berlin Communiqué (2003) http://www.ond.vlaanderen.be/hogeronderwijs/
bologna/documents/MDC/Berlin_Communique1.pdf (accessed 22 September
2013).
Bloom, B.S. (ed.) (1956) Taxonomy of Educational Objectives: The Classification of
Educational Goals, Handbook I: Cognitive Domain (New York: McKay).
Bologna Declaration (1999) http://ec.europa.eu/education/policies/educ/bologna/
bologna.pdf (accessed 22 September 2013).
Bologna Seminar (2008) http://www.ehea.info/Uploads/Seminars/BS_P_
Report_20080915_FINAL.pdf (accessed 22 September 2013).
Education International (2007) Assessing Higher Education Outcomes: A ‘PISA’
for Higher Education? November 2007, http://download.ei-ie.org/docs/
IRISDocuments/Education/Higher%20Education%20and%20Research/
Higher%20Education%20Policy%20Papers/2008-00036-01-E.pdf (accessed
22 September 2013).
ENQA (2005) Standards and Guidelines for Quality Assurance in the European Higher
Education Area (Helsinki: ENQA).
European Commission (2009) ECTS Users’ Guide (Luxembourg: Office for Official
Publications of the European Commission).
Ewell, P. and Miller, M.A. (2005) Measuring up on College-level Learning (San Jose,
CA: National Center for Public Policy and Higher Education).
Ewell, P. (2013) ‘Role of the AHELO Feasibility Study Technical Advisory Group
(TAG)’, Assessment of Higher Education Learning Outcomes, AHELO Feasibility
Study Report, Volume 2: Data Analysis and National Experiences (Paris: OECD),
pp. 152–71.
Harvey, L. and Newton, J. (2006) ‘Transforming quality evaluation: moving
on’, in D. Westerheijden, B. Stensaker and M.J. Rosa (eds), Quality Assurance
in Higher Education: Trends in Regulation, Translation and Transformation
(Dordrecht: Springer), pp. 225–45.
Henry, M., Lingard, B., Rizvi, F. and Taylor, S. (2001) The OECD, Globalisation and
Education Policy (Oxford: Pergamon and IAU Press).
Kennedy, D., Hyland, A. and Ryan, N. (2006) ‘Writing and using Learning
Outcomes’ in Bologna Handbook, Implementing Bologna in Your Institution, C3.4-
1, pp. 1–30.
Kuh, G.D. (2008) High-Impact Educational Practices: What They Are, Who Has
Access to Them, and Why They Matter (Washington, DC: Association of
American Colleges and Universities).
Leuven Communiqué (2009) The Bologna Process 2020: The European
Higher Education Area in the New Decade. http://www.ond.vlaanderen.be/
hogeronderwijs/bologna/conference/documents/Leuven_Louvain-la-Neuve_
Communiqué_April_2009.pdf (accessed 22 September 2013).
London Communiqué (2007) Towards the European Higher Education Area: Responding
to Challenges in a Globalised World. http://www.ond.vlaanderen.be/hogeronder-
wijs/bologna/documents/MDC/London_Communique18May2007.pdf
(accessed 22 September 2013).
Martens, K., Balzer, C., Sackmann, R. and Weymann, A. (2004) ‘Comparing
Governance of International Organisations – The EU, the OECD and
Educational Policy’, TransState Working Papers No.7, Sfb597 ‘Staatlichkeit im
Wandel – (Transformations of the State)’, Bremen.
Martens, K. and Wolff, K.D. (2009) ‘Boomerangs and Trojan Horses: The
Unintended Consequences of Internationalizing Education Policy through the
EU and the OECD’, in A. Amaral, G. Neave, C. Musselin and P.A.M. Maassen
(eds), European Integration and Governance of Higher Education and Research
(Dordrecht: Springer), pp. 81–107.
Nusche, D. (2008) Assessment of Learning Outcomes in Higher Education:
A Comparative Review of selected Practices, OECD Education Working Paper
No. 15, (Paris: OECD).
OECD Ministerial Conference (2006) ‘Summary’, http://www.oecd.org/greece/
summarybythegreekministerofnationaleducationandreligiousaffairsmariet-
tagiannakouaschairofthemeetingofoecdeducationministers.htm (accessed 22
September 2013).
OECD (2008) Measuring Improvements in Learning Outcomes (Paris: OECD
Publishing).
OECD (2009a) Assessment of Higher Education Learning Outcomes, leaflet (Paris:
OECD).
OECD (2009b) Analytical Framework for the Contextual Dimension of the AHELO
Feasibility Study (Paris: OECD).
OECD (2012) Assessment of Higher Education Learning Outcomes, AHELO Feasibility
Study Report. Volume 1: Design and Implementation (Paris: OECD).
OECD (2013a) Assessment of Higher Education Learning Outcomes, AHELO Feasibility
Study Report. Volume 2: Data Analysis and National Experiences (Paris: OECD).
OECD (2013b) 17th meeting of the IMHE Governing Board. Draft Summary Record,
EDU/IMHE/GB/M(2013)1 (Paris: OECD).
OECD/IMHE (2011a) Sixth Meeting of the AHELO Group of National Experts. Draft
Summary Report, EDU/IMHE/AHELO/GNE/M(2011)1 (Paris: OECD).
OECD/IMHE (2011b) Possible Options and Business Models for Phase 2 –
Implementation, EDU/IMHE/AHELO/GNE(2011)9 (Paris: OECD).
OECD/IMHE (2013) http://www.oecd.org/site/ahelo/ (accessed 22 September
2013).
Pascarella, E.T. and Terenzini, P.T. (1991) How College Affects Students (San
Francisco: Jossey-Bass).
Pascarella, E.T. and Terenzini, P.T. (2005) How College Affects Students: Volume 2,
A Decade of Research (San Francisco: John Wiley & Sons, Inc).
Pintrich, P.R. (2004) ‘A conceptual framework for assessing motivation and
self-regulated learning in college students’, Educational Psychology Review, 16,
385–407.
Prague Communiqué (2001) http://www.ond.vlaanderen.be/hogeronder
wijs/bologna/documents/MDC/PRAGUE_COMMUNIQUE.pdf (accessed
22 September 2013).
Ribeiro, I.S. and Smith, C.F. (2007) ‘Auto-regulação: Diferenças em função do
ano e área em alunos universitários’, Psicologia: Teoria e Pesquisa, 23, 443–48.
Schunk, D.H. (1994) ‘Self-regulation of self-efficacy and attributions in academic
setting’, in D.H. Schunk and B.J. Zimmerman (eds), Self-regulation of Learning
and Performance: Issues and Educational Applications (Hillsdale: NJ: Erlbaum),
pp. 75–99.
Shavelson, R.J. and Huang, L. (2003) ‘Responding Responsibly to the Frenzy to
Assess Learning in Higher Education’, Change, 35(1), 11–19.
Sin, C. (2013) ‘Lost in translation: the meaning of learning outcomes across
national and institutional policy contexts’, Studies in Higher Education, http://
www.tandfonline.com/eprint/7StsJXGJ7jTKnzSreYnM/full#.Ufp9jm2nSHN
(accessed 22 September 2013).
Zimmerman, B.J. (2000) ‘Self-efficacy: An Essential Motive to Learn’, Contemporary
Educational Psychology, 25, 82–91.
Yelland, R. (2008) ‘The OECD Programme on Institutional Management in Higher
Education (IMHE), Activities Report 2006’, http://www.oecd.org/dataoecd/
10/30/41379465.pdf (accessed 22 September 2013).
6
Risk, Trust and Accountability
Colin Raban

Introduction

In June 2011 the UK Government published a White Paper setting out
its plans for the reform of higher education. These included the intro-
duction of what it described as a ‘genuinely’ risk-based approach to
the regulation of universities.1 The White Paper instructed the English
Funding Council to consult on the criteria against which risk should be
assessed and the frequency with which institutions should be reviewed.
The anticipated outcome was a ‘very substantial deregulatory change for
institutions that can demonstrate low risk’ (BIS, 2011, paras 3.19–3.20).
This was not the first time that the sector had been promised such
an approach. More than a decade earlier the Funding Council had
itself adopted a risk-based method for the regulation of universities’ cor-
porate affairs. Then, in 2000, it required institutions to introduce their
own processes ‘for identifying, evaluating and managing risks’, and
this was closely followed by the publication of the Quality Assurance
Agency’s (QAA) new methodology for institutional review (HEFCE,
2000). The new method was immodestly described as ‘a major evolu-
tionary step’ that would ‘bring much closer the possibility of a reliable
process in which outside intervention . . . is truly in direct relation to
the risk’ (QAA, 2002a, para 69). Since then, the term ‘risk’ has been used
with growing frequency in QAA audit reports and the idea of risk-based
quality management has emerged as a major theme in the agency’s
recent advice to institutions (Raban, 2008; QAA, 2011, 2012).
Hyperbole has punctuated higher education’s flirtations with risk-
based regulation. ‘Very substantial change’ is a late version of what,
ten years earlier, had been a ‘major evolutionary step’. By describing its
risk-based approach as ‘genuine’, the White Paper seemed to signal a
decisive departure from earlier forays in the field. We may be at a critical
juncture in the regulation of higher education, although the exact
nature of this new departure is at present unclear and the implications
for institutions are uncertain.
The first purpose served by this paper is to consider the nature and
significance of the new approach. I shall also argue that, in discharging
their responsibilities for assuring the quality and standards of their
provision, universities should employ the ideas of risk and risk manage-
ment in ways that are very different from the proposals set out in the
White Paper and in recent HEFCE and QAA publications.

Trends in quality assurance

Elsewhere in this volume, Amaral has identified risk management as
one of several new approaches to quality assurance, with Europe being
led, in this case, by recent developments within the United Kingdom.
He argues that risk management and other initiatives have emerged as
a means of calling universities to account against the background of a
pan-European loss of trust in institutions and the growing use of mar-
kets as instruments of public policy.
The case of the UK is particularly striking. The potential impact on
public spending of ‘massification’ (the rise in the age participation rate
converting what had been an elite into a mass system) has been miti-
gated by the ‘marketisation’ of higher education. At 0.69 per cent of
GDP, public expenditure on higher education is now the second lowest
in Europe; and the teaching grant will decline by 80 per cent over the
next three years whilst income from the fees paid by UK and overseas
students has quadrupled over a ten-year period (Universities UK, 2011).
This seismic shift in the balance between public and private investment
has been accompanied by the creation of a marketplace with competi-
tive bidding for publicly funded student places, a ‘level playing field’
for private (and often ‘for profit’) providers and the encouragement of
a ‘consumerist’ attitude on the part of students.
Ten years ago, risk-based financial regulation by the English Funding
Council was part of a ‘new deal’ for institutions: they would enjoy
‘a greater degree of freedom and flexibility’ if they could demonstrate
that they were conducting their affairs efficiently (Greaves, 2002;
Cabinet Office, 2003). The question for us is whether the recent
announcement of a risk-based approach to regulation will relax the
burden of accountability for established, high performing, ‘low risk’
and (for these reasons) trusted institutions. Or is it somehow consistent
with a secular decline in trust, strengthened accountability and the
ever-greater exposure of institutions to market (or at least quasi-market)
forces?
Before we answer these questions, it is worth considering the other
European developments in quality assurance identified by Amaral. The
first two are entirely consistent with one another and with marketisation
and the decline in trust. The classification and ranking of institutions
on the one hand, and the evaluation of learning outcomes (and other
output measures) on the other, are instances of a more general determina-
tion to secure ‘transparency’ and accountability in a competitive higher
education market. As the White Paper put it, making the system ‘more
responsive to students and employers . . . depends on access to high quality
information about different courses and institutions’ so that ‘better
informed students’ can ‘take their custom to the places offering good
value for money’ (BIS, 2011, paras 2.8 and 2.24).
The third development – enhancement – might be different. What had
been widely regarded as a particularly intrusive form of accountability –
teaching quality assessment and subject review – was replaced in 2002
by a set of ‘transitional arrangements’. Discipline-level engagements
were now to be ‘developmental’ in nature and incorporated within the
audit of institutions’ quality management systems. The arrangements
were presented as intrinsically beneficial to institutions and as a means
of minimising the burden on those that had ‘robust internal review and
reporting processes’ (QAA, 2002b). Then, in what seemed the next step
in a logical sequence, the transitional arrangements were succeeded by
a new audit methodology that placed ‘enhancement’ centre stage (QAA,
2006, 2009).2
The decade had opened with universities mounting a high profile
campaign for a ‘lighter touch’ and a reduction in the ‘burden of bureau-
cracy’. In 1999, Howard Newby, then president-elect of the Committee
of Vice Chancellors and Principals, complained about ‘the end-
less rounds of assessment’ undergone by British institutions (Newby,
1999). This was echoed a year or two later by academic members of the
House of Lords. As Lord Norton put it, the consequences of the then
current accountability arrangements were ‘pernicious and long term’
and they threatened to ‘undermine rather than enhance the quality
of teaching’ (Hansard, 2001).3 The transitional arrangements promised
accountability of a less intrusive kind, based on the premise that respon-
sibility for assuring the quality and standards of programmes should
rest with institutions and not with QAA. And, in the period between
2006 and 2009, the Agency’s (admittedly pale) imitation of Scotland’s
‘enhancement-led’ approach to institutional review could be interpreted
as a concession to the ‘attempt by universities to regain trust’, repatri-
ating the responsibility for quality to institutions themselves (Amaral,
this volume).

Reading the runes

The nature and significance of any such developments will be
governed by the balance of power between the various stakeholders in a
higher education system. Professor Amaral has already quoted Sursock’s
point that ‘the quality assurance debate . . . is really about power: it is a
question of how quality is defined and by whom’. Or, to put the point
more generally, it is a question of who or what within the system has
the power to define the meaning of a policy, idea or concept, and to
determine the actions of institutions and their members.
The point is neatly captured by Amaral’s medieval models of qua-
lity assurance (see this volume) distinguished by the relative power of
dons, students and external authorities. This is similar to Martin Trow’s
distinction between ‘three fundamental ways in which colleges and
universities are linked to their surrounding and supporting societies’:
accountability, trust and the market. Trow suggests that the problems
facing universities and university systems are best understood in terms
of the various ways in which these forms of linkage are balanced and
combined (Trow, 1996).
For our purposes Trow’s options might be interpreted as three distinct
forms of accountability. Relatively autonomous institutions in a posi-
tion of ‘trust’ retain a peer accountability to one another and, through
their members, to their wider academic communities. What Trow terms
‘accountability’ might be redefined as a specific form of accountability
to one or more external agencies which, in European systems, are nor-
mally represented by the State. The third option – markets – is clear
enough: it entails an accountability to individual purchasers that is
secured through the operation of market forces.
Writing in 1996, Trow had described the British system as ‘something
like a command economy’ – a state-mediated quasi-market in which
there is ‘the ideology of market relations . . . without markets’. Trow was
referring to a set of arrangements which applied to all institutions in the
1990s, but to which the polytechnics and colleges of higher education
had had a longer exposure. Previously, the relationship between govern-
ment and the universities was characterised by a high degree of trust,
with the independence of institutions being protected by some kind
of ‘self-denying ordinance’ on the part of the State in spite of the fact
that it was the primary source of university funds. By the mid-1990s,
however, the UK government had greatly strengthened its control over
universities and its ‘leash’ had become ‘very short indeed’.
More recently and particularly under the present government, there
has been a decisive move in a market direction. This was most clearly
signaled by the Government’s response to the Browne Report, which
had contended that ‘we should no longer think of higher education
as the provision of a public good, articulated through educational
judgement and largely financed by public funds’ (Collini, 2010). But
this move in a market direction is not incompatible with the con-
tinuing accountability of universities to the State or its surrogates.
Indeed, shortly before the 2010 general election, a Parliamentary Select
Committee had recommended the assumption by QAA of responsibility
for the assurance of academic standards, and the introduction of
a system for accrediting universities (House of Commons, 2009b,
pp. 147–9).4 And, although the government at that time decided not to
act on these recommendations, the higher education reforms set out in
the White Paper are designed to promote the market responsiveness of
institutions and to foster ‘rational’ behaviour on the part of their new con-
sumers. As Amaral has observed (this book), the marketisation of higher
education requires intervention by the State to create the conditions for
the efficient operation of the market and to mitigate its negative effects.
What, then, is the significance of the vogue for ‘enhancement’ and of
the recent proposals for ‘risk-based’ regulation? Are these developments
consistent with the trend towards state managed marketisation, or do
they signal some diversion from, or even a reversal of this trend?
Notwithstanding repeated assertions to the effect that reviews at subject
and institutional level have demonstrated the consistently high quality
and standard of university provision, it is doubtful whether ‘enhance-
ment’ reflects a more ‘trusting’ mode of engagement between QAA and
institutions. Even in Scotland, with its ‘enhancement-led’ approach to
institutional review, enhancement tends to be treated as an adjunct to
conventional forms of quality assurance: there is an emphasis on the
accountability of front line staff for improving their academic practice,
and institutions are expected to address Agency-defined ‘enhancement
themes’. Enhancement may imply a ‘value for money’ accountability
of institutions to government, or an accountability to ‘customers’ for
improving the quality of the student experience; rarely does it entail
transformational change in support of the professional commitment of
academic staff and to meet the needs of their institutions (Raban, 2007).
Some have, though, interpreted the recent announcement of a
risk-based approach as evidence of a restored ‘trust’ in universities
and in their academic staff. In an article published shortly after the
release of the 2011 White Paper ‘a thoughtful minister’ was said to be
‘reversing 30 years of targets’ and the authors argued that universities
needed to ‘be careful not to misplace the trust’ that had been placed
in them. This assessment seems to have been premature: it was based
on a naïve reading of the promise of ‘deregulatory change’ and on the
misconception that ‘quality assessment’ would be ‘triggered only in
unusual circumstances’ (Thomas and Peim, 2011). Indeed, a year later
the Director General of the Russell Group expressed her members’
disappointment over the outcomes of the White Paper consultation.
The Funding Council, she said, had missed the opportunity to adopt a
‘light touch quality assurance regime’, with ‘considerably less inspec-
tion and bureaucracy’ for her members (Times Higher Education, 2012).
In the UK at least, the purpose of risk-based regulation is not to reduce
but to re-focus the accountability of institutions. It is consistent with
the longer-term project of reducing the costs of external review, a pro-
ject that has acquired a new urgency with the need to reduce the fiscal
deficit (HEFCE, 2005). It is also an essential component of the attempt
to develop a higher education system in which ‘the forces of competi-
tion replace the burdens of bureaucracy in driving up the quality of the
academic experience’. This would be a system that has been described as
‘more diverse, dynamic and responsive’ in which the admission of new
providers, including those from the private sector, will ‘drive reform
across the sector’ (BIS, 2011, pp. 3–5; Willetts, 2011, pp. 7–8). As the
sector becomes more diverse, ‘risk-based’ regulation would provide a
means by which QAA could ‘tailor’ external review to the ‘individual
circumstances’ of providers, focusing the effort of the Agency ‘where
it will have most impact’ (HEFCE, 2012, pp. 3–5; BIS, 2011, para 3.19;
QAA, 2013). As conceived by QAA and the Funding Council, risk-based
regulation amounts to little more than a form of triage.

Risk-based regulation

In 2011 the White Paper had announced the government’s inten-
tion to introduce an approach to regulation that would be ‘genuinely
risk-based’; the outcomes of the Funding Council’s consultation were
published in October 2012; and in January 2013 the QAA set out its
proposals for implementing the new approach (QAA, 2013). None of
these documents displays an awareness of previous or parallel ventures
in the field; they do not offer a definition of risk or an analysis of the
potential sources of risk, and there is little indication that the Agency or
the Funding Council will use the new approach to manage actively the
risks it identifies.
In the 1990s risk management became a corporate governance
requirement for public sector organisations, and the principles of
risk-based regulation are now well-established in the financial services
sector. As I indicated earlier, these principles have also governed the
English Funding Council’s engagements with higher education insti-
tutions. The current debate, however, betrays a curious absence of
historical or contextual awareness. This includes a failure to consider
whether the shortcomings in the regulation of the banking industry
might have implications for the proposed risk-based regulation of
universities. Members of the Russell Group might also have done well
to heed the warnings of the House of Commons’ Regulatory Reform
Committee: ‘analysts and commentators’ were urged to ‘avoid confus-
ing risk-based regulation (with) so-called “light touch” approaches’
(House of Commons, 2009a).
Following the publication in 2001 of a good practice guide on risk
management, the Funding Council commissioned a review of emergent
practice in the higher education sector (HEFCE, 2001, 2005). Both pub-
lications offered definitions of ‘risk’ and both discussed the potential
sources of risk for higher education institutions. This is also a feature
of the various documents produced by the Financial Services Authority
(for instance FSA, 2006). By contrast, the White Paper and the consulta-
tion documents do not explain their usage of the term risk and, apart
from a passing reference to the need for ‘a quality assurance regime’ that
is ‘well-adapted’ to the ‘challenges’ presented by the reforms set out in
the White Paper, there is no recognition of the ways in which govern-
ment policy has itself made higher education a more ‘risky business’
(BIS, 2011, para 3.18). The Funding Council merely dodged these issues
when they were raised by a number of the respondents to its consulta-
tion paper (HEFCE, 2012, Annex A, paras 105–8, 112).
The proposed approach to external review may be ‘risk-based’ but it
lacks the characteristics of a risk management approach. Although ‘risk’
remains undefined, the term is used in a way that suggests that the
purpose of the new method is to enable QAA and the Funding Council
to deal in a ‘proportionate’ manner with those institutions where there
is evidence that quality, and perhaps standards, are already at risk.
The frequency and intensity of QAA review will depend mainly on an
institution’s ‘track record’ or its performance, and only incidentally on
factors that might indicate that the provision offered by that institution
could be placed at risk in the future. These factors relate to the char-
acteristics of the provider or of its provision, and not to the external
‘systemic’ risks for which the regulators and their political masters
themselves bear some responsibility (FSA, 2009, p. 92). Closer scrutiny,
not remedial or supportive action, is the only intervention that is
contemplated as a consequence of an institution or its provision being
found to be at risk.5

Managing academic risk

UK universities enjoy legal autonomy, and responsibility for the man-
agement of quality and standards (and thus for the management of
academic risk) lies with institutions and not with any central agency.
Whilst it is a condition of receiving public funds that degree awarding
institutions submit to QAA oversight, the powers of the Agency are cur-
rently limited to the audit of universities’ quality assurance processes
rather than to any evaluation of the actual quality and standards of
their provision.
Given this important distinction between the responsibilities of insti-
tutions and those of QAA, it is unlikely that the Agency’s ‘risk-based’
approach will be fit for institutional purpose. If it is not the purpose of
the Agency’s approach to manage academic risks, the quality assurance
arrangements of institutions must perform this function. In fact, insti-
tutional responsibility for the management of academic risks is given
particularly strong emphasis in that part of the QAA Quality Code that
deals with collaborative provision, an area of activity that is commonly
regarded as entailing particularly high levels of risk. Universities are
exhorted to adopt a risk-based approach ‘to developing and managing
their collaborative activity’: ‘it is . . . incumbent on (them) to assess the
risks involved and manage these appropriately’6 (QAA, 2012, Chapter
B10, p. 5 and indicator 5).
If an institution is to develop a ‘risk management’ approach it needs,
first, to be clear about what it understands by the term ‘risk’. The English
Funding Council provided a useful definition in one of its earlier pub-
lications. ‘Risk’, it suggested, is ‘the threat or possibility that an action
or event will adversely or beneficially affect an organization’s ability to
achieve its objectives’ (HEFCE, 2001, para 10). This definition is helpful
because it stresses the positive (in the sense of ‘opportunity’) as well as
the negative aspects of risk, emphasising that risk management ‘is not
a process for avoiding risk’. The Funding Council’s guidance went on to
explain that ‘when used well (risk management) can actively encourage
an institution to take on activities that have a higher level of risk
because the risks have been identified and are being well managed’
(HEFCE, 2001, para 16).
The definition is notable also for the way in which its use of the
future tense suggests that the identification and assessment of risk is an
act of prediction, inviting an analysis of the possibly causal relationship
between a condition or an event (the risk) and an adverse or beneficial
outcome. Rather than adopting a definition which implies a helpless
surrender to the ‘insecurities, uncertainties and loss of boundaries’ of
Ulrich Beck’s ‘risk regime’, risk management necessarily rests on the
premise that ‘“something can be done” to prevent misfortune’ (Beck,
1992, p. 70; Lupton, 1999, p. 3). In this sense, the risk manager sub-
scribes unashamedly to the positivist dictum, savoir pour prévoir, afin de
pouvoir.
The White Paper and the HEFCE consultation documents tend to
equate ‘risk’ with outcome: a high performing institution is ‘low risk’,
and one with little or no ‘track record’ is ‘high risk’. However, the FSA
and other organisations distinguish between the outcome or detriment
and the ‘risk’, which is something that has ‘the potential to cause harm’
(FSA, 2006). As a Royal Society study group put it, ‘detriment is a mea-
sure of the expected harm or loss associated with (the) adverse event’.7
Assuming that QAA defines risk in the same way as the White Paper
the Funding Council, the identification of an institution as high risk
would result in post hoc action to rehabilitate or perhaps penalise an
institution. The implication of a risk management approach, on the
other hand, is that intervention following an assessment of risk should
include preventative measures to avert the reputational damage or
under-performance that might otherwise occur if no action were to be
taken.
Recognising that risk derives from both the properties of an institution
and its external environment, I have distinguished elsewhere between
‘risk potential’ and ‘risk factors’. Risk factors, or the previously men-
tioned ‘systemic risks’, are the many and various events that could occur
in the future, and the identification and predictability of these events
is contingent on our knowledge of the particular environments within
which we operate. Such events simply happen, but whether and how
they impact on an institution is a function of ‘risk potential’. This term
describes certain conditions or qualities – strengths and weaknesses –
that are inherent in an institution and its provision. Risk potential is the
institution’s capacity to exploit opportunities or to counter the threats
that may arise when an event occurs.8 It determines whether detriment
(or gain) is likely to ensue from exposure to a risk factor.
Managing risk requires first that the institution has, and continues
to have, a full understanding of its operating environment which,
for a department or faculty, will include the host institution itself.9
So, in addition to some assessment of the risks inherent in a depart-
ment and its provision (its risk potential), the management of academic
risk entails annual monitoring and review procedures that are ‘forward
looking’ and which perform a reconnaissance function, capturing the
intelligence on external threats and opportunities that is brought by
‘front line’ staff. This is important if institutions are to move beyond the
management and control of known risks to address the ‘new risks’ that
are likely to emerge in ‘a fast changing environment’ and where ‘past
experience is an uncertain or potentially misleading guide’ (Walker,
2009, paras 6.5–6).
The management of academic risks also entails the assessment of the
competence of operating units (programme teams, departments, facul-
ties and institutions) in dealing with the risks to which they are actually
or potentially exposed. In the past, QAA audits have provided a means
of assessing institutional competence in managing risk. Informed by tacit
knowledge of the risks that beset the sector, reports concluded with a
judgement of confidence based upon an evaluation of the institution’s
‘capacity and commitment to identify and address any situation that
has the potential to threaten the quality of programmes and/or the
standard of awards’ (QAA, 2009, Annex E).
The approach described in this section implies that any intervention
to manage academic risks should include the support and not just the
closer scrutiny of a potentially vulnerable unit or, if necessary, of a unit
that is already at risk. Referring specifically to external review processes,
a report of the European Universities Association argued that quality
assurance processes ‘will become effective only if (they are) forward
looking and orientated toward building and developing institutions
rather than simply inspecting them’ (EUA, 2005). In similar vein, the
later report of the QAHECA project invited both external agencies and
institutions ‘to commit to a developmental approach’, one that would
‘enhance institutions’ capacity to change’ (EUA, 2009).
Support for operating units complements the value placed on the
intelligence brought to an institution by its front line (academic and sup-
port) staff. This is a point that was made by the PricewaterhouseCoopers
(PwC) review of the English Funding Council’s guidance on risk man-
agement. The report found that the institutions that benefit most from
risk management are those that understand it to be a two-way process:
‘a way of feeding information up through the institution and providing
support and targeting resources where they are most needed’. Risk
management should not, it emphasised, ‘solely be associated with
accountability to the governing body (and to senior management and
stakeholders)’ (HEFCE, 2005, p. 7).
Which brings us back to Martin Trow. Each of his three forms of insti-
tutional accountability has its counterpart in the internal relationships
between staff and their managers. ‘Trust’ implies ‘donnish dominion’ –
institutional autonomy and the accountability of tenured academics to
their peers (Halsey, 1995). Accountability to the State and subjection to
market forces have their counterparts at institutional level in manage-
rialism and consumerism. The point made by the PwC report implies
both a degree of trust in front line staff and a ‘collegial’ tempering of
the managerialist approach to accountability that is becoming prevalent
in our universities. It is also consistent with Sue Wright’s argument
that ‘future systems of quality management and accountability [should
build on] staff’s professional values and their aspirations for improve-
ment’ (Wright, 2003, p. 1). She suggests that if university managers
were to ‘tap into [the] well of good ideas in departments and enable
them to flow productively through the institution’ they would discover
‘how staff’s current practices and aspirations for improvement are
constrained by . . . the university’s administrative systems and manage-
ment policies’. Wright is, in effect, commending a ‘dialogic’ approach to
accountability, one that ‘[takes] on board the perspective from below . . .
[which] would result in a very different dynamic for institutional
improvement than that likely to ensue from . . . top down imposition’
(ibid., p. 7).

Accountability and trust

Trow has argued that accountability can be ‘the enemy of effective
governance’: ‘the more severe and detailed are accountability obligations,
the less can they reveal the underlying realities for which universities
are being held accountable’. The problem applies irrespective of the
application of risk management principles, and it is equally relevant
to a university’s external communications and, internally, to the com-
munications between staff and their managers. In all cases the issue
is ‘how to create a system of accountability that does not punish
truth-telling and (merely) reward the appearance of achievement’
(Trow, 1996, p. 6).
In the UK, many regard quality assurance as a bureaucratic imposition,
a costly and time-wasting distraction from the real business of teaching
and research. Institutions and their academic staff are likely to resist
the requirements of any quality assurance system that they regard as
an instrument of management control. From Trow’s perspective, such
systems secure ‘accountability in name only’. Compliance with the
system ‘resembles the reports by a civil service in a defeated country to
an occupying power, or by state-owned industrial plants and farms to
central government in a command economy’ (ibid.). It is all-too-often
the case that the annual monitoring reports produced by academic staff
display similar qualities.
If it is to be effective, the ‘governance’ of an institution’s academic
standards is necessarily a collegial process. The assurance of the stand-
ards set by staff and achieved by students requires the professional
academic judgement of relevant subject specialists drawn from both
within the institution and from the wider academic community. Expert
academic judgement (exercised through an institution’s quality assu-
rance processes) is also required if valid decisions are to be made on the
appropriateness and quality of students’ learning opportunities. Whilst
assessments of the quality of the student experience might be based on
the feedback provided by students themselves, an evaluation of the
quality of learning opportunities cannot be made without drawing upon
the experience and expertise of academic staff.
So a quality management system that fails to win the active and
willing engagement of academic staff is likely to be ineffective, if not
counter-productive. If the system is to move beyond assurance to
enhancement its procedures will need to harness the creativity and com-
mitment of staff, and they will need to feel a real sense of ‘ownership’
of these procedures. Without this, the best an institution can hope to
achieve is the identification of some examples of local good practice –
examples which might well be ignored, or perhaps contested, by staff
working on other programmes or in other faculties. Academics are past
masters of the ‘not invented here syndrome’!
Like ‘risk’, ‘enhancement’ is one of those over-worked words in the
quality lexicon. It can mean various things. Usually it is confined to
securing incremental improvements in academic practice. Staff creativity
and commitment, and the credibility in their eyes of their institutions’
quality management systems, become more important if those sys-
tems are to promote enhancement in the sense of ‘innovation’. But,
as Saunders (present volume) reminds us, enhancement might also
entail transformational change – a ‘re-think of existing approaches,
even fundamental purposes, and completely new practices’, extending
beyond the realm of academic practice to include changes in the manage-
ment and other arrangements that exist for the purpose of supporting
front line staff.
If enhancement is to promote transformational change, our quality
assurance procedures need to do more than perform surveillance and
supervisory functions. Equally, if our institutions are to develop the
capacity to manage academic risks our procedures need to be capable of
reconnaissance and of securing mutual accountability. They have to be
used and valued as a means by which teaching staff can alert the rest of
the institution to their needs and, if necessary, call central departments
and senior colleagues to account for the decisions that they make and
which have an impact on the institution’s core business.

Spitting in the wind?

The hyperbole that accompanied the launch of QAA’s risk-based
approach masks the prosaic nature of this aspect of the government’s
higher education reforms. Whilst it is possible that for some institu-
tions it will bring a marginal reduction in the bureaucratic burden, it
will not support them in managing what Neave has called the increas-
ingly ‘risky business’ of higher education (present volume), nor will it
represent a significant advance in the promotion of enhancement. In
effect, it amounts to little more than a system of triage in which under-
performing institutions will be selected for closer scrutiny.
I have argued that universities would be ill-advised to emulate the
government’s approach in developing their own quality assurance
arrangements. Their responsibilities, and thus the purposes served
by their internal arrangements, are different. They, not QAA or the
Funding Councils, are responsible for securing the standards of their
provision and the quality of students’ learning opportunities and they
need, therefore, to develop the capacity to identify, assess and manage
future risks rather than merely using some measure of past performance
to determine how and how often the work of a programme team or
department should be scrutinised.
The difference is a matter of the relationship between universities and
their surrounding societies and, within universities, between students,
staff and their managers. Although risk-based regulation reinforces a
move in a market direction without weakening the accountability of
universities to the State, a risk management approach to internal quality
assurance would imply some element of trust – an empowerment of
front line staff and a relationship of mutual accountability between
front line staff and their managers. The student contribution would
be no less important but their involvement in the identification and
management of risk should be that of responsible members of the insti-
tution, and not as its customers.
The University of Derby was one of a small number of UK institutions
that had experimented with a risk management approach to quality
assurance. At its 2009 institutional audit, ‘the successful operation of
the University’s quality assurance and quality enhancement processes’
was said to be underpinned by ‘a culture of inclusivity, openness and
self-reflection’ (QAA, 2010). This was a significant finding because it
reflected the more general point that the approach that I have described
will only be effective if staff are encouraged and rewarded for being
frank in their reports, and where it is accepted that responsibility for
‘at risk’ provision is shared between all parties.
As we have learned from the banking crisis, successful risk manage-
ment presupposes that institutions do not have a ‘cultural indisposition
to challenge’. In the written evidence he submitted to the House of
Commons’ Treasury Committee, Paul Moore (the former Head of Group
Regulatory Risk at Halifax Bank of Scotland) commented: ‘openness to
challenge is a critical cultural necessity for good risk management and
compliance – it is in fact more important than any framework or set
of processes’ (House of Commons, 2009c).10 One might add that the
acceptance of failure is another cultural requirement: as Michael Power
has argued, ‘given the emphasis being placed on the importance of
innovation to economic growth and prosperity, it might even be said
that some failure is necessary’ (Power, 2004, p. 22). The implication for
regulation was not lost on the QAHECA project team. Their report to
the European Universities Association concluded that ‘external quality
assurance should aim at checking if an HEI is capable of reacting to
abnormal circumstances rather than sanctioning occasional failures’
(EUA, 2009).
It remains to be seen whether QAA’s new risk-based process will be so
forgiving, and this raises an important issue concerning the feasibility
of maintaining an institutional culture that might be at variance with
the culture of the sector. A recent survey of occupational stress (includ-
ing harassment, anger and bullying) found that the level of well-being
in British universities was below that of the working population as a
whole and that it had fallen over a four-year period (University and
College Union, 2012). And ten years earlier, a government report
had concluded that ‘the apparent lack of trust between Government
and higher education institutions (HEIs) . . . seems to permeate some
HEIs’ internal systems, resulting in a lack of trust between HEIs’ own
quality assurance teams and their academic staff’ (Better Regulation
Task Force, 2002, para 7.8).
A risk management approach, possibly any form of internal quality
assurance, will not work unless there is a culture that accepts risk,
encourages staff to disclose risks and which is open to the frank
exchange of information and ideas on the management of risk. It would
be difficult to sustain such a culture if staff were to believe that frank
disclosure would leave their institutions exposed in the face of external
scrutiny. In this respect, recent developments in external review meth-
ods do not augur well.
Institutional Review has been described as providing ‘a strong foun-
dation’ for the new risk-based approach (HEFCE, 2012, para 17). With
its graded judgements, including a judgement of standards against
‘threshold’ criteria, this is a method that is more inspectorial than its
predecessors. QAA, through its review teams, has become the arbiter of
‘national expectations’ that are set out in the Quality Code and which
‘all providers of UK higher education are required to meet’ (QAA website,
my emphasis). The method and its language signal a sea change in the
relationship between the Agency and formerly autonomous institu-
tions. In these circumstances universities may well struggle to develop
an approach to quality assurance that requires some measure of trust in
the professionalism of its front line staff.

Notes
1. Throughout this chapter I have used ‘regulation’ when referring to the
responsibilities of the Quality Assurance Agency, and I have reserved the
term ‘quality assurance’ to refer to the internal arrangements of universities
in securing the quality and standards of their programmes.
2. The reference here is to arrangements for the audit of institutions in England
and Northern Ireland.
3. For a discussion of this debate, see C. Raban and E. Turner (2005), pp. 26–9.
4. Because British universities are legally autonomous and hold their degree
awarding powers by Statute or by Royal Charter, any suggestion that they
should be subject to a system of accreditation would be very controversial.
Equally contentious was the suggestion that QAA should be responsible
for the assurance of academic standards since this had hitherto been
regarded as the inalienable responsibility of universities themselves.
5. For further discussion of the points made in this paragraph see Raban (2011).
6. This, indeed, was a tacit expectation when QAA audit teams made a judge-
ment of confidence ‘in the soundness of [an institution’s] present and
likely future management of the academic standards of its awards [and of]
the quality of learning opportunities available to students’ (QAA, 2009).
Confidence judgements are not a feature of the current review method for
England and Northern Ireland.
7. The Royal Society (1983) Risk Assessment: A Study Group Report (London: The
Royal Society). Quoted in Adams (1995), p. 8.
8. The concept of risk potential is also discussed in C. Raban (2008).
9. For further discussion of the approaches described in this and the following
paragraphs, see C. Raban and E. Turner (2006) and (2005).
10. See also the Treasury Committee Written Evidence, Part 3, February 2009.

References
Adams, J. (1995) Risk (London: Routledge).
Beck, U. (1992) Risk Society: Towards a New Modernity (London: Sage).
Better Regulation Task Force (2002) Higher Education: Easing the Burden, Cabinet
Office, July.
BIS (2011) Higher Education: Students at the Heart of the System (London:
Department for Business, Innovation and Skills).
Browne, J. (2010) Securing a Sustainable Future for Higher Education, http://dera.ioe.
ac.uk/11444/1/10-1208-securing-sustainable-higher-education-browne-report.
pdf (accessed 22 September 2013).
Cabinet Office (2003) Lambert Review of Business-University Collaboration (London:
HMSO).
Collini, S. (2010) ‘Browne’s gamble’, London Review of Books, 32(21), 23–25.
EUA (2005) Developing an Internal Quality Culture in European Universities (Brussels:
European Universities Association).
EUA (2009) Improving Quality, Enhancing Creativity. Final report of the QAHECA
project (Brussels: European Universities Association).
FSA (2006) The FSA’s Risk-Assessment Framework (London: Financial Services
Authority).
FSA (2009) The Turner Review: A Regulatory Response to the Banking Crisis
(London: Financial Services Authority), http://www.fsa.gov.uk/pubs/other/
turner_review.pdf (accessed 22 September 2013)
Greaves, P. (2002, September) Address to the Higher Education Forum (London:
HEFCE).
Halsey, A.H. (1995) Decline of Donnish Dominion: The British Academic Profession
in the Twentieth Century (Oxford: Oxford University Press).
Hansard (2001) House of Lords Debate on Universities, Vol. 623, cc. 1467–98,
21 March.
HEFCE (2000) Accounts Direction to Higher Education Institutions, Circular letter
number 24/00: Bristol, November.
HEFCE (2001) Risk Management: A Good Practice Guide for Higher Education
Institutions (May 01/28), (Bristol: HEFCE).
HEFCE (2005) Risk Management in Higher Education. A guide to good prac-
tice prepared for HEFCE by PricewaterhouseCoopers, http://dera.ioe.
ac.uk/5600/1/05_11.pdf (accessed 22 September 2013).
HEFCE (2012) A Risk-Based Approach to Quality Assurance: Outcomes of Consultation
and Next Steps, (Bristol: HEFCE).
House of Commons (2009a) Ninth Report: Themes and Trends in Regulatory Reform,
Regulatory Reform Committee (London: Stationery Office) July.
House of Commons (2009b) Students and Universities: Report of the Innovation,
Universities, Science and Skills Committee (London: Stationery Office).
House of Commons (2009c) Banking Crisis: Dealing with the Failure of the UK
Banks, Treasury Committee, April.
JM Consulting Ltd (2005) The Costs and Benefits of External Review of Quality
Assurance in Higher Education. A report to HEFCE, Universities UK, SCOP, the
DfES and the Quality Assurance Framework Review Group, July.
Lupton, D. (1999) Risk (London and New York: Routledge).
Newby, H. (1999) New Investment in Higher Education is Vital for the Knowledge
Economy. Keynote address at CVCP Annual Meeting, University of
Wolverhampton.
Power, M. (2004) The Risk Management of Everything: Rethinking the Politics of
Uncertainty (London: Demos).
PricewaterhouseCoopers (2005) Good Practice Guidance for the Higher Education
Sector: Risk Management (Bristol: HEFCE), December.
QAA (2002a) External Review Process for Higher Education in England: Operational
Description (Gloucester: Quality Assurance Agency for Higher Education),
March.
QAA (2002b) Arrangements during the Transitional Period 2002–05 for Higher
Education Institutions in England, (Gloucester: Quality Assurance Agency for
Higher Education).
QAA (2006) Handbook for Institutional Audit: England and Northern Ireland,
(Gloucester: Quality Assurance Agency for Higher Education).
QAA (2010) Institutional Audit: University of Derby. November, (Gloucester:
Quality Assurance Agency for Higher Education).
QAA (2011) Employer-responsive Provision Survey: A Reflective Report, (Gloucester:
Quality Assurance Agency for Higher Education).
QAA (2012) UK Quality Code for Higher Education, Chapter B10: Managing Higher
Education Provision with Others, (Gloucester: Quality Assurance Agency for
Higher Education).
QAA (2013) Higher Education Review: A Handbook for Higher Education Providers,
(Gloucester: Quality Assurance Agency for Higher Education).
Raban, C. (2007) ‘Assurance versus enhancement: Less is more?’, Journal of
Further and Higher Education, 31(1), 77–85.
Raban, C. (2008) ‘Partnership, prudence and the management of risk’, in
K. Clarke (ed.), Quality in Partnership (Council of Validating Universities/Open
University), (Milton Keynes).
Raban, C. (2011) Risk and Regulation (Gloucester: Quality Assurance agency for
Higher Education).
Raban, C. and Turner, L. (2005) Managing Academic Risk: The Final Report of the
HEFCE Good Management Project on Quality Risk Management in Higher Education
(Ormskirk: Edge Hill).
Raban, C. and Turner, L. (2006) ‘Quality risk management: Modernising the
architecture of quality assurance’, Perspectives: Policy and Practice in Higher
Education, 10(2), 39–44.
Thomas, G. and Peim, N. (2011) ‘In ourselves we trust’, Times Higher Education,
14 July.
Times Higher Education (2012) ‘Elite embittered as HEFCE decides not to risk
calling time on audits’, 1 November.
Trow, M. (1996) ‘Trust, markets and accountability in higher education: A com-
parative perspective’, Higher Education Policy, 9(4), 309–24.
Universities UK (2011) Higher Education in Facts and Figures, Summer (London:
Universities UK).
University and College Union (2012) 2012 Occupational Stress Survey, http://www.
ucu.org.uk/media/pdf/l/n/ucu_occstress12_hedemands_full (accessed 6 December 2013).
Walker, D. (2009) A Review of Corporate Governance in UK Banks and Other Financial
Industry Entities (London: HM Treasury).
Willetts, D. (2011) Address to the Universities UK Spring Conference, February.
Wright, S. (2002) ‘Enhancing the quality of teaching in universities: through
coercive managerialism or organisational democracy?’ in Jary, D. (ed.),
Benchmarking and Quality Management (Birmingham: C-SAP Publications),
pp. 115–42.
7
Risk Management: Implementation
Anthony McClaran

The UK Quality Assurance Agency for Higher Education

The UK Quality Assurance Agency for Higher Education (QAA) was
established in 1997. Its remit was – and remains – to assure standards
and improve the quality of UK higher education, which has a world-
class reputation. The Agency has its headquarters in Gloucester, with
offices in London, Glasgow and Cardiff. It employs around 170 staff,
with an additional pool of approximately 600 expert reviewers.
Today, QAA’s main activities include:

• Conducting reviews and audits of institutions, including universi-
ties, further education colleges and alternative providers of higher
education.
• Advising the UK’s Privy Council (a branch of government) on appli-
cations for Degree-Awarding Powers and University Title.
• Reviews of Educational Oversight: since 2012, colleges and other
institutions in the UK wishing to recruit international students have
been required by the UK Border Agency to have ‘Highly Trusted
Sponsor’ status. In order to become a Highly Trusted Sponsor, institu-
tions must be reviewed by one of the UK Border Agency’s nominated
bodies – a process known as Educational Oversight. QAA reviews
those providers that offer mainly higher education, specifically
alternative providers such as private, for-profit and charitable status
institutions.
• Maintaining, developing and publishing the UK Quality Code for
Higher Education, which sets out the expectations that all providers
of UK higher education are required to meet.


Higher education in the UK

QAA is a national UK Agency covering England, Scotland, Wales and
Northern Ireland, within a higher education system that is ‘devolved’.
This means that each of the four countries has power over its own
higher education system – in Northern Ireland, this is the Northern
Ireland Assembly; in Scotland, the Scottish Government; in Wales, the
Welsh Government; and in England, the UK Parliament.
Providers of higher education in the UK are quality assured under
a common framework, with appropriate variations of practice – rather
than principle – in each country. The planned move to a more risk-
based approach to quality assurance, however, currently applies only
to England.

UK policy background

The move towards a more risk-based approach to quality assurance in
England was defined in 2011, in two significant government papers.
In June 2011, the government White Paper, Students at the Heart of the
System (BIS, Department for Business, Innovation & Skills, 2011a), set
out the government’s proposed reforms for the higher education sector,
primarily in three key areas:

(i) Putting higher education on a sustainable footing – moving public
spending away from teaching grants and towards repayable tuition
loans.
(ii) A better student experience – improving teaching, assessment,
feedback and preparation for the world of work. Risk-based quality
assurance was included within this.
(iii) Increasing social mobility – ensuring that those with the highest
academic potential should have a route into higher education and
into the most selective institutions, in particular.

The White Paper was followed, in August 2011, by a Technical
Consultation, A Fit-for-Purpose Regulatory Framework for the Higher
Education Sector (BIS, 2011b). This consultation with the UK higher edu-
cation sector examined what changes in procedures, powers and duties
were required to implement the reforms proposed in the earlier White
Paper, including risk-based quality assurance.
The Technical Consultation explicitly set out the policy objectives for
the proposed move to a more risk-based approach:

. . . our proposals [are] for a diverse and responsive sector, and a
proportionate, risk-based approach to regulation which protects
and promotes the interests of students and taxpayers, while keeping
bureaucracy to a minimum and looking to find areas of regulation
that can be improved, reduced or removed (BIS, 2011b, p. 4, para 3).
We will look to remove, improve or simplify regulation where pos-
sible and will move to a risk-based approach to regulation which will
reduce burdens on universities whilst still safeguarding students and
maintaining confidence in English higher education (BIS, 2011b,
p. 5, para 8).

Developing a more risk-based approach to quality assurance in England

The Technical Consultation closed in October 2011 and – after some
delays – the government finally published its response (BIS, 2012) to the
2011 papers and consultations in June 2012.
A number of the government’s reforms proposed in 2011 were
suspended, including a new Higher Education Bill that would have
provided a legislative framework for change. The government remained
committed, however, to introducing a more risk-based approach to
quality assurance in England.
The government response stated:

The consultation has reinforced our view that a risk-based approach
is the most desirable means of regulating higher education in
England . . . a system which continues to promote enhancement,
which remains robust and rigorous, and in which students will
continue to play a prominent role. We believe this more targeted,
responsive model, including triggers that could indicate possible
issues for investigation, will provide improved assurance for students
and others (BIS, 2012, p. 40, §§ 3.7–3.8).

Sector consultation
Following the government’s response, the steps to implementation
moved forward with a detailed consultation with the UK higher educa-
tion sector in the summer of 2012, led by HEFCE, the Higher Education
Funding Council for England. QAA acted as expert adviser to HEFCE
during the development of the consultation document, which was
entitled A Risk-Based Approach to Quality Assurance (Higher Education
Funding Council for England, 2012a). The consultation opened in May
2012 and closed at the end of July 2012.
The HEFCE consultation document placed emphasis on making exter-
nal review proportionate to an institution’s proven track record.

Our intention is to move to a lighter-touch approach where track
record and type of provision warrant such a change. We propose
that our guidance to the Quality Assurance Agency (QAA), following
the consultation, will direct attention and effort where it will have
the most benefit in the development, enhancement and protection
of quality and standards (Higher Education Funding Council for
England, 2012a, p. 2).

The consultation proposed changes to the nature, frequency and inten-
sity of QAA’s engagement with institutions. Any change, however, was
underpinned by three key principles.
First, that a universal system of quality assurance would be retained
for higher education providers, which continued to promote enhance-
ment, and was based on continuous improvement and the effective
dissemination of best practice.
Second, that any new approach to be adopted would be robust and
rigorous, enabling HEFCE to carry out its statutory duty to secure assess-
ments of quality for higher education providers that have access to
public funding.
Third, that students would continue to play a prominent role in
assessing their own academic experiences – something which the UK’s
National Union of Students had already been working on with a number
of agencies, alongside improvements by higher education institutions
to student representation and engagement with quality assurance.
The HEFCE consultation focused on four key questions:

(i) How would higher education providers’ engagement with the quality
assurance system vary in nature, frequency and/or intensity,
depending on their track record on quality assurance and the profile
of their provision?
(ii) How would higher education providers undergo review? For
instance, might a model be adopted of a ‘core’ institutional review
and additional institutional review ‘modules’ (on collaborative
provision, for example, if offered in their portfolios)?
(iii) Should QAA investigate the possible reduction or streamlining of its
engagement during review with those providers which have a sub-
stantial proportion of their provision accredited by professional,
statutory and regulatory bodies (PSRBs), and if so, how?
(iv) How would HEFCE put in place a more rigorous and robust process
for instigating ‘out-of-cycle’ QAA investigations, when concerns
about quality and standards arise between formal reviews?

The QAA viewpoint


In considering the questions raised by the 2012 HEFCE consultation, it
is worth reflecting on the strongly deregulatory momentum which had
already been achieved in English higher education over the previous
fifteen years: from the removal of subject review in 2001 and discipline
trails in 2005, to the progressive lengthening of the cycle of review since
2005, and greater proportionality in the processes for documentation
and institutional visits in the reform of the institutional review method
in 2010.
A complementary QAA Concerns Scheme for the investigation of par-
ticular instances of threats to quality or standards was also introduced
and strengthened over the same period. This scheme has enhanced
QAA’s capacity to intervene in the event of serious concerns about
quality and standards in institutions. QAA investigates concerns about
standards and quality raised by students, staff or other parties. Where
such concerns indicate serious systemic or procedural problems, QAA
will conduct a detailed investigation. It also publishes reports publicly,
following some investigations.
In its own response to the consultation, QAA broadly welcomed the
move towards a more risk-based quality assurance system, applying
greater scrutiny where it is most needed. At the same time, the interests
of students must always be of paramount concern in developing any
risk-based approach to quality assurance. This is consistent with both
the government’s 2011 White Paper and Aim 1 of QAA’s own Strategy
(2011–14): ‘To meet students’ needs and be valued by them’ (QAA,
2011). It remains essential that any change to the system must retain
the full confidence of students, the sector, the public and the interna-
tional community in the robustness and reliability of English higher
education’s quality assurance, in the context of the wider UK system of
which it is a part.
QAA also welcomed the consultation proposals which reaffirmed the
importance of the universal external review of institutions. Without
this, it would become problematic to compare judgements reached
about the quality of higher education provision in England with other
parts of the UK or, indeed, with other parts of the world.
The consultation’s strong commitment to a concept of quality assur-
ance that included enhancement was also important. The consulta-
tion set out an ambition not only to retain enhancement, but also to
strengthen it. Indeed, any system of quality assurance that did not include
enhancement would be a more impoverished one, focusing only on the
achievement of regulatory thresholds and the identification of problems,
and not on good practice, improvement and the pursuit of excellence.
The proposal to retain and further develop the current method of
institutional review was also significant, and recognised the work
already undertaken by QAA, the higher education sector and other
stakeholders. The new method of institutional review began in the
2011–12 academic year and was introduced after extensive consulta-
tion. Retaining this method also kept peer review at the heart of English
external quality assurance, together with advances such as a much more
prominent role for students, and a strong focus on the quality of the
information produced for the public, students and those with responsi-
bility for academic standards and quality.
QAA endorsed the consultation’s recognition of the wider UK and
international context for quality assurance. It remains important that
developments in England move forward in the context of the UK-wide
quality assurance system within which they will sit; for example, in the
context of the UK Quality Code and its implementation across all four
UK countries. It is also important to consider the proposed changes in
the light of the international standing of the UK higher education sec-
tor and the part that external review by QAA plays in that reputation.

Next steps for implementation


A provisional timetable for implementation of a more risk-based
approach to quality assurance in England is detailed in Table 7.1.

Table 7.1 Provisional timetable for implementation of risk-based quality
assurance in England

October 2012: HEFCE publishes response to the consultation
November–December 2012: QAA drafts a new Operational Description and
Handbook for the revised review method
January–April 2013: QAA conducts a consultation with the higher education
sector on its draft Operational Description and Handbook
May 2013: Handbook published
Summer 2013 onwards: Institutional briefings begin on the revised method;
reviewer training begins
Autumn 2013: Implementation of the new method
Early 2014: First reviews begin, under the new method
2014–15: Independent evaluation

Risk-based quality assurance within the broader regulatory landscape

It is also important to remember that the proposed move to a more risk-
based approach to quality assurance in England is taking place within a
broader, changing higher education policy context.
In recognition of this, a new group composed of UK higher educa-
tion sector agencies was established in September 2011 to oversee the
transition to the new regulatory arrangements for higher education in
England. The purpose of the new group – the Regulatory Partnership
Group – is to advise government, HEFCE and other national agencies
on policy and strategic and operational issues arising from the develop-
ment of the new funding and regulatory regime for higher education.
Jointly chaired by HEFCE and the Student Loans Company, the
group’s other members include representatives from QAA, the Higher
Education Statistics Agency, the Office for Fair Access and the Office of
the Independent Adjudicator.
The Regulatory Partnership Group has designed a work programme
to implement the planned changes to funding and regulation in the
sector. The programme has four main elements:

(i) Developing a new operating framework that will set out the roles,
responsibilities, relationships and accountabilities of the various organ-
isations involved in the regulation of higher education in England.
(ii) Developing the successor to HEFCE’s Financial Memorandum,
which will reflect the changing landscape of higher education
funding, and the accountabilities of higher education providers.
(iii) Redesigning the data and information landscape – a project to
enhance the arrangements for the collection, sharing and dissemi-
nation of data and information about the higher education system.
(iv) Investigating constitutions and corporate forms – an analysis of
the changing corporate forms of higher education providers and
the implications of this for the interests of students and the wider
public, and the reputation of the UK’s higher education system.
The aspiration is that these projects should provide a clearer
understanding of the changing landscape of provision in higher edu-
cation in England and the wider UK, safeguarding its reputation and
quality for the future.

Afterword

This chapter was presented at the A3ES & CIPES conference, ‘Recent
Trends in Quality Assurance’ in Porto, Portugal on 12 October 2012.
Subsequently, in late October 2012, HEFCE published the outcomes
of its consultation with the higher education sector in A Risk-Based
Approach to Quality Assurance: outcomes of consultation and next steps
(Higher Education Funding Council for England, 2012b).
There were 130 responses to the consultation, which showed wide
support on a range of key issues. In particular, respondents supported
the proposal to build on the existing method of institutional review
as the basis for a more risk-based approach to quality assurance, with
its clearer judgements, focus on risk and reduced bureaucratic burden
compared with previous methods. Alongside this was an emphasis from
respondents on ensuring that enhancement remains a core dimension
of English quality assurance, and continuing to involve students fully
in the quality assurance process as partners in assessing and improving
the quality of their own higher education.
There was also broad support for reducing unnecessary burden and
achieving better regulation by targeting QAA’s efforts where they are
most needed, and for increasing transparency about reviews and the
rolling review programme. Respondents also welcomed the proposal to
tailor external review to the individual circumstances of providers (as
opposed to a ‘one size fits all’ approach).
In summary, this would be a transparent, proportionate and more
risk-based approach to quality assurance that ensures that the interests
of students continue to be promoted and protected.
Some of the main outcomes of the consultation were:

(i) The majority of respondents indicated that the period between
reviews should be no more than six years. As such, there will be
a six-year review cycle for those institutions with a longer track
record of successfully assuring quality and standards (the pre-
requisite will be two or more external institution-wide reviews).
There will be a four-year review cycle for those providers with a
shorter track record.
(ii) The HEFCE report also proposes greater transparency through the
publication of a rolling programme of reviews on the QAA website.
This would clearly indicate when a provider’s next review is due to
take place.
(iii) Reviews will be more tailored to suit the circumstances of indivi-
dual providers (for instance, by adjusting the frequency, nature
and intensity of reviews). This will enable QAA to focus efforts
where they will have the most impact.
(iv) Under the new approach, there will be a single review visit and no
separate reviews of different types of provision at a single institu-
tion. For instance, there will no longer be a separate review of
collaborative provision. QAA will tailor the review to the institu-
tion’s provision, varying the number of days of the review visit and
number of reviewers as appropriate.
(v) There will be an end to mid-cycle review. Quality and standards
will be effectively safeguarded between reviews through QAA’s
Concerns Scheme and QAA will focus on further raising awareness
of the scheme, in particular through student organisations. In addi-
tion, there are other mechanisms (for example, action plans) which
follow up any action required by an institution after a review.
(vi) Students will continue to be at the heart of the process, in part by
keeping the review cycle to a maximum of six years, enabling their
input to be considered at least as frequently as it is in the current
cycle. The Student Written Submission will also continue to be a
central part of the review process. QAA will also continue to pro-
mote the role of students in quality assurance and enhancement
activities – in addition to its wider work of student engagement.
(vii) Many respondents supported streamlining the review activity of
QAA and professional, statutory and regulatory bodies (PSRBs).
However, it was acknowledged that PSRBs’ review processes and
those of the QAA do not produce comparable information, as
PSRBs focus on subject-level accreditation whilst QAA focuses on
institution-wide management of standards and quality. There is
also a lower level of student engagement with PSRB accreditation.
HEFCE has asked QAA to make further progress in this area, in
particular, through the further development of individual agree-
ments with PSRBs.

On 28 January 2013, QAA launched its consultation (QAA, 2013) on
the new method, Higher Education Review, which will be run from
2013–14. It has been confirmed that this will also operate in Northern
Ireland. Higher Education Review will succeed two existing methods:
Institutional Review in England and Northern Ireland (IRENI) and
Review of College Higher Education (RCHE).
The overall aim of Higher Education Review remains to inform students
and the wider public whether a provider meets the expectations of the
higher education sector for:

• The setting and/or maintenance of academic standards;
• The provision of learning opportunities;
• The provision of information; and
• The enhancement of the quality of its higher education provision.

However, these fundamental purposes will be achieved through a more
risk-based methodology. In order to achieve this, Higher Education
Review will involve a two-stage process. The first stage is called Initial
Appraisal and will determine the intensity of the second stage of the
process, the review visit.
As with its predecessors, Higher Education Review will be carried
out by peer reviewers – staff and students from other higher education
providers in the UK and internationally – against the expectations for
higher education provision set out in the UK Quality Code. However, the
composition and size of review teams will be determined by the analysis
undertaken in Initial Appraisal, which may identify a requirement for
particular emphases or areas of reviewer expertise.
Students remain at the heart of Higher Education Review. They will
continue to be full members of QAA’s peer review teams. There will also
continue to be opportunities for the provider’s students to take part in
the review, through the student written submission, meeting the review
team during their visit, working with their providers in response to
review outcomes and acting as lead student representatives.
The QAA consultation focuses on the draft Handbook for the new
method, exploring areas including the new Initial Appraisal stage, the
proposed pilot introduction of international reviewers, judgements
about ‘Managing Higher Education with Others’, determining review
intensity and future evaluation of the new method.
The consultation closed on 22 April 2013. This allowed time for
implementation – following preparation and training – to begin in the
2013–14 academic year, with the first reviews to take place in early 2014.

References
BIS, Department for Business, Innovation & Skills (2011a) Students at the Heart
of the System (London: The Stationery Office). http://bis.gov.uk/assets/biscore/
higher-education/docs/h/11-944-higher-education-students-at-heart-of-
system.pdf (accessed 22 September 2013).
BIS, Department for Business, Innovation & Skills (2011b) A Fit-for-Purpose
Regulatory Framework for Higher Education: Technical Consultation (London:
The Stationery Office). http://www.bis.gov.uk/assets/biscore/higher-education/
docs/n/11-1114-new-regulatory-framework-for-higher-education-consulta-
tion.pdf (accessed 22 September 2013).
BIS, Department for Business, Innovation & Skills (2012) Government Response
to Consultations on: Students at the Heart of the System: A New Fit-for-Purpose
Regulatory Framework for Higher Education (London: The Stationery Office).
http://www.bis.gov.uk/assets/biscore/higher-education/docs/g/12-890-gov-
ernment-response-students-and-regulatory-framework-higher-education.pdf
(accessed 22 September 2013).
Higher Education Funding Council for England (HEFCE) (2012a) A Risk-Based
Approach to Quality Assurance: Consultation. http://www.hefce.ac.uk/pubs/
year/2012/201211/ (accessed 22 September 2013).
Higher Education Funding Council for England (HEFCE) (2012b) A Risk-Based
Approach to Quality Assurance: Outcomes of Consultation and Next Steps. http://
www.hefce.ac.uk/pubs/year/2012/201227/ (accessed 22 September 2013).
QAA (2013) QAA consultation on Higher Education Review. http://www.
qaa.ac.uk/Newsroom/Consultations/Pages/Higher-Education-Review.aspx
(accessed 22 September 2013).
8
Quality Enhancement:
An Overview of Lessons
from the Scottish Experience
Murray Saunders

Introduction

The focus for this chapter is the experience of a policy intervention
across a whole university system in Scotland aimed at enhancing learn-
ing, teaching and assessment. This overview is based on the evaluative
research of the policy over an eight-year period. The policy has some
unique characteristics that were rooted in an emerging HE sector identity,
intentionally nurtured and encouraged as part of a devolved educational
and social policy culture.

Distinctiveness

From its inception in 2003, the Quality Enhancement Framework (QEF),
coordinated by the Scottish Funding Council with the participation of the
Scottish universities themselves, attempted an integrated approach in which
‘enhancement’ rather than ‘assurance’ was emphasised in its approach to
the quality of university teaching and learning. This approach was wel-
comed by the sector as an improvement on the previous, assurance-based
engagement between the Scottish universities and their national sponsors.
The distinctive policy dimensions or policy mechanisms (listed
below) involve the rebalancing of practices and systems associated with
quality so as to put far more emphasis on enhancing and improving
practice and experience rather than checking and reviewing for external
accountability. In essence, it is only dimension one in the following list
that incorporates an external assessment of quality (and that incorpo-
rates student representatives as well as practising academics). The other
dimensions are clearly orientated towards mechanisms which involve
resources for improvement and participation.


1. ELIR: Enhancement Led Institutional Review (external estimations
of institutional quality processes by mixed external review teams);
2. Internal review processes (an institutionally based, self-diagnostic
process involving both staff and students);
3. Student involvement (a range of participatory activities in which
students are helped to participate in developmental processes and
decision making about their own experience);
4. Enhancement themes (activities involving the development of a
range of resources aimed at enrichment, alternatives, new frame-
works in targeted areas, for example employability, flexible learning,
the student experience); and
5. New approaches to public information (engaging and using com-
municative devices that inform external audiences of university
achievements) (SFC, 2012).

In the QEF we therefore have a complex policy instrument designed
to shift practices to embrace enhancement rather than assurance, as
the driving force to improve the quality of teaching and learning in
Scottish Higher Education. It is characterised by some distinctive policy
mechanisms that embody an inventive expression of a national higher
education system. The practices which challenge more assurance-based
systems include those that:

• balance enabling mechanisms with compliance to quality standards;
• shift the emphasis between, rather than the mutual exclusivity of
assurance and enhancement;
• enhance the student experience in higher education (that is, sup-
porting practices associated with improvement, being innovative, being
enabled through resources and a positive ‘climate’);
• focus on partnerships between agencies and stakeholders;
• embody a theory of educational change that places more weight on
consensual approaches than more coercive stances; and
• move away from top-down compliance-inducing processes to participa-
tive and critical supported self-evaluation.

The QEF aspired to make a clear break with the emphasis of previous
(assurance-based) quality approaches within the Scottish system and
still prevalent in other parts of the UK, and associated, in the eyes of
the HE sector at least, with the role of the Quality Assurance Agency
(QAA1). It would be a mistake, however, to imply an oppositional
relationship between the aspirations of the new framework and the
QAA in Scotland. The QAA was fully incorporated, even if sometimes
uncomfortably, into the new initiative as members of all key steering
groups. The QEF is a distinctive creation of the sector and sponsoring
agencies. There is a sense that it is owned by the higher education
‘community’, or at least by senior education managers. We can consider
enhancement in three ways:

• Incrementalism: doing the same only a little better, in other words
improving on existing practice clusters. Improving the quality of
teaching materials might be an example.
• Innovative incrementalism: addition of innovations to existing
practices, for example adding an international dimension to a syl-
labus where none existed before, or a new teaching practice to a
repertoire.
• Transformational: radical understanding of enhancement involving
a re-think of existing approaches, even fundamental purposes, and
completely new practices.

The Scottish approach had traces of all three dimensions across and
within the institutions; thus the evaluations of the mechanisms suggest
they have had uneven effects. Overall, however, the combination of a
more developmental approach to institutional review, greater student
involvement, a focus on teaching and learning themes and responsive-
ness to feedback and experience has resulted in a step-change in the
way quality processes are understood and practised within the sector.
However, the significance of the step-change differs according to the
stake-holding group, as this overview of the evaluation of the policy
will show. Despite this caveat, and given the traditional and sometimes
fierce resistance to central initiatives in higher education within the UK,
particularly in the teaching and learning domain, the trajectory of the
QEF has broad legitimacy in the sector as a whole.
In terms of the critical differences between an enhancement- as
opposed to an assurance-led approach to quality processes, we have in
the Scottish case an interesting attempt to integrate legitimate sectorial
concerns with standards and cross institutional comparisons (via the
periodic external reviews ELIR)2 and the initiation of processes designed
to provide frameworks for action and resources for improvement and
development. It is the integrative approach, with an emphasis on deve-
lopment, which sets the case apart.
A note on the database and the evaluative research approach

The social practice theory which underpinned the evaluative research
focuses on clusters of recurrent practices (ways of thinking, feeling and
behaving), seeing these as drawing on generally available reservoirs
of practices reinterpreted in unique situated repertoires (see Saunders,
Trowler and Bamber, 2011, for details of this approach). As an evaluative
research approach to the QEF in Scotland it enabled the team to:

• emphasise the way policy experience is embedded in specific national
and local contexts, often with unintended or unanticipated effects;
• suggest a focus on dimensions of practice consisting of symbolic
structures and orders of meaning in particular places;
• conceptualise practices as using implicit, tacit or unconscious knowl-
edge as well as explicit knowledge resources; and
• focus on the routine and recurrent practices that result from a policy
intervention.

The evaluative research was conducted in two waves, the first from 2003
to 2006 and the second from 2007 to 2011. The focus was collabora-
tively derived with the SFC and involved eight national quantitative
surveys (students, student representatives, institutional student repre-
sentatives, middle managers, managers with a ‘quality brief’ and front
line lecturers). The database also included two waves of in-depth case
studies, structured interviews (of approximately 800 individuals includ-
ing national key informants) and the analysis of secondary data from all
20 universities in Scotland. The output from the research took the form
of eight reports to the SFC.3

The QEF theory of change: consensual development

As a ‘theory in action’, the QEF rested on a cultural and sectorial analy-
sis that attempted to set itself apart from an overly managerial approach
to quality management and development and build on a strong sense
of appropriateness, pragmatism and collegiality. The Scottish higher
education system is of a size that allows the formation of a higher
education ‘community’ (some 20 institutions). Whilst appreciating
distinctiveness, rivalries and differences, evaluations have suggested a
relatively high degree of collaboration and discussion amongst Scottish
higher education institutions.

When we look at the strategies the QEF has embodied, we can see
that there have been some ‘change theory’ themes running through the
approach. These themes are based on an understanding of the higher
education sector in Scotland, of the kind of ‘entity’ it was, and how
it might respond to the thrust of the broad approach to quality being
promoted.
Most importantly, unlike many policies or programmes, the QEF in
Scotland has had a built-in implementation reality that set it apart from
its international neighbours. The policy was an interesting hybrid of ideas
of and from the sector itself, from analogous experience elsewhere and a
good knowledge of the current research and evaluative literature. That is
to say, it drew on ideas and influences from far and wide, but there were
strong local influences that gave it a distinctive ‘Scottishness’. This means
that in any turf war over legitimation or credibility, the promoters of
policy could draw (and have drawn) attention to the fact that the main archi-
tects were from the sector itself. This is not to say that there is such a
thing as a homogeneous higher education ‘sector’ in Scotland; there are
many and – as in any national university sector – rather contrasting expe-
riences and priorities, but the aspirations and interests in the approach
were, in an important sense, known and shared.
The most obvious strategy or change theory theme within the QEF
was to use existing expertise in Scotland, informed by international
experience, to create a Scottish solution. This characteristic has been an
important part of the ‘uniqueness’ of the QEF as a policy ‘instrument’
and has been a core dimension of the way in which the approach has
been ‘home-grown’, managed and developed. This enabled a familiarity,
an ownership and a legitimation that other forms of implementation
strategy might find hard to emulate. We term this a theory of ‘consen-
sual development’.
From the start of the QEF there was an awareness that disgruntle-
ment with quality assurance processes, which was quite common in
the UK (see Saunders, 2009, p. 93), and the wish to do something dif-
ferent, was no guarantee that a feasible and better approach could be
created. However, in Scotland there was the priceless advantage that the
self-governing system comprised just 20 higher education institutions.
This made it possible to assemble a distinctively Scottish alternative to
current quality assurance practices. Since control of higher education
was located with the Scottish Executive (now the Scottish Government)
and since there was considerable interest amongst officials and agencies
in the creation of a distinctively Scottish approach to quality, the scene
was set for new thinking.

QEF brought to the fore the simple and powerful idea that the
purpose of quality systems in higher education is to improve student
experiences and, consequently, their learning. Distinctively, the QEF
has had a commitment to:

• students and the continuing enhancement of their learning in
higher education;
• partnerships between agencies (such as the Scottish Funding Council,
the Quality Assurance Agency and the Higher Education Academy),
higher education institutions (as can be seen by the formation of
the Scottish Higher Education Enhancement Committee, a self-
organising operation with a continued commitment to working QEF
through the system), and other stakeholders (most distinctively seen
in the active involvement of students and student bodies in QEF);
• a theory of educational change that placed far more weight on con-
sensual approaches than on the more coercive stances embedded in
some quality assurance regimes. The approach emerged from serious
discussion and thinking;
• a culture shift – away from top-down compliance-inducing processes
to participative and critical supported self-evaluation; away from
audit and towards improvement; away from ruffling the surface of
higher education practices and towards permeating the system with
practices compatible with the QEF; away from metrics-driven judge-
ments of achievement and towards more sensitive forms of evidence
of cultural change;
• reflexivity, in the sense of exposing QEF itself to evaluation from the
very beginning.

An interesting question concerns the degree to which stakeholders in
Scotland will make QEF central to the further development of a distinc-
tively Scottish approach to higher education in general and to organic,
self-sustaining approaches to teaching, learning, assessment and other
aspects of curriculum.

Change ‘on the ground’

The permeation of a new approach means a shift in day-to-day prac-
tices, using different knowledge resources within a different set of
priorities. It may well be the case that the step-change is beginning to
enrich day-to-day practices but complex change of this sort cannot be
reduced to a simple or easily identifiable line of determination. We refer
here to quality of the humdrum, daily practice-based kind, which is yet
to be proved a prominent feature. Epidemiological or environmental
metaphors of ‘contagion’ or ‘climate change’ might be more apt in
this kind of context. It is pertinent that the QEF depends on creating
‘affordances’, which is to say a general climate and specific environmen-
tal features that are sympathetic to proposed changes, and on allowing the
situated creativity of teaching groups, departments, schools and institu-
tions to make of it what they will.
We do know from the evaluations (Saunders et al., 2006), however,
that senior managers were positive and accepted the legitimacy of the
QEF. Those working with a specific brief to support teaching and learning
are similarly positive; middle managers are overall aware of the approach
and the qualitative data suggests many are seeing some positive effects.
The student experience suggests overall an inclusive and productive
relationship with departments in which they consider their voice is
heard and, more importantly, acted upon. The evaluations also suggest
that, in general, students believe that they are having positive teaching
and learning experiences.
There is a gap in our understanding (apart from the experiential pro-
fessional intuitions we all share) of how engagement with an enhance-
ment-led culture of teaching and learning at the ‘front line’ might
be encouraged. Evaluations point to some ‘disablers’: the competing
demands made on university teachers; uncertain rewards for a commit-
ment to teaching; a view that the front line already knows how to teach
creatively; and the uncertain relationship between the enhancement
themes and daily practices. QEF should continue to consider ways in
which teachers may be supported in enhancing their everyday practices.

Shared visions

The QEF is premised on an approach to change that attempted to build
on a collegiate and shared vision. This raises the issue of the extent to
which it genuinely reflected or expressed a ‘Scottish way’. The research
suggests that while broad identification with this approach existed within
the universities, the devil was in the detail and several key ‘tensions’ were
occluded in the name of consensus. In enhancement theory, do these
inevitable tensions build to become pathological to the aims of the policy
or are they accommodated and thus co-exist in a form of mutual adapta-
tion? The research suggests that the idea of a shared vision was important
in the QEF’s ‘genesis’ (that is, where the policy came from) in that it
was based on an agreement that assurance-based approaches to quality
were disliked and that a more positive, integrated and enhancement-led
approach would galvanise support and reflect a Scottish way. However, this
consensus concealed tensions. Among them were issues associated with a
move too far from an assurance-based approach. Such an approach might
lack ‘teeth’ and allow poor practice to continue. Other tensions concerned
what would happen to the policy on the ground in the light of assurance
and enhancement proceeding hand in glove. Would it produce compliant
behaviour, for example, where themes were targeted in the belief that they
alone would bear the brunt of external review?
The issue here is not that we must continue to search for the ‘perfect’
policy – such a thing doesn’t exist – but to develop our understanding
of what happens when certain change strategies are used. In this case,
do the underlying tensions tend to destabilise consensus and produce a
dysfunctional implementation process? The argument suggested here is
that the shared vision of participants in policy production can be sus-
tained on the ground through ‘layered’ communities of designers and
users embedded within the institutions themselves.

Embedding innovative and interesting practice

A consistent problem in change processes, particularly those that
involve enhancement, concerns the mechanisms for moving from the
interesting case to changing what we might term ‘routine practice’.
These cases are embodiments of this problem. In particular, the extent
to which proposed changes go ‘against’ the grain of systemic incentives
for action or run up against existing material and routine constraints. In
this issue, we have an example of what we might call ‘over-determined’
alignment. By this we mean that a change suggested by an enhance-
ment strategy may be aligned at the level of rhetorical discourse (shared
national vision) but misaligned at the level of routine practices on the
ground or the power of existing sectoral incentives for practice (for
example, sustaining the reputation and prestige of a research intensive
university). A variety of mechanisms were used. One was broadly based
on a resource dependency model whereby systems of small grants were
offered by the learning and teaching support unit to encourage projects
or the exploitation of research findings. It is interesting to note the way
in which this strategy of ‘categorical funding’, as it is known, creates
‘enclaves’ of interesting or excellent practice and how such instances
can then be used to create positive effects in the wider case.
The idea of an enclave is useful here in that it implies the way in
which a teaching and learning project might stimulate interesting and
exciting changes and improvements in those directly involved in the
project or in receipt of resources. But how is the wider case influenced?
In effect we have a weak theory of change here because this connection
is rarely addressed beyond exhortations to disseminate. This of course
raises the question of how wider practices might be enhanced on the
basis of an embodiment in an interesting case. It might be useful to
adjust our discourse to refer directly to the relationship between the
embodiment and wider practice by introducing the term ‘engagement’.
By this we refer to how groups and individuals across and without the
institution might connect in such a way that their practice shifts to
include the characteristics of the embodiment.
Having said this, however, the targeted funding programme might
be repositioning teaching and learning practices at institutional levels
in ways that were difficult to determine or predict precisely. It may, for
example, create ‘ripple’ effects through recognition that are creating a
shift in the way teaching and learning practitioners are being rewarded
and resources and attention might be flowing toward teaching and
learning. The model suggests that at the outset of the programme, excel-
lent practice was essentially taking place in pockets within institutions
and it is these pockets of practice that have been rewarded with funds.
The key question is how this process encourages engagement.
Because we use the metaphor of an ‘enclave’ to depict these pockets of
interesting practice, the issue of how to encourage or sustain the process
of deepening or widening effects is critical. Our cases suggest it involves
the principles, ways, means and approaches that constitute practice
moving out from these pockets or enclaves of practice and influencing
the wider case through a series of ‘engagements’.
There are further sets of considerations in moving from the interest-
ing case or embodiment to wider practice that are based on the idea of
misalignment. To reiterate, this idea focuses on the extent to which a
change idea or the embodiment of a new practice connects in some way
with the dominant mode of practice or set of practices.
These cases suggest that practices might not only be shaped by quite
mundane yet critical factors such as timing of sessions but also depend
on the target group’s judgement of the congruence of the embodiment
with their own ‘learning styles’ or with their own judgement of a cost/
benefit balance.

Incentives for change

The evaluations suggest how resources are used as a lever to ‘incentivise’
the enhancement process. The question of ‘incentives to engage’ in
change is an interesting one and can be interpreted as a focus on what
it is that enables or ‘persuades’ individuals, groups and organisations
to shift their practice or indeed to adhere to one practice rather than
another. The most straightforward form of incentive is of course the
promise of new ‘resources’ if the proposed change is enacted. However,
there are many other ways of interpreting the idea of incentives. For
example, we might consider moral or ethical incentives to change,
based on a professional imperative or a set of values about the quality
of teaching that are being constrained or somehow compromised in
the present circumstances. A proposed change might incentivise by
providing a mechanism to express beliefs or values through new teach-
ing methods.
I note above the way in which an enhancement change is often
supported by resources. Normally this categorically funded approach
involves targeted funding on an approved or sought-after policy goal.
In this case we have identified the QEF’s interest in particular areas in
which the criteria for successful bids might constrain them to target
their innovative teaching redesigns to meet aspects of the enhancement
themes. The data suggests that there was alignment between the value
placed on these themes and on estimations of value at the ground level
within a specific discipline. Moving from enclave to wider practice was
not analysed in detail but the process of developing the projects and the
congruence/alignment of the central ideas with core curricular issues
has generated widespread discussion.
This targeted approach also suggested some interesting unintended
effects that were created by the bidding system. It is clear that those
in direct receipt of funds benefitted. This is congruent with the issue
of enclaves I mention above; however, there are other dimensions of
this process of grant success, some positive and one less so. The
positive dimension is that those in receipt of funds enter what might
be called a positive cycle in that they are usually able to build on their
initial experience and become more proficient at further bids, thus
creating a strong corpus of innovative work. At the same time the
involvement of postgraduate students in the projects further deepened
the effects.
The down-side of this process, however, lies in the uneven develop-
ment of capacity, not only in a teaching and learning area but also in
the process of bidding itself. Funded projects were often restricted to
a relatively small group of staff who were successful in seeking fund-
ing. Individuals involved in engaging with the interesting teaching
and learning strategies were also restricted to the ‘usual suspects’ of
‘extended professionals’ in the teaching and learning domain.

Low fidelity and reconstruction

The evaluations found examples of the way a policy struggles with
prescription (high fidelity) and openness (low fidelity) with differing
and concomitant effects on the process of change. The QEF was, at the
same time, both high and low in fidelity. It was low in fidelity in that it
attempted a change in perspective and emphasis on quality from assur-
ance to enhancement but allowed institutions of HE to embody this shift
in such a way that they expressed their own institutional culture and
systems. On the other hand, it identified themes that were expected to
form the main emphasis across the sector (for example, employability)
irrespective of the priorities that an individual institution might want to
emphasise. Nothing would stop an institution doing so of course but it
would be very difficult to draw down extra funding. We found that the
lower the policy is in fidelity, the better it travels and is reconstructed
on the ground and is capable of being adapted to the ‘real politique’ of
situated change. On the other hand, of course, low fidelity change
means that there can be little standardisation or control over the detail
of the changes across a whole system. This is anathema to policy-makers,
who tend to eschew too many local versions of change because it makes
policy look incoherent at the national level.
The evaluations suggest that in one critical area of the QEF approach
(the themes), the fact that the policy of enhancement is relatively low
in fidelity allows departments and individuals to situate their own
expressions of the policy in institutional realities. While the themes
were restricted, there was not a high level of disagreement about the
authenticity or legitimacy of their focus. Where criticisms did arise,
they focused on the lack of prior discussion or on where the themes
emerged; there were no substantive objections to the theme focus.
The data suggest how institutions and departments were able to inter-
pret the ground level expressions of a theme in terms of an institutional
sense of what was required for specific students or as a way of enhancing
existing practice or what connected effectively with new professional
requirements. One case study (Fisher, 2009, p. 88) of the way in which
the QEF framework enabled the impetus for enhancement within the
Art, Design and Media domains illustrates the power of legitimacy for a
shift in emphasis from assurance to enhancement.
The capacity of a new policy to be reconstructed at ‘ground’ level
depends on the relationship between fidelity and the extent to which
the policy can be reconstructed and translated. In general terms, low
fidelity means that the vision or thrust is clear but
generic enough for ‘enactors’ to situate it in their own situated contexts
of practices, circumstances and priorities. It also implies, and the QEF
was adept at this, the need to build upon a level of consensus and
‘ownership’ of the policy shift such that practitioners were confident
enough to modify and adapt the policy messages to local circumstances.
A more bureaucratic or centrally prescriptive approach inevitably
reduces local adaptations or produces rhetorical rather than practice-
based change. It can also result in strategic conduct that makes sure
that all the ‘surface’ characteristics of a shift are present, but this overlay
conceals the fact that little has changed in practice. It can mean that practitioners actively
subvert or undermine central prescription. If ground level adaptation or
expressions of HE policy are a good thing, then we should encourage
policy to be clear and strong in broad vision, grow out of consensus and
eschew detailed prescription.

Futures

I suggest that the positive moves to connect the various UK-wide
opportunities for support in Scotland have increased the possibilities
of a joined-up approach, while still embodying a ‘Scottish’ feel, and
have gone some way to address some of the ‘dislocations’ identified
by some informants. Among the possibilities suggested in evaluations
of the QEF (Saunders et al., 2006) has been the idea of subject ‘nodes’
within Scotland, still within the overall framework but building on geo-
graphical accessibility and interchange between groups of practitioners
that arise from HEA events or activities within Scotland. This type of
approach builds positively on what is being done UK-wide but acts as
an ‘affordance’ for further local developments.
It is easy to be seduced by the warm glow suggested by a discourse
of consensus in Scotland. What our research suggests is that stakehold-
ers’ experience of complex interventions of this kind varies greatly
depending on where they sit within the system. We argue that a locus
of consideration for change in HE should always circulate around how
departments respond to internal and external dynamics, pressures and
influences. This level of negotiation and accommodation and, again,
variety of results, refers to the capacity of departments to adapt and
reinterpret. This capacity remains high, and the department is the key
unit of influence or mediating force for individual practice. The QEF (as a policy instrument),
through its various mechanisms which are mediated by institutional
and departmental processes, is thus played out in terms of individual
experience.
As I note in the introduction to this chapter, my research draws on
the experience of a policy intervention aimed at enhancement across
the whole Scottish university system.
The QEF policy was derived as part of an emerging HE sector identity
which was self-consciously encouraged as an element of a devolved
social policy culture. The QEF itself pursued an integrated approach that
emphasised ‘enhancement’ rather than ‘assurance’ in its approach to the
quality of university teaching and learning. Overall, the approach was
understood as a positive departure from assurance-based engagement
between the Scottish universities and the UK-based national framework.
However, we see that the approach to change via the QEF was complex,
involving several areas of tension and challenges to implementation.
As I note, the QEF is a complex policy instrument designed to shift a
culture to embrace enhancement rather than assurance. It is characte-
rised by some distinctive policy mechanisms that embody an inventive
expression of a national higher education system.
The evaluative research suggests the way in which broad-brush
policies like the QEF raised several interrelated issues in connection with
change initiatives and implementation realities. As one of our respon-
dents said:

I think the basic argument still stands, which is that enhancement
has to be, to some extent, about adventurousness, about taking
risks, about trying things. Not in a hazardous way, but safeguarding
standards across the university, and of course the academic progress
of students, not imperilling them. But, often the people, who end up
on Quality Assurance Committees, are by nature, if you like, cautious
and defensive, and they like to stick to the rules (senior manager pre-
1992 university).

In summarising the greatest challenges for sustaining collegiality in qua-
lity within the Scottish approach, we have the following considerations:

• Having the professional courage to avoid playing safe and avoiding
risk (antithesis of pedagogic confidence);
• Challenging increased central control and bureaucratisation, which
diminishes the professional space for innovation;
• Continuing with an agenda that encourages practices of student
engagement-as-learning and engagement-as-representation;
• Addressing the myth that student learning culture has become pre-
dominantly instrumental rather than transformational; and
• Being aware of the countervailing pressures to collegiality of systemic
evaluative practices like league tables and the concomitant intensifi-
cation in work practices of academics (time, pressure points).

To use the well-known SWOT analysis as a way of summarising the
present ‘policy health’ of the QEF, we might have the following profile:

Strengths: the approach has created ‘buy-in’ by academics at all levels;
it has high legitimacy in the sector; it has attempted to embed a
culture of improvement and there is evidence that it has improved
the quality of the student experience.
Weaknesses: from a sector management point of view, the approach
has produced a lack of consistency in quality across the sector and
there is less standardisation. There is a strong rhetoric which has not
been matched by ground-level changes in practice.
Opportunities: it offers the possibility of sector-wide cultural change
that emphasises improvement in and support of teaching and learn-
ing practices; it is embedding an accountability culture based on
changes in practice rather than changes in ‘texts’ and symbolic rep-
resentations of systems.
Threats: there is a systemic tendency to retreat to assurance away from
enhancement (this is where the balance returns to control and manage-
ment from the centre); there may be a lack of political courage in the
face of international league tables; the possibility of the introduction
of externally derived targets to determine the direction of funding in
a period of contraction (‘when the going gets tough, risk gets going’).

Notes
1. The QAA asserts on its website that ‘our job is to safeguard quality and
standards in UK universities and colleges, so that students have the best pos-
sible learning experience’ http://www.qaa.ac.uk/Pages/default.aspx.
2. The indicators for which have been derived consensually by Scottish
Universities (through the Scottish Higher Education Enhancement Committee)
via a partnership with the QAA (Quality Assurance Agency) Scotland.
3. See http://www.sfc.ac.uk/reports_publications/reports_publications.aspx?
Search=QEF%20evaluation&Type=Reports%20and%20publications&
Sector=-1&From=dd/mm/yyyy&To=dd/mm/yyyy.

References
Fisher, G. (2009) ‘Exchange and Art: interdisciplinary learning’, in V. Bamber,
P. Trowler, M. Saunders and P. Knight (eds), Enhancing Learning, Teaching,
Assessment and Curriculum in Higher Education: Theory, Cases, Practices
(Buckingham: Open University Press).
Saunders, M. (2006) ‘The presence of evaluation theory and practice in educa-
tional and social development: toward an inclusive approach’, London Review
of Education, 4(2), 197–215.
Saunders, M., Trowler, P., Machell, J., Williams, S., Lent, N., Spencer, A. and
Knight, P. (2006) Enhancing the Quality of Teaching and Learning in Scottish
Universities: The Final Report of the First Evaluation of the Quality Enhancement
Framework to the Scottish Funding Council’s Quality Enhancement Framework
Evaluation Steering Committee (SFC, Edinburgh), http://www.sfc.ac.uk/informa-
tion/info_learning.htm. (accessed November 2012).
Saunders, M. (2009) ‘The Scottish way: A distinctive approach to enhancement:
introduction’, in V. Bamber, P. Trowler, M. Saunders and P. Knight (eds),
Enhancing Learning, Teaching, Assessment and Curriculum in Higher Education:
Theory, Cases, Practices (Buckingham: Open University Press), pp. 57–63.
Saunders, M., Trowler, P. and Bamber, V. (2011) Reconceptualising Evaluative
Practices in Higher Education: The Practice Turn (London: McGraw-Hill/Open
University Press).
Scottish Funding Council (2012) The Quality Enhancement Framework in Scotland,
http://www.qaa.ac.uk/SCOTLAND/ABOUTUS/Pages/Quality-enhancement-
framework-in-Scotland.aspx (accessed 22 September 2013).
Part III
Regional Setting
9
European Trends in Quality
Assurance: New Agendas beyond
the Search for Convergence?
Bjørn Stensaker

Introduction

While external quality assurance (EQA) can be seen as one of the most
visible results of European integration through the Bologna process in
the last few decades, new developments might question whether the
field of quality assurance is actually driven by the search for conver-
gence at the European level. This chapter identifies some current trends
in EQA, indicates possible implications, and discusses whether EQA is
at a critical stage in its developmental phase.
In the last few decades EQA has fulfilled various functions in higher
education (see Westerheijden et al., 1994). It has played an important
role in guarding quality when new providers enter higher education,
has provided useful information about quality to different stakehold-
ers in the sector, including governments and students, and not least
has played an important role in stimulating quality improvement in
education and training in general (Brennan and Shah, 2000). Of course,
the function of EQA in various European regions and countries differs
considerably (Rosa et al., 2006). In some, EQA has played an impor-
tant role as a regulative tool ensuring quality in deregulated and more
market-driven systems (Dill and Beerkens, 2010). In other regions and
countries, perhaps where institutions have already established their
own systems of quality assurance, EQA has played a role more related
to the development of these systems.
At the European level, a key ambition has still been that EQA should be
conducted in a way that would make regional and national differences
of lesser importance, and where the degree of convergence between
different EQA systems was sufficient to foster trust and mutual recogni-
tion within the European higher education area (Westerheijden, 2001).


European policy developments such as the establishment of the European
Standards and Guidelines (ESG), and the umbrella organisations sur-
rounding EQA at the European level, are key indications of this ambi-
tion (Stensaker et al., 2010). ENQA, the national quality assurance
agencies’ own interest organisation, has been a key actor in supporting
the spread of the ESG. EQAR, the European register for quality assurance
agencies, is also using the ESG as a vital element for allowing agencies
to be listed in the register. On this basis, one could argue that there has
been, and still is, a strong drive for increasing convergence between
various national EQA systems. However, since the Bologna process
seems to have lost some of its attractiveness, at least in many western
European countries, there are signs that national policy initiatives can
drive EQA into new directions and pathways beyond the search for
European convergence.

A brief overview of the historical development of EQA in Europe

There are a number of contributions that in a comprehensive and
detailed way have described the establishment and development of EQA
at the European level (see, for example, Schwarz and Westerheijden,
2004; Westerheijden et al., 2007), and the idea is not to provide another
overview here. However, to provide some context to more recent deve-
lopments it is useful to remind ourselves of the key developmental
stages in EQA, and of what EQA has delivered with respect to results
and outcomes.
A starting point that is important to remember is that EQA in Europe
was first established at the national level in some pioneering countries
(UK, France, The Netherlands and Denmark) before experimenta-
tion with EQA began at the European level during the mid-1990s
(Westerheijden et al., 1994). At this point, the idea was to drive policy
learning between countries, not least concerning methodologies and
their application across geographical borders. The Bologna process can
be said to have co-opted the ongoing EQA activities for realising the
aim of increased mobility and mutual recognition of degrees and stu-
dents. Through various ministerial meetings during the 2000s, EQA was
high on the political agenda, and as a result the ESG was established in
2005, thus establishing a common European framework for how quality
assurance should be undertaken – at both the institutional and QA
agency level. In this process, the interest organisation of the national
EQA agencies (ENQA) played an important role – in the development
of the ESG, in paving the way for the establishment of systems for EQA
in all European countries, and in the establishment of new agencies
with a specific responsibility for running such systems.
for quality assurance agencies operating in Europe – one could argue
that the European higher education sector is becoming more similar
to other sectors in society that have experienced increasing regulatory
attempts from the European level (see, for example, Levi-Faur, 2011). As
part of this development, one could argue that national governments
have lost power and influence domestically as new agencies have had
European standards to subscribe to and where they, as a consequence,
have become more autonomous from the governments that created
them. One could also argue that national governments have lost power
at a European level as increasing professionalisation and bureaucratisa-
tion of the whole field of EQA (see, for example, Stensaker, 2003) have
driven the ‘politics’ out of quality assurance discussions for the sake of
routines, checks and balances. The question is whether this trend will
continue, or whether we are witnessing signs of a changing context
surrounding EQA within Europe.

Perspectives on recent developments in EQA

European developments within EQA could be analysed from a number
of approaches. However, quality assurance has recently been linked
strongly to governance, regulatory matters and public policy analysis
(Westerheijden et al., 2007; Dill and Beerkens, 2010; Stensaker and
Harvey, 2011), an approach which is also pursued in the current chapter.
Of course, EQA is not an exemplary case for applying such a perspec-
tive, as the possibilities to impose binding regulations, especially within
higher education, are largely absent at the European level. Hence, various
voluntary approaches are the only way forward. The establishment of
voluntary standards as witnessed in EQA can, from this perspective,
be seen as a form of governance without government (Brunsson et al.,
2000), and as a way to steer behaviour through organisational measures
(Power, 2007).
Still, while a governance framework may be contested at the European
level, one could argue that this approach is very relevant to use at the
national level, not least since most EQA systems and agencies were
created by governments and their activities are also regulated by national
laws. The consequence is that national governments may still have
considerable influence over EQA domestically – and that the influence
of European developments should not be exaggerated. In a period where
the various economic and social challenges are prominent in Europe,
and where the ability to agree on joint action at the European level seems
to be more limited, new room to manoeuvre can also be opened up for
national governments that face expectations and challenges domesti-
cally. Not least it is possible to argue that ideas about the creation of
knowledge economies are still dominant and that market-inspired and
entrepreneurial initiatives are frequently mentioned and identified
as domestic solutions to global challenges (Olssen and Peters, 2005).
While particular concepts such as New Public Management may be
gone, there seem to be few alternative solutions available when political
initiatives are to be decided upon. Based on this, one could identify a
number of perspectives through which EQA might be contextualised.

EQA in an effectiveness perspective


For national governments, EQA may be conceived as an instrument
that was invented to solve particular problems (see also Westerheijden,
1999). These problems may be related to a variety of issues: privatisa-
tion, massification, de-regulation and so on (Dill and Beerkens, 2010).
Not least it is possible to find examples in a number of countries of
the link between political ambitions of improving the quality of edu-
cational programmes and provisions, and the aims and objectives of EQA.
Whether EQA actually has been able to deliver on the latter dimension
is more of an open question. Has EQA really ‘solved the problem’? There
is much evidence that EQA has had an effect on the professionalisation
of institutions, that it has also provided the public with more information
on higher education, and that it is driving institutional centralisa-
tion. However, there are fewer studies showing a clear link between EQA
and improvement in teaching and learning (Stensaker, 2003). Here, one
could question whether EQA in its current form is an effective govern-
ment tool. Are there other governmental instruments that could be
used instead, or that could be combined with EQA?

EQA in an efficiency perspective
For national governments EQA is also an issue that can be analysed
from an efficiency perspective. Any governmental measures imply using
resources, time and energy to deal with various political issues (Dill and
Beerkens, 2010). From a governmental point of view the costs associated
with the development of new instruments must – perhaps especially
when resources are scarce – be related to the potential benefits the
instruments create. From this perspective, the relevant questions are
whether the ‘problem’ is important enough to be given governmental
attention, or whether there might be ways to make the instrument
leaner and smoother (see also Massy, 2003). Especially at a time when
the economic outlook is not certain in Europe, governments might see
EQA as an area where public spending can be cut, or at least where crea-
tive solutions to saving time and resources can be tested.

EQA in an accountability perspective
National governments do have interests beyond solving effectiveness
and efficiency issues, especially when such issues are tricky to solve,
and where there is a need for the government to demonstrate that it is
still on top of the situation (Fisher, 2004). Hence, in the audit society it
is not only those that are exposed to reform that must be accountable;
the same also goes for governments (Stensaker and Harvey, 2011). From
this perspective, reform can in itself be seen as a form of accountability.
For example, governmental changes in EQA could be caused by the
need to copy the practices and systems that are seen as innovative or
popular – regardless of whether these are effective or efficient (see also
Westerheijden et al., 2014). Change becomes a sign that something is
being done, and that those in power are taking their responsibilities
seriously.
The three perspectives are not mutually exclusive. One can imagine
combinations of policy initiatives in EQA that may have effects on
effectiveness, efficiency and even in improving accountability at the
same time. What the three perspectives do have in common is a strong
link to change: an absence of change would suggest that none of
the three perspectives is very relevant to explaining the development
of EQA in Europe. We now turn to the current realities sur-
rounding EQA in Europe.

Recent policy developments in EQA in Europe

A recent survey undertaken by ENQA (2012) suggests that 60 per cent
of European agencies are planning to introduce major changes in their
approaches and practices related to EQA. While this percentage might
suggest that EQA is facing a major overhaul, one should bear in mind
that EQA in Europe has for quite some time been in a state of flux. For
example, a similar survey conducted by ENQA in 2008 revealed that
an even greater percentage of agencies (75 per cent) at that time were
planning to undertake major changes in their EQA procedures (ENQA,
2008). Earlier in the same decade a more thorough mapping of EQA
systems throughout Europe showed similar tendencies towards change
(Schwarz and Westerheijden, 2004). Of course, as illustrated in the two
earliest studies, the trend in procedures and practices at that time was
towards using accreditation as the dominant method in EQA. The current
picture is somewhat more blurred.
While accreditation remains a dominant method in EQA in Europe
(see also Stensaker, 2011), recent developments suggest that govern-
ments are in the process of rethinking their approach to EQA. A first
trend visible in current policy-making is related to the need to drill
deeper into the core of the quality problem. The backdrop driving this
agenda is the interest in learning outcomes and the results of higher
education in general. While increased professionalisation of higher
education can be witnessed as a side-product of EQA, it is still not seen as
evidence of improved student learning or better results in teaching and
learning (see, for example, Newton, 2000). Many European countries
are currently working on implementing national qualification frame-
works, and there are signs that some countries are linking qualification
frameworks to existing EQA arrangements as part of this process
(Norway is one example). The idea is of course that learning outcomes
should be used as the new standards for assessing, evaluating and
accrediting higher education institutions.
However, it is also possible to argue that the focus on learning
outcomes could have more dramatic effects on EQA. In the last couple of
years, Sweden has launched a new system where EQA has been radically
changed, and where the process-orientation so characteristic of EQA in
Europe has been replaced by a product-orientation where only the work
of students – as a proxy for what students have learned – is assessed.
Student work is collected, read by a nationally selected group of
academics, and institutions are rewarded if the outcome is seen as being
of high academic quality. In this system, what the institution is doing
with respect to quality assurance internally is of no relevance – it is only
the product that counts. This initiative can be interpreted from several
perspectives. It can be seen as going beyond structures, routines and
processes to address quality issues head on. It can be seen as a more effi-
cient way of assessing quality, where resources can be saved regarding
site visits and so on. It can also be interpreted from an accountability
perspective, where the procedure can be accused of measuring quality
in a very narrow way, but where the government demonstrates a con-
cern for results. Interestingly, the procedure launched in Sweden can in
many ways be seen as an equivalent to similar procedures in research,
such as the Research Assessment Exercise (RAE) in the UK.
A second trend in Europe – at least in countries that can be characterised
as being very mature in EQA – is to look for ways to save costs and
make EQA systems and procedures leaner (see also Power, 2007). These
political initiatives come in various ways, but two initiatives
stand out in particular. One is to critically assess to what extent a very
standardised EQA is equally effective throughout the higher education
landscape. As diversity is a key characteristic, not only in Europe, but
also at the national level, one could imagine that those institutions with
fewer resources or weaker students and staff might be more exposed
to quality risks than those with plentiful resources and high quality
students and staff. This kind of thinking is driving the ‘risk-based’
approach to EQA, where the basic idea is to establish a procedure for
identifying study programs or institutions at ‘risk’. An example of the
turn to a risk-based approach comes from England, where the QAA is
paving the way for a change from the traditional system of EQA to one
less focused on standardisation and more on the ‘needs’ of the
individual institution.
Other countries in Europe are trying out other cost-saving alterna-
tives. The development of national indicator systems or the launching
of national student surveys is quite common. Using indicators –
reported by the institutions themselves – perhaps with links to funding
can be quite cost-effective, and can be linked to various governmental
purposes beyond EQA. Finland is an example of a country that his-
torically has had a well-developed national database, but where the
national EQA system can be seen as quite moderate in scale and scope.
From an efficiency perspective, such an approach makes sense as costs
related to the governance of the whole higher education system are
distributed among various governance purposes.
The latter example is also of relevance to the third trend that can be
observed in Europe, namely the possible re-thinking of the ‘independence’
of EQA. The independence of EQA can be related to the establishment of
the ESG, and the notion that EQA should operate in a way that secured
its autonomy from both governmental and institutional interference.
This independence can, of course, be questioned, as many governments
have created rules and regulations that do give them quite some
influence on agency activities. Still, this independence has certainly
given agencies a great deal of leeway concerning methods and profile,
and some agencies have used this autonomy to find new lines of demar-
cation between themselves and the governments that established them
(see also Stensaker et al., 2010). However, as governmental agendas move
on to address new issues such as the need for excellence, the stimulation
of competition or the drive towards globalisation, one could argue that
agencies are being pulled closer into the ordinary governance structure
where the ‘independence’ of agencies is more problematic. In a period
where the ‘Europeanisation’ of EQA was important for boosting trust
and mutual recognition as part of the Bologna process, agency inde-
pendence was a key trust-building ingredient. In a period where one
could argue that the Bologna process is losing importance, independ-
ence is potentially a hindrance to effective national policy agendas.
The final trend is related to the emerging national policy agendas in
EQA, and to the different ideologies that are currently visible through-
out Europe in this area. EQA started out as a national responsibility,
and most agencies in Europe in this area are publicly initiated and
owned. Of course, private EQA actors have for a long time operated in
the higher education sector in Europe, but their impact and outreach
have been limited. With the establishment of the EQAR a change was
made that can fundamentally alter the whole EQA system in Europe.
Agencies listed in EQAR can, in principle, operate in another country,
subject to recognition by the government of that country. If
enough countries permit EQA agencies to operate in a Europe-wide
manner, a new market for EQA services will be created throughout
the continent. While the idea is old (see, for example, Westerheijden,
2001), it is currently being realised as some countries are allowing
agencies in other countries to operate within their borders. The most
prominent example of this change can be seen in Austria, where higher
education institutions may select their own EQA agency. However, there
are several other countries where discussions are ongoing about allow-
ing foreign agencies to enter their borders. In essence, this change can
be seen as a form of ‘privatisation’ of EQA, where national governments
no longer see this activity as a public responsibility. It is an issue that
can be left to the market to deal with (see also Dill and Beerkens, 2010),
and where efficiency can be gained as a consequence. It can also be
seen as an ideological solution and interpreted from an accountability
perspective. Not least, allowing for a market in EQA can theoretically
also boost innovation and creativeness and lead to more effective EQA
systems.

Possible implications concerning European EQA

Historically, many European EQA agencies were born at a time when
improvement and enhancement were high on the agenda, and a sub-
stantial number of agencies remain loyal to this purpose despite
the changes noted above (see, for example, ENQA, 2008, 2012). The
question one may ask is whether the governments that once established
the agencies still hold improvement and enhancement high on
their agenda. Examples can be given that show a very diverse picture.
Perhaps the picture is even so diverse that one could argue that the cur-
rent changes in European EQA are threatening the level of convergence
needed to maintain the European Higher Education Area. At least,
one could argue that national agendas in quality assurance currently
are showing greater dynamism than those found at the European level,
and that the policy discussions on EQA are becoming more domestically
focused. The implications of the current developments are not easy to
identify. The various agendas and purposes driving the changes noted
will most likely imply much more diversity in European EQA. Within
this diversity, some scenarios can still be predicted concerning the
future roles of agencies.
Scenario one may describe a situation where agencies can address
current dynamics within the existing EQA paradigm, implying less
radical change mainly concerning methodology and on-going activi-
ties. The argument is that those agencies that are located in higher
education systems that can be characterised as ‘mature’ with respect to
their experience in quality assurance, but also those that are located in
countries where new policy demands are raised, are in a situation where
they need to demonstrate creativity and innovation as a response to
effectiveness-, efficiency- and accountability-driven agendas. The big
challenge for them is undoubtedly that they were originally designed
as a counterforce to the creativeness and dynamism of institutions,
programmes and educational offers. They were designed to create order,
system and trust through processes of standardisation. The Bologna
process and the existence of the ESG, the increasing networking within
quality assurance, and the growing collaboration across national
borders have in addition established so much consensus, norms and
‘tacit’ agreement as to how quality assurance should be conducted
and organised that a change toward innovation and creativity can be
difficult to achieve. There is a risk that too much consensus in the field
of EQA could hinder creativity and innovation. To succeed, the agencies
need to find a delicate balance between standardisation and innovation
where they must maintain the drive towards professionalisation and
standards in the area of quality assurance, but where they must be open
to more experimentation in how EQA is undertaken.
In practice, this means that agencies need to be more reflective in
their understanding and application of some of the basic concepts
and understandings of current EQA: the ‘general method’, peer review,
self-assessment, stakeholder involvement, participation and owner-
ship, to mention just a few. As EQA has developed over the years at the
European level, agencies and the whole higher education sector have
started to develop some basic beliefs about how this activity should
be organised and conducted. Typically, the need for an independent
agency and a combination of self-assessment and a peer review process
are among the key ingredients. These beliefs paved the way for the ESG
through which increased formalisation and the spread of these beliefs
took place, but as noted, these beliefs are now being questioned. Given the
new policy agendas linked to excellence, globalisation or competi-
tion emerging in various countries, agencies might be forced to take a
renewed look into their basic operations and especially how accreditation
and programme evaluation are conducted. Not least, the so-called
stakeholder involvement in these EQA schemes may be a candidate for
change, forcing agencies to become much more selective regarding
candidates for panels and so on. More international panels and
academics with higher prestige selected for those panels are one likely,
although less dramatic, change.
However, methodological innovations are also likely implications.
For example, while there is much evidence related to the value of self-
assessment as part of an EQA process (D’Andrea and Gosling, 2005),
agencies may face situations where the purpose of this process is
questioned. Not least, agencies may be challenged as to the resources
and time allotted to self-evaluation processes, and may be encouraged
to find alternative ways of gathering information to be used in the
evaluation. Agencies may also be challenged by external stakeholders
that may interpret self-evaluation processes as biased, hindering
independent comparisons between study programs or between institu-
tions. In addition, one could also question the information collected
as part of an EQA process. In general, and despite the current emphasis
on learning outcomes, there is still a lot of emphasis on input variables.
Creativeness in identifying and analysing output variables will most
likely be one of the issues occupying the agenda of many agencies.
Hence, within the existing standards of EQA, there is much room
for agency entrepreneurship. Having several peer review committees
working side-by-side in an evaluation is one example of a design that
might trigger more discussions and bring forward more divergent knowl-
edge in an EQA process. Combining EQA for teaching and learning
with EQA for research is another possibility through which higher
education institutions could be assessed in a more comprehensive and
integrated way than today. Student involvement and engagement could
be stimulated by giving students much more responsibility and power
in EQA than they currently have. Undoubtedly, more examples could
be added (see, for example, Ewell, 2008).
Scenario two describes a different situation, implying quite new tasks
and activities to be undertaken by agencies. The argument here is that
increased competition will require agencies to be better positioned in
the market and lower their own risks, and that more activities and new
roles can contribute to minimising such risk. While some agencies
already undertake some kind of consultancy work as part of their opera-
tions (see, for example, ENQA, 2012), one could imagine that these
kinds of tasks would become much more important in a more competi-
tive market for EQA in Europe. If agencies are allowed to compete across
national borders, the competition will drive the institutional demand
for information on how to survive and thrive in this situation, and it
will stimulate increased agency competition in the services they can
provide, with consultancy a likely winner. De-regulation and increased
competition across national borders in EQA create a situation analogous
to what happened when the financial sector was de-regulated and tradi-
tional firms focusing on financial audit suddenly found themselves in a
more competitive situation where companies demanded more proactive
advice on how to improve their economic efficiency. In the financial
sector, the situation changed the audit business in a radical way, mak-
ing consultancy the most important dimension – both concerning their
activity profile and their surplus (see, for example, Greenwood et al.,
2011). For agencies, this scenario would have implications concerning
both future recruitment of staff and the way they relate to universities
and colleges. Agencies would most likely need to recruit staff who are
seen as having the skills and competencies needed to give strategic
advice to the institutional leadership. Agencies would also need to market
themselves more actively towards the sector, and just as in the financial
sector, it is the competence and profile of the staff that is the key
competitive advantage.
However, increased consultancy activities could also be quite
demanding for agencies in other ways. First, consultancy activities
may not always be compatible with the ESG regarding the openness and
transparency demanded of agencies, and this can pose a challenge for
many agencies. Secondly, if agencies maintain some of their traditional
evaluation and accreditation activities, they will need to find ways to
balance two very different sets of activities within one organisational
structure.
The functions of meta-organisations such as EQAR and ENQA may
also change as the activities of their members are transformed. For
example, the purpose and functioning of the meta-review of the diffe-
rent agencies involved in EQA may have to adapt to new circumstances.
Here one can imagine separate reviews being conducted to target dif-
ferent activities, or even the development of new meta-level agencies.
However, one can also imagine that the reputation of some agencies will be
strong enough to enable them to skip European-level reviews altogether.
Hence, there is perhaps also a need for innovation and creativity for
ENQA and EQAR. The way that some reviews of agencies are currently
organised and conducted suggests that there is little added value to
them beyond granting membership.
Scenario three describes a situation where the competitive market
for EQA in Europe also forces agencies to make decisions that are far
more dramatic than just adapting to new activities and roles. If a highly
competitive market is developed in Europe, one may also foresee a
situation where some agencies run the risk of being driven out of busi-
ness altogether. In this scenario, the competition will trigger actions
that are common in other competitive markets, forcing agencies into
mergers or strategic collaboration, and the number of actors involved
in EQA will be dramatically reduced. Here, one could imagine strategic
collaboration taking place beyond the borders of Europe, with a few
conglomerates of agencies dominating the global market for both EQA and
consultancy (see also Stensaker, 2011, for a European–US comparison
in quality assurance). If this market becomes large and profitable, one
should not rule out the possibility of general management and consul-
tancy businesses becoming interested in entering the competition, with
the result that higher education as a sector will be dominated by the
same management and consultancy firms found in other sectors.

Conclusion

The main argument in this article is that European EQA is developing
in a complex way, and that we may currently be at a turning point
concerning how this activity will be organised in the future. Due to
developments taking place at the national level, different scenarios can
be developed concerning the future of EQA. While some of the scenarios
may be interpreted as quite gloomy – at least from an agency point of
view – the situation should not be interpreted in a deterministic way.
There are many possibilities to further develop EQA beyond the trends
and scenarios laid out in this article. However, alternative routes forward
require some joint actions in which national authorities, the agencies,
and also the higher education institutions all have a role to play. At
the national level, authorities need to develop a more nuanced view of
the use and purpose of EQA. The European Standards and Guidelines
should not be seen as a hindrance to national policy-making, although
one may suspect that this is the case in some countries. At the agency
level, there is a need for more experimentation in methods, organisa-
tion and design of EQA processes – experimentation that eventually
should be included in the ordinary processes, and not as a side-show
for increasing the legitimacy of the agencies. At the institutional level,
EQA should not be viewed as a structure to be replicated internally in
the quality assurance systems developed. On the contrary, there are
a number of arguments that support the link between institutional
quality assurance systems and the overall strategies of the individual
institution. There are still too many higher education institutions that
consider quality assurance systems as internal control systems, and
not as tools for strategic change. This is not necessarily the fault of the
institutions alone, but should rather be considered as a form of ‘system
failure’. While standardisation has indeed brought European EQA for-
ward in many respects, there is currently a need for some innovation to
take it to the next level.

References
Brennan, J. and Shah, T. (2000) Managing Quality in Higher Education: An
International Perspective on Institutional Assessment and Change (Buckingham:
Open University Press).
Brunsson, N., Jacobsson, B. and associates (2000) A World of Standards (Oxford:
Oxford University Press).
D’Andrea, V.M. and Gosling, D. (2005) Improving Teaching and Learning in Higher
Education: A Whole Institutional Approach (Maidenhead: Open University Press).
Dill, D.D. and Beerkens, M. (2010) Public Policy for Academic Quality (Dordrecht:
Springer).
ENQA (2008) Quality Procedures in the European Higher Education Area and Beyond –
Second ENQA Survey (Helsinki: ENQA).
ENQA (2012) Quality Procedures in the European Higher Education Area and Beyond –
Third ENQA Survey (Brussels: ENQA).
Ewell, P.T. (2008) US Accreditation and the Future of Quality Assurance (Washington
DC: The Council for Higher Education Accreditation).
Fisher, E. (2004) ‘The European Union in the age of accountability’, Oxford
Journal of Legal Studies, 24(4), 495–515.
Greenwood, R., Raynard, M., Kodeih, F., Micelotta, E.R. and Lounsbury,
M. (2011) ‘Institutional complexity and organizational responses’, Academy of
Management Annals, 5(1), 317–71.
Levi-Faur, D. (2011) ‘Regulatory networks and regulatory agencification: Towards
a single European regulatory space’, Journal of European Public Policy, 18(6),
810–29.
Massy, W.F. (2003) Honoring the Trust: Quality and Cost Containment in Higher
Education (Bolton, Massachusetts: Anker Publishing).
Newton, J. (2000) ‘Feeding the beast or improving quality? Academics’ percep-
tions of quality assurance and quality monitoring’, Quality in Higher Education,
6(2), 153–63.
Olssen, M. and Peters, M.A. (2005) ‘Neoliberalism, higher education and the
knowledge economy: From the free market to knowledge capitalism’, Journal
of Educational Policy, 20(3), 313–45.
Power, M. (2007) Organized Uncertainty: Designing a World of Risk Management
(Oxford: Oxford University Press).
Rosa, M.J., Tavares, D. and Amaral, A. (2006) ‘Institutional consequences of qual-
ity assessment’, Quality in Higher Education, 12(1), 145–59.
Rozsnyai, C. (2003) ‘Quality assurance before and after Bologna in the Central
and Eastern Region of the European Higher Education Area with a focus on
Hungary, the Czech Republic and Poland’, European Journal of Education, 38(3),
271–84.
Schwarz, S. and Westerheijden, D.F. (eds) (2004) Accreditation and Evaluation in
the European Higher Education Area (Dordrecht: Kluwer Academic Publishers).
Stensaker, B. (2003) ‘Trance, transparency and transformation. The impact of
external quality monitoring in higher education’, Quality in Higher Education,
9(2), 151–59.
Stensaker, B. (2011) ‘Accreditation of higher education in Europe – Moving
towards the US-model?’, Journal of Educational Policy, 26 (4), 757–69.
Stensaker, B. and Harvey, L. (2011) Accountability in Higher Education (New York:
Routledge).
Stensaker, B., Harvey, L., Huisman, J., Langfeldt, L. and Westerheijden, D.F. (2010)
‘The impact of the European standards and guidelines in agency evaluations’,
European Journal of Education, 45(4), 577–87.
Westerheijden, D.F. (2001) ‘Ex oriente lux? National and multiple accredita-
tion in Europe after the fall of the wall and after Bologna’, Quality in Higher
Education, 7(1), 65–76.
Westerheijden, D.F. (1999) ‘Where are the quantum jumps in quality assurance?
Developments of a decade of research on a heavy particle’, Higher Education,
38(2), 233–54.
Westerheijden, D.F., Brennan, J. and Maassen, P.A.M. (eds) (1994) Changing
Contexts of Quality Assessment (Utrecht: Lemma/CHEPS).
Westerheijden, D.F., Stensaker, B. and Rosa, M.J. (eds) (2007) Quality Assurance in
Higher Education (Dordrecht: Springer).
Westerheijden, D.F., Stensaker, B., Rosa, M.J. and Corbett, A. (2014) ‘Next genera-
tions, catwalks, random walks and arms races: Conceptualising the development
of quality assurance schemes’, European Journal of Education.
10
Recent Trends in US Accreditation
Judith S. Eaton

Introduction

Accreditation plays a central role in higher education in the US, with
more than 8,200 institutions and 20,000 programs accredited by one or
more of the 85 recognised accrediting organisations operating through-
out the country (Council for Higher Education Accreditation, 2013).
This paper examines three dimensions of US accreditation: what it has
been, recent trends affecting its operation, and its likely future. It con-
cludes by contrasting this likely future with a more desirable scenario,
urging that the higher education and accreditation communities work
to realise a future of the desirable rather than the likely.

What US accreditation has been

Accreditation in the US is a very mature enterprise, dating back more
than 100 years. ‘Accreditation’ is a nongovernmental means of assuring
and improving academic quality in colleges, universities and programmes
beyond secondary school. Initially established to define ‘college’ and to
assist with transfer of credit among institutions, accreditation was created
by higher education itself, not government (Orlans, 1975). To this day, it
is managed and funded by colleges and universities, and in 2011 was a
$114 million enterprise. Accreditation is a collegial, volunteer enterprise,
with faculty and academic administrators serving as members of accredi-
tation decision-making bodies and on teams that visit institutions as part
of an accreditation review. More than 19,000 such volunteers were active
in 2011. Accreditation activity is decentralised, with the 85 recognised
accrediting organisations functioning as separate and independent legal
entities (Council for Higher Education Accreditation, 2012).

Accreditation is peer-based, trust-based and mission-based. It was
built on a commitment to self-regulation: that colleges and universities
have primary responsibility for their operation and effectiveness, not
government or other external actors. To carry out this self-regulation,
accreditation relies on peer review, not government: academics review-
ing academics or professionals reviewing professionals. Accreditation
review begins with institutional mission: that the purpose of a college or
university is a central factor in determining its academic quality.
Accreditation is a vehicle through which academics have provided
leadership for quality through review of one another’s work, whether
teaching, research or service. Professionals have come together to
examine each other based on a shared commitment to the core values
of self-regulation, peer review, a commitment to mission, institutional
autonomy and academic freedom. Accreditation has meant the aca-
demic community establishing and maintaining norms for quality on
its own, independent of government or any other external forces. It is
an extraordinary example of effective civil society.

Six trends

Six recent trends are having a significant impact on accreditation as it
was initially conceived and has operated. These are (1) the driving force
of accountability, (2) the growing dominance of government, (3) the
advance of a utilitarian approach to higher education, (4) the impact of
innovation or ‘disruptive technologies’, (5) the increasing importance
of international quality assurance and (6) a questioning of the faith in
the core values that have been at the centre of accreditation. The likeli-
hood of a major transformation of accreditation is greater at this time
than at any other time in its history.

The driving force of accountability


‘We must have greater accountability’ has been at the heart of many
discussions about accreditation, especially since the release of the
first major government report on higher education in 20 years, the
2005–2006 report of the US Secretary of Education’s Commission on
the Future of Higher Education (US Department of Education, 2006).
As higher education has both grown and become more costly for stu-
dents and as government financing of higher education has increased,
calls for greater accountability from accreditation have also increased.
‘Accountability’ is about colleges and universities providing evidence
of student achievement and institutional performance. It is about
greater transparency to the public. It is about building capacity for
comparisons among institutions as a means of judging quality (The
Chronicle of Higher Education, 2013).
Increasingly, accountability is also about letting others decide quality.
For the first time in the history of higher education, actors outside the
academy are coming to play a dominant role in judging quality. This
takes place through ranking systems such as US News and World Report (US
News and World Report, 2013), foundations seeking to establish account-
ability practices in higher education such as the Lumina Foundation’s
Degree Qualifications Profile (Lumina Foundation, 2011) and through
government and private sector interactive databases such as the US
Department of Education’s (USDE) College Navigator (US Department of
Education, National Center for Education Statistics, 2013) and Education
Trust’s College Results Online (The Education Trust, 2013). While these
approaches may be useful and valuable, they are not always carried out by
individuals with experience and expertise in higher education.
While accountability in accreditation is essential, the current empha-
sis on greater accountability has the potential to upend the traditional
collegial approach to quality that has characterised this enterprise to
date. It can transform accreditation from a collegial activity to primarily
a compliance review.

The growing dominance of government


US accreditation, in contrast to quality review in most other countries,
has always been a private sector or nongovernmental enterprise. At
the same time, however, accreditation does have a relationship with
government, as do many private sector organisations and institutions.
This began in the 1950s, when accreditation entered into a voluntary
partnership for quality with the US federal government.
The government, instead of developing its own capacity for evaluation
of the quality of higher education, decided to rely on nongovernmental
accrediting organisations for this purpose. This reliance was accompanied
by an agreement from the accrediting organisations to undergo some
review by government, a process that has come to be called ‘recognition’.
Government needed this assurance of quality from accreditors to make
sure that the federal money going to colleges and universities for
student grants and loans, research and programme development would
be well spent. Today, these federal funds amount to $175 billion annu-
ally, more than one-third of all spending on higher education. An insti-
tution or programme receiving federal funds had to be accredited by an
accrediting organisation that the federal government found satisfactory.
Accreditation remained a nongovernmental enterprise, but underwent
some federal review, which continues to this day.
The ‘growing dominance of government’ refers to how this recog-
nition relationship, over the years, has been the primary vehicle for
the steady growth of federal authority over accreditation and thus the
determination of academic quality in higher education. While the US
does not have national standards, rankings or a national qualifications
framework for colleges or universities, USDE undertakes a periodic
review of accrediting organisations that is both detailed and prescrip-
tive, judging the day-to-day operation of accrediting organisations
and, increasingly, the decisions they have made about the accredited
status of colleges and universities. In the US, it is becoming common to
hear that the federal government is transforming itself into a ministry
of education – moving from a reliance on colleges, universities and
accreditors for judging academic quality to a reliance on the federal
government – through accreditation.
The expanding role of government has the potential to undermine
both the collegial improvement dimension of accreditation as well as
peer review. It is a concern for faculty because this government domi-
nance has the potential to challenge academic freedom – the freedom
of faculty to decide what is to be taught, to whom, according to what
standards and with what colleagues. It is a concern for academic leaders
because of the potential to compromise institutional autonomy and the
emphasis on mission that has long characterised higher education and
been central to its diversity.

The advance of the utilitarian


Not only in the US, but in a number of other countries as well, a
good deal of the discussion surrounding higher education is less and
less about its central role in the intellectual development of students.
Instead, the conversation focuses more and more on the economic
development role of colleges and universities. This is what some call a
‘corporatisation’ of higher education or creating an ‘industrial’ model
of higher education (Thrift, 2012). In this context, degrees are valued
primarily to the extent that they lead to employment. The worth of a
degree is measured by the earnings of students, often described as return
on investment. The quality of a college or university is tied to how
many students are placed in jobs and whether those jobs are a direct
consequence of the students’ educational experiences. Even discussions
these days about the value of the liberal arts and liberal education are
often tied to their impact on obtaining and maintaining employment.
Advocacy for liberal education for its intrinsic value, education for the
life of the mind or intellectual development, is less often part of the
national dialogue.
For accreditation, this utilitarian emphasis has meant that assur-
ing and improving quality now involves additional attention to job
placement rates and the relationship between the debt that students
incur to pay tuition and fees and their subsequent earnings. It means
that accreditors are taking a closer look at the number of credits that
students earn and the length of time to gain a degree in relation to
employment. This means more accreditor attention to the economic
development role of a college or university rather than its intellectual
development role.

Innovation or ‘disruptive technologies’


Following the enormous growth in enrolments that has taken place in
the for-profit higher education sector in the US, massive open online
courses or ‘MOOCs’ have emerged. MOOCs are online, non-credit, free
coursework available worldwide, often provided by well-known and
highly respected professors. They are considered by many to be the
cutting edge of innovation. MOOCs may be accompanied by badges,
or evidence that students have completed a MOOC course. They have
received significant attention, in part because the establishment of
MOOCs has been led by elite research universities in the US such as
Stanford University and the Massachusetts Institute of Technology.
At the same time, a renewed emphasis on competency-based educa-
tion and assessment of prior learning has emerged, accompanied by
extensive discussion that reflects a growing acceptance and valuing of
online coursework. Another innovation, coursework created and offered
outside a college or university at very low tuition, is increasingly avail-
able from private sector companies like StraighterLine (StraighterLine,
2013). This is reducing the price of higher education and is available
to those who do not want to enrol and attend a traditional college or
university. Some of this coursework has been accepted for credit by
traditional institutions.
Some colleges and universities are experimenting with a three-year
undergraduate degree, in contrast to the conventional four-year bac-
calaureate that has been a staple of US higher education. Some states
are encouraging colleges and universities to develop a $10,000 under-
graduate degree, also reducing the price of higher education to an
amount considerably below the average tuition required to earn the
baccalaureate.
Whether characterised as innovation or disruption, these changes
offer convenient alternatives to traditional higher education and its
cost to students. A student can obtain some credit or other recognition
of post-secondary work without attending a college or university and
without having to resort to the complicated and costly student financial
aid system that is used to fund this attendance. As might be expected,
these developments have attracted both supporters and critics.
For accreditation, the challenge is to determine whether its quality
review system, geared to institutions, credits and degrees, can or should
extend to MOOCs or other forms of extra-institutional education. This
conversation is just beginning in the US, but has the potential to result
in a major shift in the focus of accreditation, if not move it to the
sidelines in the review of the quality of higher education. If students
increasingly choose to obtain some kind of higher education through
experiences that do not carry credit, do not lead to degrees and are acquired
outside a college or university, what would be the role of accreditation?
Who would determine quality?

International quality assurance


Recent developments in international quality assurance are also affect-
ing the US in both higher education and accreditation. These include
the European Bologna Process and similar processes in other regions
of the world, the development of qualifications frameworks, the
expansion of ranking systems, and projects such as the Organisation
for Economic Co-operation and Development’s Assessment of Higher
Education Learning Outcomes that seek to establish international indi-
cators of student achievement (Organisation for Economic Cooperation
and Development, 2013).
Up until these developments, US accreditation, although nongovern-
mental, had been similar in practice to quality assurance in other coun-
tries. All involve standards-based or guidelines-based review of academic
quality that is focused on colleges, universities and programmes. All engage
faculty and academic administrators. All share a focus on either assuring
quality or improving quality or both through self-review and peer review.
However, the US now differs in that it has not developed a number of
the tools for quality used by other countries or used regionally, either
through accreditation or in addition to accreditation. The US does
not have national standards or guidelines for academic quality such
as the European Standards and Guidelines (European Association for
Quality Assurance in Higher Education, 2011). It does not have national
indicators of student achievement. There is no national qualifications
framework such as those found in many countries. Rankings do exist,
but they are entirely voluntary and have emerged from the commercial
sector, not colleges, universities, accreditation or government. In addition,
the US does not participate in regional standard setting, qualifications
frameworks or rankings.
As mobility grows and more students are engaged in cross-border
higher education, there is pressure on US colleges and universities to
provide information about their quality that goes beyond accredited
status to their effectiveness as judged by qualifications frameworks or
rankings. Thus, US News and World Report rankings as well as the major
international rankings for colleges and universities are playing a larger
role in determining quality (Hazelkorn, 2011). Implementation of the
Lumina Foundation Degree Qualifications Profile may provide informa-
tion similar to that of qualifications frameworks in other countries. It
will be up to accreditors and institutions to determine how or whether
to address international approaches to judging quality.

Questioning faith in core values


It is the impact of these trends in the aggregate that raises the question
of whether this is the beginning of a diminishing faith in core values
that have been at the heart of accreditation and higher education. The
emphasis on greater accountability, a growing role for government in
judging quality, an essentially utilitarian approach to the value of a
collegiate education, innovation leading to higher education outside
of traditional institutions and the importance of accommodating key
features of international quality assurance such as qualifications frame-
works and rankings raise fundamental questions.
What will happen to the importance and role of the core values of
institutional autonomy, reliance on institutional leadership to judge
quality, academic freedom, peer review, self-regulation and commit-
ment to mission? Some of the trends discussed above challenge the
historically central purpose of higher education: the intellectual devel-
opment of the student. Is education for work now more important than
the development of intellectual skills? Does the emergence of higher
education outside traditional colleges and universities mean that the
core values will no longer anchor the higher education enterprise?

A likely future for US accreditation

Because higher education is essential and expensive, future judgements of its
quality will likely be considered too important to be left solely
to higher education and accreditation. Nongovernmental accreditation
will increasingly function as an arm of the government, with federal
agencies directing its operation, more often deciding what counts as
quality and involving themselves in the accreditation of individual colleges and
universities. Emphasis on collegial peer review for quality improvement
will be crowded out, if not diminished, by emphasis on compliance with
standards to achieve accountability. This is in contrast to the historical
approach of government in examining the basic soundness of accredit-
ing organisations and holding them accountable for quality.
A number of academics are gravely concerned that this larger role for
government might lead to a greater standardisation of quality expectations.
Individual institutions may come to be viewed as part of a larger network
in which, ultimately, some centralised authority, government or other-
wise, sets standards and judges quality. This development can undermine
the leadership role of individual institutions, the role of mission in judg-
ing quality and the autonomy of institutions to set their own direction.
This has a potential to limit academic freedom in the academy as well.
The question of ‘who decides quality?’ already yields multiple
answers: accreditors, rankings, employers and government. Quality
judged by a range of sources is likely to continue, with the result that
accreditation will function as one of these sources and no longer as the
dominant authority. Accreditation’s longstanding position as the primary
means of assuring and improving quality is likely to shift. The concern
here is not that accreditation’s voice would be one among many. Rather,
it is that the other voices often lack the experience and expertise that
academics bring to the examination of higher education quality. This
is likely to be exacerbated by the continued growth of innovations that
expand higher education offerings apart from traditional institutions, an
arena in which accreditation, for the most part, does not operate.
More government authority over quality, greater accountability, less
opportunity for academic leadership at the institutional level and with
faculty – this is the likely future for accreditation. Whether higher edu-
cation will benefit from these changes is not clear. What is clear is that
the fundamentals on which accreditation has been built and operated
for more than one hundred years will shift.

A more desirable future

The likely is not the inevitable. There are steps that accreditation and
higher education can take to achieve a balance between the trends and
the historically effective features and values of accreditation.
The academy needs to take more public ownership of academic quality
to balance the growing interest of government and others in judging
quality. While this ownership may be evident to many within higher
education and accreditation, it is less visible to government and the
public. This more-public ownership can take many forms, but needs to
be centred on evidence of effective performance of colleges and uni-
versities. What skills do students develop? What have they achieved?
What happens to students who graduate or complete other educational
goals? Colleges, universities and accreditors need to answer these
questions.
Governmental and other external demands for accountability can
be balanced by higher education and accreditation providing
more leadership in defining accountability expectations and expanding
effective accountability practices, tying accountability to an academic
structure that places primary emphasis on academic leadership from
faculty and academic administrators in colleges and universities. If
higher education and accreditation influence accountability norms,
these can be better aligned with the values of autonomy and academic
freedom.
The academy needs to reflect on the accreditation–federal govern-
ment relationship, how this might be changed or even whether it
can be sustained. This relationship, as indicated above, is voluntary
on the part of accreditors. Can there be a rethinking of the distribu-
tion of responsibilities between accreditation and government such
that primary responsibility for quality continues to rest with colleges
and universities? How might government have sufficient confidence
in accreditation such that it remains outside the arena of academic
judgements? Higher education and accreditation involve large
sums of federal money and thus must be accountable and answer-
able to government. The issue is not whether they are accountable,
but how they are answerable and how to maintain the academy’s
leadership role in judging quality while meeting accountability
expectations.
To address both the increasingly utilitarian approach to higher educa-
tion and emerging innovation that places the educational experience
outside colleges and universities, a strong public case needs to be made for
the centrality of intellectual development and the value of the traditional
collegiate experience – and whether, and how, this is obtainable outside
the institutions. At least some of the innovations discussed above are likely to
be sustained. If they play a major role through large numbers of students
preferring these alternatives to traditional institutions, it needs to be clear
whether or not these newer approaches to higher education can and will
focus on intellectual development.
Internationalisation of higher education and quality assurance is
a development in which the US enthusiastically participates. Over
time, colleges, universities and accrediting organisations will develop
means to assure effective communication about quality, whether or
not structures such as qualifications frameworks or rankings emerge
in the US.
A more desirable future is one of balance, one that sustains the valu-
able features and core values of accreditation and quality review while
addressing the expectations of greater accountability and flexibility
inherent in the recent trends.

Summary

The current trends affecting US accreditation constitute a challenge to
the fundamentals of accreditation and the institutions and programs that
are accredited. The likely future for accreditation if these trends are not
influenced in some way is sobering. Accreditation is less likely to be the
dominant means of judging academic quality, but one of a number of
voices judging quality, voices that often lack academic expertise and experience.
It will increasingly be controlled by government, with implications
for the independence of not only accreditation, but also colleges and
universities. Perhaps of greatest significance, the core values that
accompany accreditation – self-regulation of academic quality, peer
review, institutional autonomy, the importance of institutional mission,
institutional academic leadership and academic freedom – may them-
selves be diminished or transformed.
This likely future need not prevail, however. If the accreditation and
higher education communities work together, they can balance govern-
ment influence and calls for accountability by sustaining institutional
and faculty academic leadership for accountability as framed by the
academy. They can further engage innovation to assure that creative
change in higher education is accompanied by commitment to a vision
of intellectual development of all students. Accreditation and higher
education can challenge the current focus on the utilitarian in higher
education through emphasis on liberal education and the liberal arts and
their centrality in the society. Above all, through these actions, accredi-
tation, colleges and universities will be sustaining and enhancing the
core values that have produced a higher education enterprise of extraor-
dinary access and quality for many years.
References
Council for Higher Education Accreditation (2012) CHEA Almanac of External
Quality Review (Washington, DC: Council for Higher Education Accreditation).
Council for Higher Education Accreditation (2013) Database of Institutions and
Programs Accredited by Recognized United States Accrediting Organizations, http://
www.chea.org/search/default.asp (accessed 22 September 2013).
European Association for Quality Assurance in Higher Education (2011) ENQA
Position Paper on Transparency Tools, 4 March.
Hazelkorn, E. (2011) ‘Questions abound as the college-rankings race goes global’,
The Chronicle of Higher Education, 13 March, http://chronicle.com/article/
Questions-Abound-as-the/126699/ (accessed 23 September 2013).
Lumina Foundation (2011) The Degree Qualifications Profile, http://www.lumina
foundation.org/publications/The_Degree_Qualifications_Profile.pdf (accessed
23 September 2013).
Orlans, H. (1975) Private Accreditation and Public Eligibility (Lexington, MA:
Lexington Books, D.C. Heath and Company).
Organisation for Economic Co-operation and Development (2013) Testing stu-
dent and university performance globally: OECD’s AHELO, http://www.oecd.
org/education/skills-beyond-school/testingstudentanduniversityperformance
globallyoecdsahelo.htm (accessed 23 September 2013).
StraighterLine (2013) Earn College Transfer Credit, http://www.straighterline.com/
how-it-works/credit-transfer/ (accessed 23 September 2013).
The Chronicle of Higher Education (2013) ‘College Completion: Who graduates
from college, who doesn’t and why it matters’, http://collegecompletion.
chronicle.com/about/ (accessed 23 September 2013).
The Education Trust (2013) College Results Online, http://www.edtrust.org/issues/
higher-education/college-results-online (accessed 23 September 2013).
Thrift, N. (2012) ‘The future of big ed’, The Chronicle of Higher Education,
6 December, http://chronicle.com/blogs/worldwise/the-future-of-big-ed/31122
(accessed 23 September 2013).
US Department of Education (2006) A Test of Leadership: Charting the Future of US
Higher Education: A Report of the Commission Appointed by Secretary of Education
Margaret Spellings, http://www.ed.gov/about/bdscomm/list/hiedfuture/index.
html (accessed 18 September 2013).
United States Department of Education, National Center for Education Statistics
(n.d.) College Navigator, http://nces.ed.gov/collegenavigator/ (accessed
21 September 2013).
US News and World Report (2013) http://www.usnews.com/rankings (accessed
23 September 2013).
Williamson, J. (2012) ‘Massive online open courses: What they are and how
they help students’, Distance Education.org., 30 April, http://www.distance-
education.org/Articles/Massive-Online-Open-C (accessed 21 September 2013).
11
Quality Assurance in Latin America
María José Lemaitre

Trends and challenges of higher education

Higher education has experienced significant changes from a social
perspective. From a relatively enclosed situation, centred in universities,
focused on theoretical and conceptual teaching and learning in the arts,
sciences and humanities, and in advanced research and scholarship, it
has moved to centre stage in most countries, with an increasing focus
on preparing its graduates for the labour market. It is offered by diffe-
rent providers, to a large and diversified student population, in a wide
range of teaching, research, consultancy and service functions.
A review of more than 20 countries carried out by OECD has identified
a number of major trends in higher education, which are briefly outlined
in the following pages. One of the more significant ones, however, is
summarised in the name of the report: Tertiary Education for the Knowledge
Society. The expansion of higher education systems, the diversification of
provision and the increased heterogeneity of the student body have made
it necessary to open the field from the traditional view of higher educa-
tion to the wider one of tertiary education, to reflect the growing diversity
of institutions and programmes (OECD, 2008, p. 25). While this widening
view of tertiary education is necessary, it is also important to distinguish
tertiary education from post-secondary education, which covers a far wider
range of programmes, with very different requirements and characteristics.
The main trends and contextual developments identified by OECD
are detailed in the following sections.

Expansion of tertiary education systems


The student population doubled between 1991 and 2004, with the most
significant increases in East Asia and the Pacific, Sub-Saharan Africa and

South and West Asia. North America and Western Europe are the only
regions of the world where growth is below average, but this can be
explained by the high coverage already achieved in those regions. Most
OECD countries show participation rates of more than 50 per cent for
a single age cohort, and participation rates are also increasing in other
countries, although at a slower pace. Enrolment in Latin America and
the Caribbean grew from 8.4 million students in 1990 to 21.8 million
in 2008 (Brunner and Ferrada, 2011).

Diversification of provision
Diversification has different faces: the emergence of new institution
types, the multiplication of educational offerings within institutions,
the expansion of private provision and the introduction of new modes
of delivery. Among these, the growth of non-university sectors is recog-
nised by OECD as one of the most significant structural changes in
recent times. This in part is the result of a more innovative response to
the increasingly diverse needs of the labour market, but is also the result
of regional development strategies for increasing access to tertiary edu-
cation, or for educating a larger proportion of students at a lower cost
through the introduction of short programmes. However, not all these
new programmes are offered in different institutions. In many cases,
provision is diversified within institutions; thus, traditional universi-
ties are expanding their range of programmes, including short cycle or
vocational programmes.
Private provision has also expanded, and some countries (such as
Korea, Japan or Chile) have more than 70 per cent of their students
enrolled in private institutions.
Finally, more flexible modes of delivery are emerging everywhere.
Distance learning, online delivery of standard courses in face-to-face
programmes, small seminars and interactive discussions, part-time
courses and module-based curricula, continuing education, and non-
degree courses are all new means for addressing the new needs and
demands of students and the labour market.

More heterogeneous student bodies


The expansion of the student body means not only more students, it
also means different students: age, gender, qualifications, cultural capital
and expectations are diverse and make it difficult for many tertiary
education institutions, used to dealing with traditional students (mostly
male, young, highly qualified and aspiring to an academic or profes-
sional career), to adjust to new needs and demands. A large proportion

of these students are the first generation in their families to reach tertiary
education, and the lack of social networks to support them also poses
new challenges for tertiary education institutions. These students have
different learning needs, which means new curricular and pedagogical
requirements, as well as different learning environments, which must
take into account the different perspective these students bring to their
educational experience.

New funding arrangements


Increasing demands for public funding from multiple sectors (health,
environment, primary and secondary education, and others) make
it necessary to prioritise the allocation of resources and reduce the
amount that governments are willing to dedicate to higher education.
In this context, public funding tends to be linked to policy objectives,
through programme-based targeted funding, competitive bidding or
performance-based funding. In many cases, the allocation of public
funding is linked to indicators of effectiveness or efficiency, or to the
outcomes of self assessment and external review processes. At the
same time, many countries are increasing the proportion of resources
allocated for student aid programmes, through grants and increasingly
through repayable loans.
The need for new sources of funding means a significant increase
in the proportion of resources coming from private entities, primarily
through the introduction or increase of tuition fees and also through
the commercialisation of research and institutional facilities or staff.

Increasing focus on accountability and performance


Most countries have witnessed the development of formal quality
assurance systems. The decline in public credibility of higher education
identified by Peter Ewell (CHEA, 2008) as a significant change in the
environment of higher education is not just a US occurrence, but some-
thing that is apparent in many countries and that is closely linked to
external mechanisms for accountability. The expansion of tertiary edu-
cation systems, their increased diversification, the need to legitimise the
use of public funds and increased market pressures are all factors that
subject higher education to close scrutiny. Its quality, effectiveness and
efficiency are no longer taken for granted, but must be demonstrated
and verified.
Of course, concerns for quality as well as the quality assurance of
higher education by state authorities, institutional leaders or higher
education institutions themselves are by no means new practices,

but they were traditionally restricted to within the higher education


system itself. What is new is the social relevance granted to the quality
of higher education and, therefore, the need for higher education insti-
tutions to find new partners and develop links with the social and productive
environment, to be able to identify and find answers that are relevant
vis-a-vis societal needs.

New forms of institutional governance


The OECD report recognises changes in the leadership of tertiary
education institutions as one of the significant developments: the need for
improved management and for a clear demonstration that institutions
effectively offer ‘value for money’ means that leaders are increasingly
seen as managers or entrepreneurs. While managerialism is strongly
criticised in some contexts, it seems unavoidable to develop govern-
ance schemes that increase the capacity of the institution to take into
account internal and external stakeholder needs, to develop new part-
nerships and find new sources of income, to enhance the prestige of
the institution and to be able to compete in an increasingly complex
sectoral context.

Global networking, mobility and collaboration


The increasing internationalisation of tertiary education has different
aspects worth mentioning. One of them is the mobility of academics,
students and professionals, which in turn is often related to the inter-
nationalisation of curricula, at least in some areas such as engineering,
business and management studies, information technology and bio-
technology. Another is the mobility of education itself, in the guise of
transnational or cross-border tertiary education, either in face-to-face
programmes or through e-learning mechanisms. Finally, international
collaboration and networking between institutions in different areas of
teaching and research is also a significant factor in the organisation of
tertiary education in many countries.
The main resulting feature of higher education nowadays is its
increased diversity: new providers, new modes of teaching, new
programmes, new governance strategies – all try to develop appropriate
and relevant responses to a wide range of needs from students and
employers. As such, diversity can contribute to increased social and
professional mobility, to innovative practices, and indeed, to quality.
However, diversity has also meant that higher education has turned
into an unknown quantity in many cases, and that the social trust in
the services it renders has been substantially eroded.

Several questions come to mind in this context:

• Diversity reduces national trust in higher education: How can we


identify reliable provision in a diverse system?
• Diversity should translate into different definitions of quality:
How can we move away from the strong pressure to look at higher
education from a traditional, academic-oriented perspective, and
begin to take into account a wide range of purposes, of ways to
relate to knowledge (both in its generation, its understanding and
its transmission), of student populations, of teaching and learning
conditions and objectives?
• A diverse student population demands new curricula, new teaching
practices, new methods for assessing learning: How can we contribute
to developing the skills needed to respond to these new demands?
How can we prepare academic staff for this new perspective?
• Increased and diversified access requires articulation: No education
level is final. How can we recognise prior learning in an effective and
efficient way?

These and other challenges have led governments to see the develop-
ment of quality assurance processes as a good solution. The question
is whether they will do it mostly through the establishment of strong
measures of regulation and control (in what has been called hard power)
or whether they will work with quality assurance agencies to promote
processes that while providing an adequate measure of accountability,
also promote institutional responsibility and self regulation (using a soft
power approach).

Quality assurance responses in Latin America

The above trends have important expressions in Latin America, as can


be seen in the Report on Higher Education in Iberoamerica published
by CINDA (Brunner and Ferrada, 2011). The previous edition of the same
report (CINDA, 2007) had already highlighted the role of higher education,
particularly with regard to the development of human capital and its
impact on national growth and competitiveness; increased opportuni-
ties for insertion in the labour market and the promotion of social
mobility; and strengthening of the institutions that make democratic
governability and national development possible.
Latin American responses to these challenges have been varied, but
they can be classified into three groups: the development of national
quality assurance systems, the search for sub-regional arrangements and
the establishment of a regional network.

National quality assurance systems


National quality assurance systems were established in many Latin
American countries during the nineties. In most cases, they were
promoted by the government, mostly as a way to regulate the estab-
lishment of private higher education institutions, but developed in
different ways, according to the needs of the respective national higher
education systems.
In general, most Latin American quality assurance systems deal
with different purposes: quality control, usually through licensing
mechanisms; accountability, through accreditation of programmes or
institutions; and improvement, either through the approach used in
accrediting processes or through more audit-like external reviews. Some
countries work towards these different purposes with one agency, deal-
ing with the different types of assessment, while others have separated
these functions across several agencies. There are agencies that belong
to the government while others are public and nongovernmental; still
others are private entities and in a few cases, they are owned by the
higher education institutions themselves.
Table 11.1 gives a general overview of the quality assurance arrange-
ments in place.

Sub-regional arrangements
It is interesting to note that there are two significant sub-regional
arrangements that seek to harmonise standards and procedures for qual-
ity assurance: that of the countries grouped under MERCOSUR – the
common market of the South – and the initiatives in Central America.

ARCU-SUR, an international agreement


In 1998, the Ministers of Education of Argentina, Brazil, Paraguay and
Uruguay (the four member countries of MERCOSUR), plus Bolivia and
Chile as observers, signed a Memorandum of Understanding that led to
the organisation of a regional accreditation mechanism. The main pur-
poses of this initiative were to improve the quality of education in the
region, contribute to the recognition of qualifications among countries
and to student and professional mobility and, in general, to promote
and support regional integration.
After a long period of negotiations and discussion, which led to
the definition of shared quality standards for a selected group of
Table 11.1 Quality assurance mechanisms in Latin American countries

Country | Agency | Purpose/focus | Ownership | Degree of consolidation
Argentina | CONEAU | Licensing of new institutions and programmes (C); programme accreditation (C); institutional assessment (V) | Government | Strong
Bolivia | CEUB | Assessment of public universities (V) | Institutional | Weak
Bolivia | Ministry of Education | Licensing and review of private universities; certification of degrees granted by private universities (C) | Government | Strong
Brazil | SINAES | Review of programmes and institutions (V); examination of HE students | Public | Medium
Brazil | CAPES | Review of graduate programmes (C) | Public | Strong
Chile | CNED | Licensing of private higher education institutions (C) | Public | Strong
Chile | CNA | Institutional accreditation (V); authorisation of accrediting agencies (C) | Public | Medium (being revised)
Chile | Specialised agencies | Programme accreditation (V*) | Private | Medium
Colombia | CONACES | Evaluation of threshold standards for programmes (C) | Government | Strong
Colombia | CNA | Programme and institutional accreditation | Public | Strong
Colombia | ICFES | Admission and exit examination of students (C) | Government | Strong
Costa Rica | SINAES | Programme accreditation for member institutions (all the public HEIs and some private ones) (V) | Institutional | Strong
Ecuador | CEAACES | Institutional accreditation (C) | Government | Medium (recently re-established)
Mexico | CIEES | Programme reviews | Public | Strong
Mexico | COPAES | Authorisation of accrediting agencies | Public | Strong
Mexico | Specialised agencies | Programme accreditation | Private | Strong
Mexico | CENEVAL | Admission and exit examination of students | Public | Strong
Panama | CONEAUPA | Institutional and programme accreditation (V) | Public | Medium
Paraguay | ANEAES | Programme accreditation (V) | Public | Medium (being revised)
Uruguay | Ministry of Education | Licensing of private higher education institutions | Government | Strong

Note: C: Compulsory; V: Voluntary; V*: Voluntary, except for teacher training and medicine.
Source: Lemaitre and Mena, 2012.

programmes,1 an experimental arrangement was implemented in 2002.


This arrangement, focusing on Medicine, Engineering and Agronomy
programmes, was to be applied by the national agencies in each
country. Agencies agreed to adjust their process to a set of common
procedures, that is, to base their decisions on the MERCOSUR standards,
ask programmes for a self assessment report carried out in accordance
with an approved manual, and to carry out an external review visit with
a team trained in the MERCOSUR standards and procedures. This team
had to include two members from MERCOSUR countries other than
the host country, who had the additional function of ensuring that the
agreed upon standards and procedures had been followed.
The operation of this mechanism was tested in the six countries; after
adjusting both standards and procedures in accordance with the lessons
learned during the test, ARCU-Sur was formally established as a regional
accreditation mechanism in 2006. The agreement stated that all accredita-
tion decisions made following this process would be recognised by all the
participating countries,2 and that degrees from accredited programmes
would enjoy automatic recognition, albeit only for academic purposes.
The end result has not been too encouraging in terms of mobility or
recognition of qualifications. Intraregional mobility is not a priority for
Latin American countries (only Argentina and Chile are significant des-
tinations for Latin American students), and this agreement – signed by
the Ministers of Education – carried little weight with universities which
operate with almost total autonomy from their respective governments.
However, the process had a strong impact on the strengthening of
national agencies, and the development of harmonised standards in the
region – a not negligible effect.

Central America
The Council for Higher Education in Central America (CSUCA), which
encompasses the public universities of the region, began work on the
development of a region-wide assessment process for higher education
with the support of German cooperation agencies in 1995. This process
developed basic standards for university programmes, trained hundreds
of academic staff and external reviewers, and is currently operating
mostly for the public universities in the region. Its main contribution
has been the introduction of a continuing concern for quality and its
regular assessment, and has been the basis for many further develop-
ments in the area of quality assurance.
Following this experience, in 2003 the Central American Council for
Accreditation (CCA) was established, with the dual role of assessing

quality assurance agencies in the region and promoting quality assurance


initiatives. It brings together governments, public and private higher
education institutions and professional associations, and has had a
strong role in the development of assessment practices and capacity
building.
In addition, specialised agencies with a regional focus have been
established: ACAI, for Engineering and Architecture; ACESAR for food
and agricultural programmes; ACAP, for graduate programmes, and
ACAFEM, which is in the process of developing standards for Medical
Studies.

The establishment of a regional network


In 2003, national quality assurance agencies, representatives from
governments involved in higher education and several university
associations decided to set up a regional network dedicated to the pro-
motion of quality assurance. This involved 18 Latin American countries
plus Spain, and evolved into the Iberoamerican Network for Quality
Assurance in Higher Education (RIACES).
Its main purposes were as follows:

• To promote the development of quality assurance agencies and of


assessment and accreditation processes in member countries;
• To support the development of accreditation mechanisms leading to
the recognition of degrees, student, academic staff and professional
mobility and to the educational integration of the region;
• To involve national and international cooperation organisations in
the promotion of quality assurance systems; and
• To analyse the development of higher education in order to make
quality assurance an effective tool for the continuing improvement
of higher education institutions.

After almost a decade of operation, RIACES has made it possible to


develop a regional community for quality assurance, and has contributed
to capacity building at different levels: existing and emerging agencies,
academic staff within higher education institutions, reviewers and
policy-makers. It has supported the establishment of new quality assu-
rance mechanisms in several countries, and has worked with national
agencies towards the harmonisation of standards and procedures for the
assessment of undergraduate and graduate programmes.
It also has established strong links with INQAAHE, the global quality
assurance network; this association makes it possible to approach quality

assurance from a double perspective – that of national and regional


priorities and concerns, within a shared cultural context, and the global
one, which helps put national and regional work in a much wider
context – and to work together with other regions of the world in
matters of common interest.

Impact at the university level: report from an international study

Frame of reference for the project


Latin American countries have made a strong effort to develop and
implement quality assurance systems. As seen above, most countries
have worked on it, accredited a number of programmes and institu-
tions, trained a large number of people, both as participants in self
assessment exercises and as external reviewers, and invested significant
amounts of money and energy in the process. It became unavoidable to
ask whether this effort had been worthwhile, and if it was possible to
identify specific areas of impact.
An international project, involving 25 universities from 17 countries
in Latin America and Europe, and funded by the European Union
through its ALFA programme, studied perceptions about the impact of
quality assurance in seven countries: Argentina, Chile, Colombia, Costa
Rica, Mexico, Spain and Portugal (Lemaitre and Zenteno, 2012).3
It was clear from the beginning that it would be impossible to isolate
the effect of quality assurance from other policies and actions; therefore,
it was decided to ask university internal and external stakeholders about
the changes in the higher education system and within institutions,
and whether these changes could be linked to the quality assurance
processes in place. Therefore, the study was mostly of the perceptions
of a wide range of stakeholders, mostly internal, but with some inputs
from external stakeholders as well.
One of the first tasks of the project was to agree on how quality was
going to be defined. The operational definition all members agreed
on was to look at quality from a double perspective. The first was
that of external consistency, that is, the capacity of an institution to
respond to the demands of the relevant external environment: the
academic, disciplinary and professional requirements linked to the
programmes being offered; socio-economic considerations (norms,
characteristics of the students, priority areas for national development);
and local and global labour markets. The second perspective deals with

internal consistency, or the ability of the university to recognise its


institutional principles and priorities, and to organise its work (including
the selection of relevant external demands) accordingly, and to respond
effectively to them. In practice, this means putting together the clas-
sic definition of fitness for purpose with the requirement of fitness of
purpose.
Figure 11.1 represents graphically the way in which a university’s
actors, resources and processes are linked in response to internal and
external requirements:

[Figure 11.1, adapted from J.R. Toro, is presented as a diagram in the original. It depicts the institutional mission and vision framing institutional management and decision-making processes, academic staff, students and resources (internal consistency), which feed academic processes and academic products; these respond to the academic field (disciplinary/professional), the socio-economic environment and the local and global labour market (external consistency).]

Figure 11.1 An operational definition for quality in higher education

The project focused mostly on the institutional level, but it also


gathered information related to the higher education system. At the
institutional level, two main dimensions were considered: institutional
management and the teaching and learning process.
Table 11.2 shows the process used for gathering data.

Results for Latin America


This chapter focuses on the results from Latin America, since quality
assurance development and implementation in Europe was mostly an
effort to adjust to the requirements of the Bologna agreement, with
student and professional mobility as its main drivers.

Table 11.2 Respondents and data collection mechanisms

Respondents | Method for collecting information | Sample size
Governmental authority | Semi-structured interview | 1 per country = 5
Leader of national quality assurance agency | Semi-structured interview | 1 per country = 5
Academic Vice Rector | Semi-structured interview | 1 per university = 22
Director of Planning | Semi-structured interview | 1 per university = 22
Dean or Leader of a Faculty | Semi-structured interview | 1 per university = 22
Head of Department | Semi-structured interview | 1 per university = 22
Head of quality assurance at the university | Semi-structured interview | 1 per university = 22
Academic staff | Focus groups | 2 focus groups per university, 6–10 participants in each = 350
Leaders from professional associations | Semi-structured interview | Representatives from 4–5 professional associations per country = 23
Students in 3rd–8th semester | Questionnaire | 100 per university = 2,200
Graduates | Online questionnaire | 50 per university = 1,100

A short analysis by respondent


In all countries, governmental authorities recognise the importance of
quality assurance, but have little information about its features or its
potential.
Within institutions, there are significant differences between the views of
the institutional leaders (mostly academic vice rectors and directors of plan-
ning) and those at a lower level (heads of department, academic staff
and students). Deans provide a very interesting link between the two
groups, with a clearer insight into the changes both at the managerial
and the teaching and learning levels. The institutional leaders have a
positive view of internal quality assurance, but while they recognise that
there have been significant changes, especially in institutional manage-
ment, they tend to associate them with their own work rather than

with public quality assurance policies. They do not see any significant
improvements in teaching and learning, and in many cases they link
quality assurance practices to restrictions to innovation and complain
about a lack of consideration of institutional differentiation.
Leaders at the faculty and programme level, academic staff and
students, on the other hand, value highly the norms and practices of
quality assurance. They associate them with increased achievement of
stated goals and improvement in the quality of the service rendered by
the institution. In a striking contrast with the views of academic vice
rectors, they report significant improvements in the teaching and learn-
ing process.

Analysis by dimension
Impact on the higher education system
All respondents recognised formal
quality assurance processes as significant regulatory mechanisms;
although in several countries, participation in accreditation is volun-
tary, it is encouraged through the use of different incentives (access to
public resources, restrictions to public employment of graduates from
non accredited programmes). At the same time, many respondents
emphasised that incentives must be followed closely to reduce the risk
of unanticipated effects.
There is a clear consensus on the overall positive impact of quality
assurance in the development of higher education: an increased con-
cern about the development of valid and reliable information systems,
which still need to be fine-tuned in order to take into account the actual
needs of different stakeholders. The provision of public information
is seen by all respondents as the duty of the government, with little
consideration of the institutional responsibilities associated with this.
In part this may be related to the perceived risk of marketing and
publicity being presented as information, and to the need to regulate
the information provided to the public.
There is a strong criticism of the application of homogeneous
standards to different types of institutions, thus not considering the
importance of different goals and objectives, functions, target groups or
general operational conditions. It is interesting to note that this criti-
cism is voiced simultaneously by public and private institutions.

Institutional management
Institutional leaders emphasised that the


need to adapt to quality assurance processes has led higher education
institutions to make important changes to their organisational structure.
These changes are mostly aligned with an effort to institutionalise the

design, control and planning of quality assurance processes, and have


resulted in many cases in an increased centralisation of decision-making
processes, which tend to be in the hands of professional managers.
Academic staff saw in this an increasing risk of bureaucratisation and
the emergence of a managerialistic approach that would reduce the
weight of academic considerations in university management.
At the same time, most respondents linked quality assurance
processes with an increased recognition of teaching as a key function
in universities; in many cases, this has begun to be translated into
the development of new evaluation and promotion mechanisms for
academic staff, although it also became clear that this is still in its very
initial stages.
Respondents at different decision-making levels reported that there
is a clear improvement of information systems within universities,
and that these are increasingly used as the underlying justification for
most decisions. In many cases, the assumption was that more infor-
mation would necessarily translate into better management, but this
seems only to be true if there is a clear understanding of the type of
information that is truly relevant for decision-makers at the different
institutional levels. In fact, academic staff complained that gathering
and providing the required data was a very heavy load that fell mostly
to them, and that, with few exceptions, the processed information did
not find its way back to them. This suggests that there is a need to
identify what information is really needed to improve decision-making
processes, and the kind of processing and analysis that is necessary for
it to be useful to different levels of decision-makers.
Finally, most respondents mentioned that quality assurance had trig-
gered changes in the criteria and practices for hiring academic staff; while
these were an improvement over previous practices, they mostly focused
on formal aspects (such as the level of qualifications, in response to the
use of quantitative indicators) rather than on more substantive elements
that were better linked to the actual quality of teaching or research.

Teaching and learning
Academic staff reported significant changes in


the formulation of expected learning outcomes, curriculum design and
updating of study plans as a result of participation in quality assurance
processes. The increased availability of information on student progress
and follow up of graduates – directly related to the demand for effective-
ness in quality assurance standards and procedures – has contributed to
a stronger concern about the relevance of programmes and of the need
to improve teaching strategies, methodologies and practices.

Internal stakeholders, particularly those directly involved in the


teaching process – heads of department, lecturers, students – greatly
valued the focus that quality assurance processes place on teaching
and learning, but considered that they tend to focus mostly on formal
aspects and indicators, without paying attention to more substantive
issues, or to have a strongly prescriptive focus, emphasising certain
practices that may not be adequate in specific contexts.
With regard to teaching strategies and practices, academic staff saw
a direct influence from quality assurance in basic changes, such as
improved reading lists and materials, or updated and better teaching
resources. At the same time, they reported that the need to review
teaching and learning within the context of programme accreditation
has contributed to the introduction of innovative practices, such as the
use of ICT, competency-based teaching and new assessment methods
(even though they recognise that changes in the assessment of learning
are still more of an expected outcome than actual practice).

Lessons from the study

Among the results of the study are a number of insights that can be
useful in the future development of quality assurance arrangements, for
governments, higher education institutions and quality assurance agen-
cies. These are summarised in the following paragraphs.

For system-wide policy-making


Quality assurance is considered an important policy instrument, and
as such, it should be aligned with other policy instruments (such as
other regulatory measures or funding arrangements). In most countries,
quality assurance is voluntary, and promoted through incentives; their
operation, however, should be closely monitored, in order to avoid
unanticipated effects (which were apparent in several of the countries
studied).
It was interesting to note that the focus of quality assurance produces
quite different results. Institutional accreditation seems mostly linked
to governance or managerial changes; while many of the respondents
tended to place the origin of managerial changes in their own policy
decisions or actions, it seems highly likely that they were induced by
the institutional accreditation processes in place. Changes in teach-
ing and learning, on the other hand, seem mostly associated with
programme accreditation; this suggests that a focus on programmes or
at least some way to involve academic staff, students and graduates in

assessment processes is an important component of both feasible and


visible improvements.
The insistence on the responsibility of the government to provide
valid, relevant and reliable information to a wide range of stakeholders
must also be taken into account, especially in those systems where the
market plays a substantive role and information may be confused with
marketing or publicity strategies.

For universities
Information about institutional operation is important for improved
management. At the same time, there is a significant amount of work
involved in gathering, processing and updating data. Therefore, it seems
important to determine the kind of information needed to support
decision-making at the different institutional levels, and the cost-benefit
of its provision.
Improved management also includes the development of internal
quality management mechanisms: these involve linking assessment
results with programme and institutional planning, as well as embed-
ding the assessment of inputs, processes and outcomes into regular
institutional practice. In doing this, it becomes easier to recognise that
quality is a shared responsibility, and to involve internal stakeholders in
quality improvement processes.

For quality assurance agencies


Lessons for quality assurance agencies can be organised into three areas:
procedures, external reviewers and standards.
Regarding procedures, a clear message from the study is the risk of
an increasingly bureaucratic and formal approach from most quality
assurance agencies; this makes it harder for universities to see qua-
lity assurance as an improvement-oriented mechanism, emphasising
its controlling and regulatory aspects. In part, this could be improved
through the establishment of better opportunities for participation of
higher education institutions and their academic staff in the revision
of norms, criteria and procedures. Assessment processes and decisions
would then be more easily accepted and legitimised, and standards and
procedures could be made more relevant and fine-tuned to the actual
operation of higher education institutions.
External reviewers are one of the more complex aspects of quality
assurance. While there is a recognition that in many cases they are
highly qualified, complaints point to biases, lack of academic judgement,
or inconsistent reports. Clear selection criteria, good training processes

and the evaluation of the performance of external reviewers, as well as


the development of useful and effective manuals and guidelines, are
important, and should become a priority for quality assurance agencies.
In terms of standards, beyond what has already been said about the
need to consider institutional diversity, standards for the assessment of
teaching and learning should move away from formal and procedural
indicators, and focus on more substantive issues, taking into account
the different components of the teaching process. In many cases, it
was reported that agencies tended to prescribe specific actions, such as
curricular approaches, which were not necessarily aligned with institu-
tional priorities.

Final comments

Latin American quality assurance schemes have been in place for two
decades, and have developed to respond to the needs of national higher
education systems, with different and relevant models. The experience
gathered during this time has been shared through the work of the
regional quality assurance network, RIACES, which has provided very
important opportunities for shared learning, for capacity building, and
most of all, for the development of a quality assurance community in
the region.
The overall view of universities in those countries with longer experi-
ence and more consolidated quality assurance processes is that these
processes have been effective, and have contributed significantly to the
recognition and improvement of increasingly complex and diversified
higher education systems.
At the same time, it is clear that the growth and development of
higher education, increased enrolment and institutional differentiation
pose new challenges that must be addressed by institutions and taken
into account in the revision of quality assurance processes.
The study that has been briefly reported in this chapter points to
significant lessons that can contribute to improved policy-making at
the national level; to changes in higher education institutions, both in
terms of new managerial arrangements and in teaching and learning
practices; and, most of all, to the need for updated and revised stand-
ards, procedures and practices of quality assurance agencies.
Higher education is a dynamic system – it cannot be served well
by quality assurance processes which are not ready to learn (and to
unlearn), to adapt and adjust to the changing needs of students, institu-
tions and society.

Notes
1. ARCU-Sur added Veterinary Medicine, Dentistry, Nursing and Architecture to
the three initial programmes.
2. In addition to the original six countries, Venezuela, Peru, Ecuador and
Colombia were also included in ARCU-Sur.
3. The full report (in Spanish) can be found at www.cinda.cl: Aseguramiento de
la Calidad en Iberoamerica, Educacion Superior Informe 2012. A summary in
English can be obtained from cinda@cinda.cl.
4. The sample included four universities in each country, except Mexico, where
six universities were included.

References
Brunner, J.J. and Ferrada, R. (2011) Educación Superior en Iberoamérica. Informe
2011 (Santiago de Chile: CINDA).
CINDA (2007) Educación Superior en Iberoamérica. Informe 2007 (Santiago de Chile:
CINDA).
Ewell, P.T. (2008) U.S. Accreditation and the Future of Quality Assurance: A CHEA
Tenth Anniversary Report (Washington, DC: Council for Higher Education
Accreditation).
Lemaitre, M.J. and Mena, R. (2012) ‘Aseguramiento de la calidad en América
Latina: tendencias y desafíos’, in M.J. Lemaitre and M.E. Zenteno (eds),
Aseguramiento de la Calidad en Iberoamérica. Informe 2012 (Santiago de Chile:
CINDA).
Lemaitre, M.J. and Zenteno, M.E. (eds) (2012) Aseguramiento de la Calidad en
Iberoamérica. Informe 2012 (Santiago de Chile: CINDA).
OECD (2008) Tertiary Education for the Knowledge Society, Vol. 1 (Paris: OECD).
Part IV
Quality Assurance: The Actors’
Perspectives on Recent Trends
12
The Academic Constituency
Maria João Rosa

Introduction1

Academics are at the heart of higher education, as key actors in the


assurance and promotion of the quality of its institutions and systems.
Furthermore academics’ support of and commitment towards quality
assurance systems, mechanisms and models tends to assume a major role
in their successful implementation. This chapter explores academics’
perspectives on higher education quality assurance using the answers
given by a sample of Portuguese academics to a questionnaire designed
to investigate their degree of support towards quality assessment goals
and purposes.2
Contemporary debates on higher education (such as those presented
in this book) point to several transformations in the way higher educa-
tion systems and institutions are governed and managed, as well as in
the role of academics. The literature shows a decline of trust in public
institutions, including higher education institutions (HEIs), as well
as in professionals (Amaral, Rosa and Tavares, 2009). The emergence
of new public management (NPM) and the attacks on the efficiency
of public services have contributed to this loss of trust (Trow, 1996)
and created a demand for more accountability. Furthermore, the mas-
sification of higher education has created a large heterogeneity in the
quality of both students and professors, as well as the emergence of
new institutional forms very different from the elite university (Trow,
1996), also contributing to decreasing trust. Academics have been
facing an erosion of their relative class and status advantages (Halsey,
1992), being progressively pushed from a position of professionals into
one of employees, the new professionals being the managers, whether
academics or not.


This loss of trust has had obvious consequences for quality assurance.
Government and society no longer seem to trust HEIs’ capacity to
ensure adequate standards of quality, as seen in the movement from less
intrusive forms of quality assurance to accreditation (Amaral and Rosa,
2011). This can be seen as corresponding ‘to a change from a cycle of
trust and confidence in institutions into a cycle of suspicion’ (Rosa and
Amaral, 2012, p. 114).
Two other developments have contributed to the implementation of
‘harder’ forms of quality assurance. On one hand, the increasing use
by governments of market-like mechanisms as instruments of public
regulation (Dill et al., 2004) implies resorting to tools such as licensing,
accreditation and the public disclosure of the results of quality assess-
ment for consumer information (Smith, 2000). And at the level of the
European Commission steps are being taken to promote the develop-
ment of rankings and classification tools that are quite removed from
the academic endeavour. It remains to be seen if the quality enhance-
ment movement, another recent development taking place in European
higher education, which intends to devolve to HEIs the responsibility for
promoting education quality, will be capable of re-establishing societal
trust in institutions (Rosa and Amaral, 2012).
In Portugal all these trends are evident when one analyses the history
of quality assurance, which can be summarised in two major phases
(Rosa and Sarrico, 2012). The first (1993–2005) is marked by study
programmes’ assessment, mainly oriented towards quality improvement,
under the responsibility of entities representing HEIs (public and private
universities and polytechnics). An umbrella organisation – the Higher
Education Assessment National Council – coordinated the national
quality system and cooperated with those entities, being responsible
for the system’s meta-evaluation. The second phase, initiated in 2006
under the influence of European developments (namely, the Bologna
Declaration and compliance with the Standards and Guidelines for
Quality Assurance in the European Higher Education Area – ESG), is
characterised by the establishment of a system of assessing and accred-
iting study programmes and institutions (Law 38/2007) and of a new
and independent body for its coordination – the Higher Education
Assessment and Accreditation Agency.
The new system has been in operation since 2009 and accredita-
tion assumes a leading role within it as a way of ensuring that study
programmes and institutions accomplish the minimum requirements
for their official recognition. The new legal framework for quality
evaluation and accreditation also determines that institutions should

develop a quality assurance policy for their programmes, a culture of


quality and quality assurance for their activities and a strategy for their
quality continuous improvement. Most Portuguese HEIs have deve-
loped or are developing their own internal quality assurance systems
(Amaral, Rosa and Fonseca, 2013). Furthermore, institutional audits of
internal quality assurance systems are anticipated in the law, and a pilot
project has been started to conduct these audits in an experimental way.
Following Laughton (2003), one assumes that academics’ support for
quality assurance (namely regarding its goals and purposes) is essential
to its successful implementation and is a factor in the accuracy and
meaningfulness of the results achieved (Cardoso et al., 2013). In view
of the recent trends, it was decided to explore the general positions of
academics towards quality assessment, namely their receptivity to the
different possible goals and purposes it may have. A research project was
then designed (Perceptions of Higher Education Institutions and Academics
to Assessment and Accreditation) to investigate Portuguese academics’ per-
ceptions of quality assessment goals and purposes, in general, and the
recently established Portuguese quality assessment and accreditation
system, in particular. This chapter describes and discusses part of the
results of this research project, namely by trying to answer the following
research questions:

• Do Portuguese academics’ perceptions translate into a more resistant


or a more supportive position towards quality assessment objectives
and purposes?
• Are these perceptions homogeneous, that is, identical and shared by
all academics? Or do they differ according to disciplinary affiliation,
type of institution, gender and experience with QA activities?
• What factors do academics think are effectively promoting quality
in Portuguese HEIs?

One expects the analysis presented will contribute to the development


and design of quality assurance systems that academics are more willing
to support and that therefore have a greater chance of contributing to
the quality improvement of higher education systems and institutions.

Academics' resistance and adherence to quality assurance

Academics’ perspectives, attitudes and positions towards quality assur-


ance still constitute a relatively underdeveloped subject in the research

on this mechanism of State regulation (Lomas, 2007; Nasser and Fresko,


2002; Newton, 2000; Westerheijden et al., 2007). Nevertheless, it is
possible to argue that academics’ positions tend to translate into different
degrees of acceptance, support and adaptation to the idea, policies and
implementation procedures of quality assurance (Cartwright, 2007;
Newton, 2002; Watty, 2006; Westerheijden et al., 2007). In general,
such positions can be classified as ‘intransigent’ or minimalist (involved
to a minimum degree, almost as a burden and obligation); ‘colonised’
or ritualistic (involved as a routine, as something symbolic, with no
contestation); ‘converted’ (conforming, though enthusiastically; that is,
through conversion rather than mere compliance); ‘rational’ adaptation
(engaged with trying to follow procedures and obtaining gains from
this, directly or indirectly related to quality); ‘pragmatic’ scepticism
(sceptical but with adaptation, engagement oriented by a concrete
and procedural orientation); ‘sinking’ (confused but resigned about
the workload involved); ‘coping’ (having a strong sense of quality as a
burden but dealing well with it); and ‘reconstructing’ (assuming an
active role through the creation of new practices or a role within quality)
(Newton 2002).
Academics tend to see quality assessment and assurance as being
accountability-led as opposed to improvement-led (Newton, 2000),
and therefore ‘alien to the core values of academic culture’ such as self
and collegial accountability and self improvement (Laughton, 2003,
p. 317). They seem to be sceptical of both internal and external quality
assessments, since these processes tend to ‘generate reports but do not
engage with the heart of the academic endeavour’ (Harvey, 2009, p. 1).
Lomas (2007) and Watty (2006) argue that for academics quality is
mainly linked to assurance (that is, fitness for purpose and conformity
with external standards) rather than with enhancement and transfor-
mation (that is, dissemination of good practices having improvement
as an aim, namely at the teaching level). And for Papadimitriou and
colleagues (2008) academics see quality assurance emerging as an
instrument much more associated with the establishment of thresh-
old quality in higher education than with enabling institutions and
academics to go beyond such a threshold. Academics’ apprehension
towards quality assurance also seems related to its perceived impact.
Quality assurance is seen as capable of producing unintended conse-
quences for personal and organisational behaviour and as stimulating
inspection, regulation and standardisation more than addressing perti-
nent issues for academic staff (Laughton, 2003; Morley, 2005; Newton,
2002; Stensaker et al., 2011).

Academics also rail against the way they see quality assurance being
implemented, namely the administrative and cost burden they tend to
associate with it, as well as with the fact of it being time consuming
(Laughton, 2003; Lomas, 2007; Newton, 2010; Papadimitriou et al.,
2008; Stensaker, 2008; Stensaker et al., 2011). They complain about the
high bureaucracy involved in quality assurance, the lack of time to deal
with its requirements and, inherently, the diversion of their attention
from the really important aspects of academic life, namely teaching and
research (Luke, 1997; Harvey, 2006; Newton, 2002).
Quality assurance’s core values also tend to be resisted by academics,
inducing non-compliance or even ‘sabotage’ of the process. To some
extent this can be explained by academics’ perception of the process as
being based on imposition and prescription and, thus, clashing with the
values that characterise academic culture, namely academic freedom
(Laughton, 2003; Lomas, 2007; Luke, 1997). Quality assurance is seen
as trying to grasp the ‘academic world’ through the language and ideo-
logy of managerialism and its business ethos, undermining academics’
‘privileged position through a new form of regulation’ (Bell and Taylor,
2005; Laughton, 2003, p. 317). It is also seen as altering the traditional
relationship between academics, inducing a situation where they relate
to each other more as ‘managers’ and ‘managed’ than as colleagues
(Bell and Taylor, 2005). These perceptions, especially evident among
academics not performing management tasks, often lead to the adop-
tion of instrumental and ritual strategies (conforming behaviours) to
‘keep the system running’ rather than truly to engage in it (Cartwright,
2007; Newton, 2000, 2002; Watty, 2006).
Furthermore, academics tend to be dissatisfied with quality assurance
procedures. These are seen as not entirely reliable, reductionist and inca-
pable of grasping the ‘essence’ of the educational process (Cartwright,
2007; Laughton, 2003; Westerheijden et al., 2007). Although this nega-
tive perception of quality assurance tends to dissipate whenever there is
the ‘impression that education is valued and rewarded’, ‘few have this
impression’ (Westerheijden et al., 2007, p. 307). Additionally, academics
also tend to show a lack of agreement on quality assurance’s capability
to induce improvements in their immediate working environment
(Watty, 2006).
Finally, academics’ resistance relates to two other issues. On the one
hand is the possibility of quality assessment results not being truthful
(Laughton, 2003; Westerheijden et al., 2007), since they tend to be
inflated (including by academics) and, thus, artificially influential
on the quality of a basic unit or institution; on the other hand is the

possibility that those results lead to an elitist bias within the higher
education system, with a tendency for the richest and oldest universities
to achieve a better position (Laughton, 2003).
But academics also adhere to quality assurance. This is especially true
in the case of processes and procedures directed more at institutions
as a whole, which are seen as less ‘burdensome and intrusive’ than
those directed at academics’ overall performance (Laughton, 2003,
p. 319). Academics, especially those assuming managerial roles, also
tend to agree with accreditation, seeing it as providing the opportunity
for institutions to reflect on their mission and purpose and ‘to join an
elite club’ (Bell and Taylor, 2005, p. 248). This occurs because accredita-
tion results can have an impact on institutions’ social image, playing a
preponderant role in students’ demand for and choice of an institution
or study programme.
Quality assurance is also seen by academics as enabling the develop-
ment of teaching and learning quality (namely educational provision
and curricula), hence benefiting students, as well as academic work
and decision-making processes (Huusko and Ursin, 2010; Kleijnen
et al., 2011). The improvement prompted by quality assurance on these
areas seems to be related to the fact that it enables ‘fair institutions’ as
its procedures ‘can expose flaws’ in institutional practices ‘promoted by
nepotism, patronage and gendered sponsorship’ (Morley, 2005, p. 423).

Method and data

A research project was designed to understand the positions assumed


by academics towards quality assessment, in general, and the new
Portuguese quality assessment and accreditation system, in particular.
Part of the empirical data supporting the project derives from an online
questionnaire sent to all Portuguese academics, with the goal of collect-
ing these actors’ perceptions (Rosa et al., 2012; Cardoso et al., 2013).
The available data shows that Portugal, in 2009, had a population
of 36,215 academics (GPEARI/MCTES, 2010). Overall 41 per cent
belong to public universities, 28 per cent to public polytechnics,
19 per cent to private universities and 12 per cent to private polytechnics.
Furthermore, 56 per cent of Portuguese academics are male. The highest
percentage is found in the 40–49 age group (34 per cent), followed by
the 30–39 (29 per cent) and 50–59 (23 per cent) groups.
A census was chosen as the data collection strategy. All Portuguese
HEIs (more precisely their rectors and presidents) were approached and
asked to distribute information on the research project, including a link

to the online questionnaire, among their academics, asking them to


answer it. In all 1,782 academics answered the questionnaire (a response
rate of around 5 per cent). The analysis of the characterisation data
(excluding missing data) reveals that the respondents’ sample comprises
academics from the university and polytechnic sectors, both public
and private, although with an overrepresentation of the public system
(90 per cent), especially public polytechnics (45 per cent), in relation to
the academic population in Portugal. Since the sample obtained was not
representative of the population of Portuguese academics it was decided
to refer only to the answers of academics belonging to the public sector,
with a random post-stratification weighting also conducted in order to
have a sample with proportions of polytechnic/university, male/female
and age groups equivalent to those in the population of Portuguese
public sector academics.
The sample used for the analysis consists then of the responses of
653 academics from public HEIs, which corresponds to 2.6 per cent
of all public sector academics. Although this percentage is low, posing
some limitations regarding the generalisations of results that can be
made to the overall public sector population, the size of the sample and
the detail of the data gathered provide a rich source for exploring how
Portuguese academics from the public sector perceive higher education
quality assessment.
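As an aside for readers interested in the mechanics of the weighting step, the short sketch below illustrates how post-stratification weights of the kind described above can be derived, so that a weighted sample reproduces known population proportions. It is not the project's actual procedure: the stratum labels, the counts, the population shares and the use of the pandas library are assumptions made purely for illustration.

# Illustrative sketch only: post-stratification weights chosen so that the
# weighted sample matches known population proportions for each stratum.
# Stratum labels, counts and shares below are invented for the example.
import pandas as pd

# Hypothetical sample of respondents, stratified by subsystem
sample = pd.DataFrame({
    "subsystem": ["university"] * 70 + ["polytechnic"] * 30,
})

# Assumed population shares for the same strata
population_share = {"university": 0.59, "polytechnic": 0.41}

# Weight per stratum = population share / sample share
sample_share = sample["subsystem"].value_counts(normalize=True)
weights = {s: population_share[s] / sample_share[s] for s in population_share}
sample["weight"] = sample["subsystem"].map(weights)

# The weighted proportions now reproduce the assumed population shares
weighted_share = sample.groupby("subsystem")["weight"].sum() / sample["weight"].sum()
print(weighted_share)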
In summary, the sample comprises 385 academics from the public
university subsystem (59 per cent) and 268 from the public polytechnic
subsystem (41 per cent). Additionally, it includes both female (40 per cent)
and male (60 per cent) academics, reflecting the characteristics of
the Portuguese public sector academic population. Furthermore, the
academics included in the sample belong to different scientific areas
(although mostly from engineering and technology (30 per cent) and
social sciences (25 per cent)). In addition, 25 per cent of respondents
have been involved in quality management activities in their institu-
tions. Table 12.1 presents the study sample characterisation.
One of the goals behind the questionnaire’s design was to investigate
academics’ perceptions of the intended goals and purposes of a quality
assessment system. Quality assessment goals have been defined as the aims
that can be accomplished through the implementation of a quality
assessment system, while quality assessment purposes assemble a set of
intentions that may lie behind the development of a quality assessment
system (Cardoso et al., 2013). In the questionnaire, a set of statements
tentatively operationalised such goals and purposes, based on
the results of a literature review (Veiga et al., 2013; Papadimitriou

Table 12.1 Sample characterisation (number of academics and percentage of academics)

Gender: Male 390 (60%); Female 263 (40%)
Type of higher education institution: Public university 385 (59%); Public polytechnic 268 (41%)
Experience in QA activities: Yes 161 (25%); No 488 (75%)
Disciplinary affiliation: Natural Sciences 86 (13%); Engineering and Technology 192 (30%); Medical and Health Sciences 94 (14%); Agricultural Sciences 44 (7%); Social Sciences 163 (25%); Humanities 69 (11%)

et al., 2008; Langfeldt et al., 2010). Academics were asked to signal
their degree of agreement with each one of the statements on a Likert-
type scale (from 1 ‘Totally disagree’ to 5 ‘Totally agree’). They could
also choose the option ‘I do not know’. Descriptive statistics and
hypothesis tests (Student's t-tests and Kruskal-Wallis non-parametric
tests at a 0.05 significance level) were used to analyse the answers,
making it possible to investigate academics' support for the different
goals and purposes quality assessment may have, as well as to identify
statistically significant differences between different
groups of academics.
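As an illustration of the kind of group comparison just described, the sketch below runs an independent-samples t-test (two groups) and a Kruskal-Wallis test (several groups) with scipy. The data frame, its columns and the ratings are hypothetical and are not the study's data.

import pandas as pd
from scipy import stats

# Hypothetical Likert answers (1-5) to one statement, with two of the grouping
# variables used in the chapter (gender and disciplinary area).
answers = pd.DataFrame({
    "gender": ["male", "female", "female", "male", "male", "female", "male", "female"],
    "area": ["Natural Sciences", "Humanities", "Social Sciences", "Humanities",
             "Natural Sciences", "Social Sciences", "Humanities", "Natural Sciences"],
    "rating": [3, 5, 4, 4, 2, 3, 5, 4],
})

# Two-group comparison (male vs female) with an independent-samples t-test.
male = answers.loc[answers["gender"] == "male", "rating"]
female = answers.loc[answers["gender"] == "female", "rating"]
t_stat, t_p = stats.ttest_ind(male, female)

# Multi-group comparison (disciplinary areas) with the Kruskal-Wallis test.
groups = [g["rating"].values for _, g in answers.groupby("area")]
h_stat, kw_p = stats.kruskal(*groups)

alpha = 0.05  # significance level used in the chapter
print(f"t-test p = {t_p:.3f}; Kruskal-Wallis p = {kw_p:.3f}")
print("difference significant" if kw_p < alpha else "difference not significant")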
Additionally, academics’ answers to one open question on the fac-
tors promoting quality in their own HEIs were subject to a preliminary
content analysis, making it possible to identify a set of categories and
subcategories grouping these factors. The main goal was to go a step
further and let the academics express themselves about what in their
opinion really matters to the promotion of quality within HEIs.

Academics’ perspectives on quality assessment

The Portuguese academics’ perspectives on quality assessment are now
presented based on the answers they gave to the questionnaire. Overall
they tend to support both the proposed goals and purposes of higher
education quality assessment, although the degree of support is higher
when these are linked to quality improvement.

Overall perspectives on quality assessment goals and purposes


All five goals for higher education quality assessment are to some extent
supported (Table 12.2); nevertheless the continuous development of higher
education quality is the one that collects the highest agreement. In contrast,
the establishment of a system of penalties and rewards based on quality
assessment results collected the least support.
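As a simple check on how the summary statistics reported in Table 12.2 relate to the response distributions (assuming the means are computed on the same 1-to-5 coding over the N valid answers), the mean for the most supported goal can be recovered, up to rounding, directly from its percentages:

\[
\bar{x} \approx 0.03 \times 1 + 0.05 \times 2 + 0.11 \times 3 + 0.26 \times 4 + 0.55 \times 5 \approx 4.3
\]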
Academics also tend to agree or totally agree with all possible pur-
poses for higher education quality assessment, although the level of
agreement is higher when they are linked to HEIs’ and academics’
improvement (Table 12.3).
The different purposes suggested to academics have been tentatively
classified into five different types, according to the typology proposed
by Rosa et al. (2012): improvement, innovation, communication, motivation
and control (column two in Table 12.3 presents this classification). Since
purposes are presented in descending order of agreement by academics
(higher to lower) in the table it is evident that support tends to be

Table 12.2 Academics' perceptions of different higher education quality assessment goals

Assessing the quality of higher education has the following goal (each row gives N; (1); (2); (3); (4); (5); Mean; Median):

The continuous development of higher education quality: 639; 3%; 5%; 11%; 26%; 55%; 4.3; 5
Accountability towards society regarding higher education quality: 629; 7%; 10%; 24%; 31%; 28%; 3.6; 4
The development of innovations generated within HEIs: 610; 10%; 11%; 27%; 29%; 23%; 3.4; 4
The higher education system's adequacy to European rules: 585; 8%; 14%; 32%; 28%; 18%; 3.3; 3
The establishment of a system of penalties and rewards based on quality assessment results: 613; 18%; 19%; 27%; 22%; 14%; 2.9; 3

Note: N – number of answers. Answers collected on a five-point scale: (1) totally disagree; (3) neutral; (5) totally agree.

Table 12.3 Academics' perceptions of different higher education quality assessment purposes

Higher education quality assessment should (each row gives TP; N; (1); (2); (3); (4); (5); Mean; Median):

Allow the academic community to know and reflect on the institution's quality so strategies to improve it can be defined: I; 646; 1%; 3%; 12%; 31%; 51%; 4.3; 5
Allow HEI governance bodies to promote continuous quality improvement processes for teaching and learning: I; 650; 2%; 4%; 13%; 32%; 49%; 4.2; 4
Contribute to the collective and shared identification of the institution's strengths and weaknesses: I; 651; 2%; 3%; 12%; 38%; 43%; 4.2; 4
Promote the creation of quality assurance internal systems: I; 642; 2%; 3%; 15%; 37%; 42%; 4.2; 4
Improve the links between teaching and research: I; 651; 4%; 5%; 14%; 33%; 42%; 4.1; 4
Increase academics' involvement in teaching and learning: M; 650; 3%; 6%; 16%; 32%; 42%; 4.1; 4
Favour the development of academics' individual skills: I; 649; 2%; 4%; 14%; 38%; 41%; 4.1; 4
Provide students with information on the quality of teaching and learning so they can make choices: Com; 645; 4%; 4%; 15%; 36%; 40%; 4.1; 4
Allow governance bodies to have information on the HEI's quality so they can take decisions: Com; 646; 2%; 3%; 19%; 35%; 39%; 4.1; 4
Promote the improvement of student support systems: I; 649; 3%; 5%; 15%; 38%; 38%; 4.1; 4
Publicly assure the accountability of a higher education system: Com; 647; 4%; 5%; 18%; 37%; 35%; 4.0; 4
Allow governance bodies to promote policies for the development of new teaching and learning practices: In; 644; 3%; 5%; 20%; 36%; 35%; 4.0; 4
Contribute to the definition of new routines and procedures: In; 647; 3%; 5%; 23%; 38%; 30%; 3.9; 4
Facilitate the adoption of new methodologies for teaching and learning: In; 649; 5%; 5%; 19%; 36%; 34%; 3.9; 4
Encourage institutions to be concerned with their reputation or social image: M; 649; 5%; 5%; 20%; 33%; 37%; 3.9; 4
Contribute to the convergence of teaching, research and management processes and practices: In; 647; 5%; 7%; 18%; 36%; 33%; 3.9; 4
Reward academics' innovative practices: In; 647; 5%; 5%; 18%; 33%; 34%; 3.9; 4
Allow for the closure of study programmes that have no quality, based on their non-accreditation: C; 650; 5%; 11%; 20%; 23%; 38%; 3.8; 4
Provide information about the institution to an external entity, for accreditation purposes: C; 649; 4%; 7%; 23%; 36%; 27%; 3.8; 4
Allow governance bodies to define reward policies for good practice: M; 646; 6%; 7%; 25%; 35%; 24%; 3.7; 4
Increase students' involvement in teaching and learning issues: M; 643; 7%; 10%; 23%; 29%; 29%; 3.7; 4
Allow governance bodies to allocate resources, based on quality assessment results: M; 646; 7%; 11%; 25%; 33%; 22%; 3.5; 4
Promote the existence of control mechanisms of the performance of academics: C; 648; 9%; 11%; 27%; 32%; 21%; 3.5; 4
Have effects on the HEI's criteria for academic recruitment and promotion: C; 644; 8%; 9%; 26%; 33%; 19%; 3.5; 4
Allow the government to allocate resources to institutions based on assessment results: M; 648; 10%; 12%; 29%; 30%; 18%; 3.3; 3
Allow governance bodies to define sanctioning policies for inadequate practice: M; 647; 10%; 17%; 29%; 25%; 17%; 3.3; 3
Have effects on the HEI's criteria for student selection: C; 645; 10%; 12%; 29%; 26%; 15%; 3.3; 3
Promote cooperation between academic and non-academic staff: M; 647; 11%; 17%; 28%; 24%; 16%; 3.2; 3
Have effects on the HEI's criteria for non-academic staff recruitment and promotion: C; 645; 11%; 14%; 27%; 28%; 14%; 3.2; 3
Provide the State with instruments to control the higher education network: C; 648; 14%; 15%; 30%; 26%; 14%; 3.1; 3

Note: TP – type of purpose (C – Control; Com – Communication; I – Improvement; In – Innovation; M – Motivation); N – number of answers. Answers collected on a five-point scale: (1) totally disagree; (3) neutral; (5) totally agree.

higher in the case of purposes that are classified as improvement and
lower in the case of purposes that are classified as motivation or control.
Analysis of Table 12.3 reveals that a significant percentage of academics
tend to ‘agree’, or ‘totally agree’ with all the statements that reflect
the improvement purpose. This purpose reflects the idea that quality
assessment can be a powerful driver for improvement and that it is often
enough to assess for improvements to follow (Rosa et al.,
2012). It seems that Portuguese academics tend to favour quality assess-
ment systems that lead to improvements in quality, translated into the
improvement of teaching and learning processes, the development of
their own skills and competences, or a better link between teaching and
research.
The communication purpose intends to put forward the idea of assess-
ment as a vehicle by which to inform academics or a department about
what the university requires of them or may be responsible for, as an
important means of strategy communication and implementation. At the
supra-institutional level this purpose is linked to transparency and trust,
emphasising the importance of communicating to the public and all
higher education stakeholders that the institution offers quality and
value for money (Rosa et al., 2012). This purpose is the second most
supported by our sample of academics, which leads us to conclude that
Portuguese academics agree with the need to develop mechanisms to
make transparent both to society and institutions the quality level of
higher education systems, their institutions and study programmes.
Regarding the innovation purpose there is a significant level of agreement
by academics on issues that operationalise it. According to Rosa and col-
leagues (2012), innovation differs from improvement essentially because
improvement has to do with taking what exists and developing it further,
while innovation looks beyond what is there and searches for something
new. The question then is whether the mechanisms for quality
assessment will promote or hinder innovation. In the case of Portuguese
academics it seems that they do believe quality assessment systems should
contribute to innovation, and search for things such as ‘new academic
practices, new methodologies for teaching and learning’ or new ways to
‘link teaching, research and management processes’ in HEIs.
The motivation purpose relates to the idea of quality assessment influ-
encing the behaviour of academics, which implies the need to have a
framework for quality assessment that encompasses a balanced mix of
criteria that supports the strategic intentions of the HEI (Rosa et al.,
2012). In the case of Portuguese academics, the different aspects covered
by this purpose are among the least supported, with mean scores
lower than those obtained for the improvement and communication
purposes and more similar to those of the control purpose. These results
raise the question of the possible limitations of a quality assessment
system for influencing the behaviour of academics, unless it promotes
what academics consider higher education core functions, namely
teaching and learning. In fact, the results show that academics are more
in favour of motivation propositions that are compatible with tradi-
tional academic norms and values, such as in the statement ‘increase
academics’ involvement in teaching and learning issues’. Interestingly,
the percentage of academics agreeing that quality assessment should
‘allow institutions’ management and governance bodies to define sanc-
tioning policies for inadequate practices’ is substantially lower than the
percentage of academics that agree with quality assessment allowing ‘the
institutions’ management and governance bodies to define rewarding
policies for good practices’. It seems that, as might be expected, the
‘carrot’ may be a better way to motivate academics than the ‘stick’.
The control purpose reflects the idea of quality assessment as a provider
of feedback so that measures can be taken (Rosa et al., 2012). For this
to happen a control loop is necessary, complete with measures, targets,
means of verification, feedback mechanisms and ways to take appropri-
ate action. The statements classified under this purpose received less
support, although they still collected some degree of agreement. In fact
‘Provide the State with instruments to control the higher education
network’ collected the lowest level of agreement of all. Furthermore,
fewer academics support the idea of quality assessment affecting an
‘HEI’s criteria for student selection or staff recruitment and promotion
(both academic and non-academic)’. Maybe this is due to the percep-
tion, by academics, that an increase in control will lead to the loss of
some traditional academic autonomy in favour of external regulatory
agencies. Not surprisingly, however, academics are less inclined to agree
with control mechanisms relating to individuals’ performance and
more accepting when it comes to institutional and degree programme
performance. The exception seems to be in relation to the definition
of the higher education network, most probably due to the existence,
in Portugal, of recent discussions about the possible need to close or
merge institutions. Figure 12.1 synthesises the results just presented.

Different academics’ characteristics, different perceptions


Some reported studies have shown that different groups of academ-
ics tend to hold different perceptions on quality assessment (Morley,
2005; Kleijnen et al., 2011; Papadimitriou et al., 2008; Rosa et al., 2006;

Improvement: all statements reflecting the improvement purpose are highly agreed upon by academics; academics favour QA systems that lead to the improvement of teaching and learning processes, the development of their own skills and competences, and a better link between teaching and research.
Communication: academics agree with the need to make more transparent, to both society and HEIs, the quality level of the HE system, its institutions and study programmes.
Innovation: academics believe QA systems should contribute to innovation in new academic practices, new teaching and learning methodologies, and new ways to link teaching, research and management.
Motivation: aspects covered by the motivation purpose are less supported (is it possible to design a QA system capable of influencing academics' behaviour?); academics are more in favour of motivation if it links with traditional norms and values.
Control: less support for control purposes, such as QA having effects on HEIs' criteria for student selection or staff recruitment and promotion, or QA as a way to provide the state with instruments to control the HE network.

Figure 12.1 Academics' overall perceptions of quality assurance purposes

Stensaker et al., 2011; Veiga et al., 2013). The same results were found
in the present study: it is possible to identify a number of quality
assessment objectives and purposes for which statistically significant
differences emerge between the answers of different groups of academics
(see Tables A.1 and A.2 in the appendix). These groups of academics
were defined according to their gender, type of institution, disciplinary
affiliation and previous experience with quality assurance activities.
Gender determines differences between responses to the goals and
purposes of higher education quality assessment for 2 out of the 5
proposed goals, as well as for 10 out of the 30 possible purposes. Most
of the questions for which differences have been identified relate to an
idea of quality assessment privileging the improvement and innovation
purposes and in all cases female academics tend to show a higher agree-
ment. Possible reasons for the identified differences may lie, as proposed
by Cardoso and colleagues (2013), in an essence of quality assurance
that somehow replicates the social gender of women as caregivers
(Morley, 2005); in the fact that women, who generally have less power
in academia than men, may use quality assurance as a way to enhance
their rights and power (Luke, 1997; Morley, 2005); and in an idea of
quality assurance as a process capable of promoting fairer institutions,
namely by helping solve potentially harmful practices and promoting
equity (Luke, 1997; Morley, 2005).
Differences are also detected between academics who belong to
different types of institutions. More positive positions are assumed by
academics from polytechnics (for two out of the five goals; and for nine
out of the 30 purposes). Only when they are asked about their degree of
agreement with quality assessment as having the purpose of ‘allowing
for the closure of study programmes that have no quality, based on their
non-accreditation’ do academics from universities give a more positive
perspective. Although it is difficult to identify patterns of response
in this case, one can note that these differences are mostly linked
to purposes of control and motivation. In this case the differences
might be explained by the fact that polytechnics are generally younger
institutions. In fact, Rosa and colleagues (2006), in a study of the institu-
tional consequences of quality assessment, concluded that rectors from
younger universities tended to be more optimistic about the Portuguese
quality system in place from 1993 to 2005 and its consequences than
those from older universities.
Disciplinary affiliation is the characteristic that determines most
differences among responses to the purposes of quality assessment
(16 out of 30). In the majority of these cases academics from Agricultural
Sciences, Humanities and Medical and Health Sciences present a higher
agreement. Lower agreement comes from academics belonging to
the Natural Sciences and Engineering and Technology. In the case
of those statements that reflect possible quality assessment goals,
only in the case of ‘the development of innovations generated within
higher education institutions’ was it possible to identify a statistically
significant difference between the academics’ answers, with those from
the Humanities showing a higher agreement. An explanation of why
disciplinary areas determine such a number of differences between the
responses of academics may be found in the works of Clark (1983) and
Becher and Trowler (2001). Clark (1983, p. 30) claims that ‘the disci-
pline rather than the institution tends to become the dominant force
in the working lives of academics’, while Becher and Trowler (2001,
p. 23) note that the disciplinary culture constitutes the framework that
lends ‘coherence and relative permanence to academics’ social practices,
values and attitudes’.
For seven out of the 30 purposes statistically significant differences
were found between academics with and without previous experience in
quality assurance activities. For six of them the higher agreement comes
from those academics with experience. Interestingly, however, academics
without experience tend to show a higher agreement level with the
quality assessment purpose ‘reward academics’ innovative practices’.
Although it was not possible to find previous research dealing with exactly
managers usually have a more positive view of quality assurance pro-
cesses and mechanisms. Since these are typically the institutional actors
more involved with quality assurance, it seems that our results make
sense: previous experience with quality assurance may contribute to a
more optimistic view of it.

What factors promote quality in HEIs?


Data was also collected on academics’ perceptions of the factors that
promote quality at the level of their basic units (schools and depart-
ments). A preliminary content analysis of the data was performed, and
the results can be systematised in three groups that express their nature
or type: institutional dynamic; institution’s mission; and institution’s
actors.

Institutional dynamic
This first group of factors that promote quality relates to aspects linked
to the HEI internal operation, including governance and management;
facilities, resources and services; and quality culture.
Governance and management includes factors related to the institu-
tion’s leadership, such as its legitimacy and the fact of having been
democratically elected, the capacity leaders and institutional managers
demonstrate for fulfilling their duties, or the institution’s internal
cohesion. Also referred to by academics were factors related to the insti-
tution’s governance system, namely the knowledge, skills and motiva-
tion towards quality assurance that governance and decision-making
bodies have, as well as the way an institution is internally organised.
Additional factors that promote quality and have been included in this
category include the management strategies and procedures defined for
the institution, including definition of goals, clarity, transparency and
the participation of staff in the institution’s functioning and internal
management decisions. Appreciation and recognition of academic staff
is the final issue put under governance and management; this refers to a
clear emphasis being placed on academics’ recruitment, promotion and
on the autonomy and academic freedom they hold.
The physical facilities and the resources the institution has (library,
scientific databases, technological and information resources), as well as
the internal organisation and the efficiency of the services it provides, are
also factors referred to by Portuguese academics as contributing to quality.
Lastly, academics identified a set of factors linked to the development
of a quality culture within their institution that related to both the
existence of an internal quality assurance commitment and internal
mechanisms for assuring quality (such as the use of students’ pedagogic
surveys, the promotion of a continuous reflection on teaching and
learning quality or the implementation of a quality system), and the
implications of external quality assurance systems and processes for
internal quality improvement.

Institution’s mission
A second group of factors promoting quality at the level of basic units
relates to the core aspects and activities of HEIs’ mission, namely teach-
ing and learning and research. Teaching and learning encompasses the
factors referred to by academics that address the pedagogical student–
professor relation, such as the promotion of a good relationship between
teachers and their students or teaching staff commitment to and
support for students; the resources available to teaching and learning;
the way teaching and learning are organised within the institution, with
a special emphasis on the existence of autonomy in the management
of curricular unit programmes; and an emphasis on vocational training
in study programmes, namely in terms of internships.
The promotion of research quality and dissemination at the inter-
national level, the establishment of a strong link between teaching
and research, the integration of students in research activities and the
availability of resources, in terms of time and money, for research, are
all factors considered to promote quality that relate overall to the HEI’s
research mission.

Institution’s actors
The last group of factors that promote quality has to do with the institu-
tion’s actors (especially academic staff and students) and their relation-
ship. Relevance is given to the quality of the institution’s own actors
(academics, students and non-academic staff) and their interaction,
namely in terms of good performance, professionalism, involvement
and communication among all and a willingness to rise to new chal-
lenges. The high quality of incoming students or the actual students’
skills and competences are also referred to as factors that promote
quality.
Finally, reference is made to academics’ high qualifications, skills and
competences, as well as to their individual effort to improve them and
self-motivate; to academics’ teaching performance; and to academics’
scientific and research performance, including their perception that
teaching duties should be accompanied by up-to-date, high-quality
research.
From the data collected and analysed we can conclude that quality
at the basic unit level is mainly promoted through actors when they
are performing their daily activities within academia: teaching and
learning and research. Obviously their work in promoting quality can
be enhanced if the HEI is capable of implementing an institutional
dynamic that supports actors’ efforts towards quality. But overall one
can say that for Portuguese academics quality lies at the heart of the aca-
demic endeavour. This is obviously in line with their perceptions that
the relevant purposes of quality assessment should be the improvement
of teaching and learning processes, the development of their own skills
and competences and the promotion of a better link between teaching
and research.

Concluding remarks

Recent trends in quality assurance seem to be evolving in two oppo-


site directions: on the one hand, there are an increasing number of
national systems based on accreditation and other more intrusive forms
of quality assurance (Schwarz and Westerheijden, 2004) and a move to
ranking systems (Van Vught, 2009; Kaiser and Jongbloed, 2010); and
on the other hand, there is a movement in some countries towards
quality enhancement as a way to reinstate trust in institutions. Quality
enhancement repatriates the responsibility for the quality of learning
processes to the institution, with external vigilance relying on institu-
tional audits rather than on more intrusive forms of quality assessment,
such as programme level accreditation (Higher Education Academy,
2008; Filippakou and Tapper, 2008).
In this chapter we have argued that one obstacle to an adequate
implementation of quality assurance in higher education is a lack of
academic support. This led us to explore academics' perceptions
regarding the possible goals and purposes of quality assessment. Overall we
have come to the conclusion that, at least in Portugal, academics tend
to support the different types of goals and purposes of quality assess-
ment, although preferences emerge when the idea of quality assessment
is translated into a contribution to quality improvement, namely of
HEIs and their core functions and of academics’ skills and competences.
Portuguese academics prefer a formative type of quality assessment;
that is, one that promotes self-reflection and knowledge and the
collective identification of institutional strengths and weaknesses. They
also favour external accountability and the provision of information
to students and institutional governance bodies. They also agree with
the need to make the quality level of the higher education system, its
institutions and study programmes more transparent, both to society
and HEIs. Furthermore, academics tend to consider quality assessment
to be a contributor to innovation in academic practices, teaching and
learning methodologies, and in the establishment of new ways of
linking teaching, research and management. However, they are less
supportive when it comes to the link between quality assessment results
and penalty/reward mechanisms, resource allocation or control of the
higher education network. They tend to see the control purpose of
quality assessment as endangering academic autonomy and potentially
hindering innovation.
Another conclusion of our study is that the characteristics of different
academics, such as gender, type of institution, disciplinary affiliation
and having or not having performed quality assurance activities, influence
their perceptions of quality assessment – more support seems
to come from female academics, from polytechnic institutes, from
Agricultural Sciences, Humanities and Medical and Health Sciences, and
from those who have already had some experience in quality assurance
activities.
Based on our results it seems wise to conclude that Portuguese aca-
demics show a preference for the characteristics of quality enhance-
ment. This is corroborated by other studies (Laughton, 2003).
Nevertheless, many European systems seem to be going in the opposite
direction, with an increasing number of national quality assessment
systems being based on accreditation and rankings. As pointed out
elsewhere:

since academics tend not to support quality assessment systems with


a focus on the control purpose, and because their support is essential
for their adequate implementation (Laughton, 2003; Newton, 2000),
it is paramount that governments and agencies responsible for the
design and implementation of quality assurance systems be careful
in the way they conduct their work so they get the collaboration of
academics. (Rosa et al., 2012, p. 364)

Another effort that should be made by those responsible for the defini-
tion and development of quality assurance systems (both at system and
institutional levels) lies in improving the perceptions of certain groups
of academics. According to Cardoso and colleagues (2013, p. 110), ‘This
could help preventing less supportive, or even more resistant, attitudes,
contributing to increase academics engagement with QA.’

Acknowledgements
This research was supported by a grant from FCT – Fundação para a Ciência
e Tecnologia – under the framework of the project Perceptions of Higher
Education Institutions and Academics to Assessment and Accreditation (PTDC/
ESC/68884/2006). The author would like to thank all members of the project
team for their contribution to the results presented in this chapter, namely
Alberto Amaral, Amélia Veiga, Cláudia S. Sarrico, Cristina Sousa Santos, Diana
Dias and Sónia Cardoso.

Appendix – Different academics’ characteristics: do they


contribute to different perspectives?
Tables A.1 and A.2 present the variables under analysis for which statistically
significant differences have been detected among different groups of academics
(results from t-tests and Kruskal-Wallis tests).

Table A.1 Statistically significant differences identified between different groups of respondents regarding higher education quality assessment goals

Goals Gender Type of institution Scientific area

The continuous development


of higher education quality
Accountability towards society
of higher education quality
The development of   
innovations generated within
higher education institutions
The higher education system’s  
adequacy to European rules

 – statistically significant for a 0.05 significant level;  – statistically significant for a 0.01
significant level).

Table A.2 Statistically significant differences identified between different groups of respondents regarding higher education quality assessment purposes

Purposes    Gender    Type of institution    Scientific area    Experience in QA activities

Allow for the closure of  


study programmes that have
no quality, based on their
non-accreditation
Favour the development of 
academics’ individual skills
Allow the government to 
allocate resources to institutions
based on assessment results
Facilitate the adoption of new  
methodologies for teaching and
learning
Provide the State with  
instruments for controlling the
higher education network
Promote the existence of 
academics’ performance control
mechanisms
Promote the improvement of 
student support systems
Allow institutions’ management 
and governance bodies to
promote policies for the
development of new teaching
and learning practices
Allow institutions’ management  
and governance bodies to
promote continuous quality
improvement processes for
teaching/learning
Increase academics’ involvement  
in teaching and learning issues
Have effects on the criteria 
defined by HEI for non-
academic staff recruitment and
promotion
Allow governance bodies to  
have information on HEI quality
so they can take decisions

Promote cooperation between  


academic and non-academic
staff
Reward academics’ innovative   
practices
Encourage institutions to be 
concerned with their reputation
or social image
Improve the links between  
teaching and research
Increase students’ involvement   
in teaching and learning issues
Provide information about the 
institution to an external entity,
for accreditation purposes
Contribute to the collective  
and shared identification of
an institution’s strengths and
weaknesses
Contribute to the definition of   
new routines and procedures
Promote the creation of quality  
assurance internal systems
Contribute to convergence   
among teaching, research
and management processes´
practices
Allow governance bodies to  
allocate resources based on
quality assessment results

 – statistically significant for a 0.05 significant level;  – statistically significant for a 0.01
significant level).

Notes
1. This chapter is based on work conducted under the research project Perceptions
of Higher Education Institutions and Academics on Assessment and Accreditation
(PTDC/ESC/68884/2006). Part of the material used was already published in
Cardoso, Rosa and Santos (2013) and Rosa, Sarrico and Amaral (2012).
2. Although this chapter's empirical section addresses Portuguese academ-
ics’ perceptions of quality assessment, we often resort to studies on quality
assurance both to theoretically and empirically support our analysis. Quality
assurance frequently encapsulates quality assessment as one of its phases, and
thus studies on quality assurance are relevant to our analysis.

References
Amaral, A., Rosa, M.J. and Tavares, D.A. (2009) ‘Supra-national Accreditation,
Trust and Institutional Autonomy: Contrasting Developments of Accreditation
in Europe and the United States’, Higher Education Management and Policy,
21(3), 23–40.
Amaral, A. and Rosa, M.J. (2011) ‘Trans-national Accountability Initiatives: The
Case of the EUA Audits’, in B. Stensaker and L. Harvey (eds) Accountability
in Higher Education: Global Perspectives on Trust and Power (United Kingdom:
Routledge), pp. 203–20.
Amaral, A., Rosa, M.J. and Fonseca, M. (2013) ‘The Portuguese Case: Can
Institutions Move to Quality Enhancement?’, in R. Land and G. Gordon (eds)
Enhancing Quality in Higher Education: International Perspectives (London and
New York: Routledge), pp. 141–52.
Becher, T. and Trowler, P. (2001) Academic Tribes and Territories: Intellectual Enquiry
and the Cultures of Disciplines (Buckingham: Society for Research into Higher
Education and Open University Press).
Bell, E. and Taylor, S. (2005) ‘Joining the Club: The Ideology of Quality and
Business School Badging’, Studies in Higher Education, 30(3), 239–55.
Cardoso, S., Rosa, M.J. and Santos, C.S. (2013) ‘Different Academics’ Characteristics,
Different Perceptions on Quality Assessment?’, Quality Assurance in Education,
21(1), 96–117.
Cartwright, M. (2007) ‘The Rhetoric and Reality of “Quality” in Higher
Education – an Investigation into Staff Perceptions of Quality in Post-1992
Universities’, Quality Assurance in Education, 15(3), 287–301.
Clark, B. (1983) The Higher Education System: Academic Organisation in Cross-
national Perspective (Berkeley, CA: University of California Press).
Dill, D., Teixeira, P., Jongbloed, B. and Amaral, A. (2004) ‘Conclusion’, in
P. Teixeira, B. Jongbloed, D. Dill and A. Amaral (eds) Markets in Higher Education:
Rhetoric or Reality? (Dordrecht: Kluwer Academic Publishers), pp. 327–52.
Filippakou, O. and Tapper, T. (2008) ‘Quality Assurance and Quality Enhancement
in Higher Education: Contested Territories?’, Higher Education Quarterly, 62(1–2),
84–100.
GPEARI/MCTES (2010) Inquérito ao Registo Biográfico de Docentes do Ensino Superior
(REBIDES), www.pordata.pt (accessed 4 January 2011).
Halsey, A.H. (1992) Decline of Donnish Dominion: The British Academic Professions
in the Twentieth Century (Oxford: Clarendon Press).
Harvey, L. (2006) ‘Impact of Quality Assurance: Overview of a Discussion
between Representatives of External Quality Assurance Agencies’, Quality in
Higher Education, 12(3), 287–90.
Harvey, L. (2009) A Critical Analysis of Quality Culture, http://www.inqaahe.org/
admin/files/assets/subsites/1/documenten/1241773373_16-harvey-a-critical-
analysis-of-qualityculture.pdf (accessed September 2010).
Higher Education Academy (2008) Quality Enhancement and Assurance:
A Changing Picture? (York: Higher Education Academy).
Huusko, M. and Ursin, J. (2010) ‘Why (Not) Assess? Views from the Academic
Departments of Finnish Universities’, Assessment and Evaluation in Higher
Education, 35(7), 859–69.
Kaiser, F. and Jongbloed, B. (2010) ‘New transparency instruments for European
higher education: The U-Map and the U-Multirank projects’, paper presented
to the 2010 ENID Conference, 8–11 September 2010.
Kleijnen, J., Dolmans, D., Willems, J. and van Hout, H. (2011) ‘Does Internal
Quality Management Contribute to More Control or to Improvement of
Higher Education? A Survey on Faculty’s Perceptions’, Quality Assurance in
Education, 19(2), 141–55.
Langfeldt, L., Stensaker, B., Harvey, L., Huisman, J. and Westerheijden,
D. (2010) ‘The Role of Peer Review in Norwegian Quality Assurance: Potential
Consequences for Excellence and Diversity’, Higher Education, 59(4), 391–405.
Laughton, D. (2003) ‘Why Was the QAA Approach to Teaching Quality
Assessment Rejected by Academics in UK HE?’, Assessment and Evaluation in
Higher Education, 28(3), 309–21.
Lomas, L. (2007) ‘Zen, Motorcycle Maintenance and Quality in Higher Education’,
Quality Assurance in Education, 15(4), 402–12.
Luke, C. (1997) ‘Quality Assurance and Women in Higher Education’, Higher
Education, 33(4), 433–51.
Morley, L. (2005) ‘Opportunity or Exploitation? Women and Quality Assurance
in Higher Education’, Gender and Education, 17(4), 411–29.
Nasser, F. and Fresko, B. (2002) 'Faculty Views of Student Evaluation of College
Teaching', Assessment and Evaluation in Higher Education, 27(2), 188–98.
Newton, J. (2000) ‘Feeding the Beast or Improving Quality? Academics’
Perceptions of Quality Assurance and Quality Monitoring’, Quality in Higher
Education, 6(2), 153–62.
Newton, J. (2002) ‘Views from Below: Academics Coping with Quality’, Quality
in Higher Education, 8(1), 39–61.
Newton, J. (2010) ‘A Tale of Two “Qualitys”: Reflections on the Quality
Revolution in Higher Education’, Quality in Higher Education, 16(1), 51–53.
Osborne, D. and Gaebler, T. (2002) Re-inventing Government: How the Entrepreneurial
Spirit Is Transforming the Government (Reading, MA: Addison-Wesley).
Papadimitriou, A., Ursin, J., Westerheijden, D. and Välimaa, J. (2008) ‘Views on
Excellence in Finnish and Greek Universities’, paper presented at the 23rd
CHER Conference, Pavia, 11–13 September.
Rosa, M.J. and Amaral, A. (2012) ‘Is There a Bridge Between Quality and Quality
Assurance?’, in B. Stensaker, J. Välimaa and C.S. Sarrico (eds) Managing Reform
in Universities: The Dynamics of Culture, Identity and Organisational Change
(Basingstoke: Palgrave), pp. 114–34.
Rosa, M.J. and Sarrico, C.S. (2012) ‘Quality, Evaluation and Accreditation: From
Steering, Through Compliance, on to Enhancement and Innovation?’, in
G. Neave and A. Amaral (eds) Higher Education in Portugal 1974–2009: A Nation,
a Generation (Dordrecht: Springer), pp. 249–64.
Rosa, M.J., Sarrico, C.S. and Amaral, A. (2012) ‘Academics’ Perceptions on the
Purposes of Quality Assessment’, Quality in Higher Education, 18(3), 349–66.
Rosa, M.J., Tavares, D. and Amaral, A. (2006) ‘Institutional Consequences of
Quality Assessment’, Quality in Higher Education, 12(2), 145–59.
Schwarz, S. and Westerheijden, D. (eds) (2004) Accreditation and Evaluation in the
European Higher Education Area (Dordrecht: Kluwer Academic Press).
Smith, R.L. (2000) ‘When Competition Is Not Enough: Consumer Protection’,
Australian Economic Papers, 39(4), 408–25.
Stensaker, B. (2008) ‘Outcomes of Quality Assurance: A Discussion of Knowledge,
Methodology and Validity’, Quality in Higher Education, 4(1), 3–13.
Stensaker, B., Langfeldt, L., Harvey, L., Huisman, J. and Westerheijden, D. (2011)
‘An In-Depth Study on the Impact of External Quality Assurance’, Assessment
and Evaluation in Higher Education, 36(4), 465–78.
Trow, M. (1996) ‘Trust, Markets and Accountability in Higher Education:
A Comparative Perspective’, Higher Education Policy, 9 (4), 309–24.
Van Vught, F. (2009) Mapping the Higher Education Landscape (Dordrecht:
Springer).
Veiga, A., Rosa, M.J., Tavares, D. and Amaral, A. (2013) ‘Why Is It Difficult to
Grasp the Impacts of the Portuguese Quality Assurance System?’, European
Journal of Education, 48(3), 454–70.
Watty, K. (2006) ‘Want to Know About Quality in Higher Education? Ask an
academic’, Quality in Higher Education, 12(3), 291–301.
Westerheijden, D., Hulpiau, V. and Waeytens, K. (2007) ‘From Design and
Implementation to Impact of Quality Assurance: An Overview of Some Studies
into What Impacts Improvement’, Tertiary Education and Management, 13(4),
295–312.
13
Students' Views on the Recent Developments in Quality Assurance of Higher Education in Europe
Liliya Ivanova

Student representation in Europe

The European Students’ Union (ESU) is the umbrella organisation of
47 national unions of students (NUS) from 39 European countries.
In this capacity, ESU promotes and represents the educational, social,
economic and cultural interests of more than 11 million students to all
key European decision-making bodies.
As an organisation, ESU was founded in 1982 by seven NUS and at
that point was called WESIB, the West European Student Information
Bureau. The political changes in Eastern Europe at the end of the 1980s
also affected WESIB, as it opened itself up to NUS from the former
east. Thus in February 1990, the WESIB became the European Student
Information Bureau (ESIB). As the European Communities started to
gain more influence in higher education in Europe, and certainly with
the start of the Bologna Process, the organisation changed from being
just an information-sharing body into a political organisation that
represents the views and interests of students. In May
2007 it was decided that ESIB needed to change its name as the acronym
no longer represented the work of the organisation, and its name was
changed to the European Students' Union (ESU).
The Bologna Process proclaimed that students are competent and
constructive partners in creating the higher education experience.
Student participation in quality assurance (QA) has been one of the
key issues in the Bologna Process, and is perhaps one of the more suc-
cessful stories of student participation in general, although there is still
significant room for improvement. ESU has been actively involved in
advocating student participation in QA processes and has also provided
expertise, including work in internal and external evaluations, accreditation councils
and quality assurance agencies.
When it comes to ESU’s involvement in QA, several important
initiatives should be mentioned in particular. As a member of the
E4 group ESU was involved in the design and promotion of the
European Standards and Guidelines for Quality Assurance (ESG) and
remains involved in their ongoing revision. Another important contri-
bution of ESU is in the revision of UNESCO/OECD Guidelines for Quality
Provision in Cross-border Higher Education (2005). ESU also partners
with the European University Association (EUA) and the European
Association for Quality Assurance in Higher Education (ENQA) in nomi-
nating students for EUA’s Institutional Evaluation Programme (IEP)
reviews and ENQA’s reviews. Over the years ESU has also developed
good cooperation with various national agencies and has nominated
students as review panel members for the institutional evaluations
organised by these agencies. In order to raise the capacity and hone
the skills of those students participating in various evaluations on both
national and international levels, ESU has conducted regular training,
through which ESU’s QA pool of students was created and developed.

Quality assurance as an action line in the Bologna Process and ESU's involvement in QA

An important step towards achieving common trust among governments,
higher education institutions and students, in the context of a
shared understanding of what European quality is, was the creation of
the Standards and Guidelines for Quality Assurance in the European
Higher Education Area (ENQA, 2005), known as the ESG, and the creation
of the European Quality Assurance Register for Higher Education
(EQAR).
Since 2003, ESU has prepared specific progress reports on the Bologna
Process from the perspective of students called Bologna with Student Eyes
(ESU, 2005, 2007, 2009, 2012), in which, among other topics, there
is an analysis of the effects of the developments in QA as an action
line. Looking into the genesis of the processes of student involvement
outlined in the last few Bologna with Student Eyes publications it can be
stated that there has been a considerable change with regard to the stu-
dent participation in QA on the European level following the adoption
of the ESG and the setting up of the EQAR. However, results from those
editions published in 2005, 2007, 2009 and 2012 show that there is still a
long way to go, both at national and institutional levels. These reports
and other publications also show that even if student involvement is
in place formally, this does not necessarily mean that it is active and
genuine (Galán Palomares, 2012).
The main conclusion from Bologna with Student Eyes 2005 is that in
internal QA, students were generally asked about their opinions, but
the thoroughness of student involvement was very different and often
did not lead to concrete changes (ESU, 2005). The NUS reported that in
only a few countries were the students involved in QA at all levels, and
these were mostly Nordic countries. They claimed that there should be
more of a focus on setting up QA systems at all levels, using transparent
procedures, publishing results, allocating more resources to external
reviews and providing more public justification in QA to build trust.
In 2007, the NUS again reported that students were still not fully
involved in QA at all levels or in all processes, and that there was a big
gap to fill. Internal QA mechanisms were reported to have not been
set up in all institutions. The responsibility for this was usually left to
the institutions. After the Bergen Communiqué was adopted, the NUS
stated that the ESG had been an important step, but they also stated
their concern that the programme-level external QA might be replaced
by institutional level external QA: ‘[National students’ unions] are pri-
marily concerned that from the students’ point of view, the quality of
single study programmes is much more crucial than the quality of the
institution as a whole’ (ESU, 2007, p. 16). In this regard, the setting up of
EQAR in March 2008 was welcomed by European students as a positive
step towards raising trust in QA processes and increasing transparency.
The main conclusion of Bologna with Student Eyes 2009 (ESU, 2009) was
that there was an obvious correlation between proper implementation
of the ESG and the increasing level of student participation in QA.
The national students’ unions reported that they had a good level of
awareness of the ESG. Some NUS reported having the full support of
the national and institutional authorities for implementing the ESG in
terms of broadening student participation in QA, while others claimed
to have no such support due to weaknesses in the ESG. According to
those unions, there was a need to apply the ESG at the national level;
alternatively it would be necessary to formulate national standards and
guidelines, compatible with the ESG. Twenty-five per cent of respon-
dents to Bologna with Student Eyes 2012 stated that national authorities
were still not applying the ESG; 47 per cent of respondents said that
higher education institutions were not applying the ESG (ESU, 2012).
The official Bologna stocktaking reports confirmed the student
perspective explained above. The 2005 report showed that almost all
countries had a QA system in place, or were in the process of setting
one up. However, student participation was the element most often
missing from the recommendations for a QA system stated in the Berlin
Communiqué.
In 2007, the stocktaking report indicated that almost all countries
had a QA system in place, which was in accordance with the Berlin
Communiqué’s objectives for higher education in most countries. In
many countries some progress had been made regarding student par-
ticipation. The level of international participation and cooperation in
QA had also improved.
The 2009 stocktaking report showed that all countries had introduced
QA agencies, but many had failed to set a date for their assessment
(Rauhvargers et al., 2009). The same report revealed that there was still
a need for students to be more involved in QA, not just as observers in
reviews, but also in institutional self-evaluations and in follow-up and
implementation procedures, so closing the QA loop.
When compared to the data from 2009, an overview of student
participation in QA in 2012 shows some improvement. A high number
of national students’ unions consider that in their countries students
are equal partners or that the level of student participation in QA is high
enough. However, some students’ unions still state that students do not
participate or that their participation is very limited.
The level of student involvement is highest at the external evaluation
level. According to the unions’ perception, in almost half of the
countries students are highly involved in most higher education
institutions, although in not all of them are students considered as

No independent QA body/agency No participation


Very little participation Some participation, but far from being enough
Participation high, but still lacking in some places Equal partners
No answer

Internal QA 0% 8% 14% 19% 22% 24% 14%

External evaluations 11% 5% 3% 16% 14% 38% 14%

Accreditation/Audit procedures 0% 22% 5% 5% 11% 32% 24%

Agency governance 11% 3% 11% 16% 16% 24% 19%

0% 10% 20% 30% 40% 50% 60% 70% 80% 90% 100%

Figure 13.1 Overview of student participation in quality assurance processes


(ESU, 2012)

equal partners (ESU, 2012; see Figure 13.1). Comparing the data from
2009 and 2012 shows that the number of countries with no student
participation or with very little participation has decreased among
the reported countries; on the other hand the number of countries
where students are equal partners in quality assurance has increased
significantly (ESU, 2007, 2009).
More than seven years have passed since ministers adopted the ESG
in Bergen (Bergen Communiqué, 2005). Bologna with Student Eyes 2012
testifies further to the improvement of the level of awareness about
the ESG among NUS. The proportion of respondents who said they knew
the ESG in detail is up from 33 per cent in 2007 to 63 per cent in 2009
and 77 per cent in 2012 (ESU, 2007, 2009). All of the unions declare
that they support the ESG in general; nevertheless the ESU MAP-ESG
consultation report (ESU, 2012b) concluded that several improvements
must take place, including in the phrasing and meaning of the QA terminology
used in the ESG, which leaves room for ambiguous interpretation
(ESU, 2012b).

Transparency tools and information provision

One important aspect with regard to quality assurance is its function
as a transparency tool for making certain aspects of quality assurance
visible and understandable. As higher education institutions gained
more and more autonomy, the concept of 'value for (public) money'
entered the picture; as a result governments and taxpayers started
to demand more transparency and accountability from HEIs, as well as
a quality education.
Transparency has been a goal of the Bologna Process since the very
beginning, with an emphasis on comparability and compatibility
of systems and degrees (Bologna Declaration, 1999; ESU, 2010). The
transparency tools that officially entered Bologna Process documents in
Leuven were the result of a longer movement towards the establishment
of this action line. Since Leuven (Leuven Communiqué, 2009) and the
proliferation of the transparency tools, discussion about those tools has
broadened to include the impact of global and national rankings and
new classification tools. Thus multiple understandings of transparency
tools have emerged (ESU, 2010), and these developments are in part
more relevant to the Bologna Process than before.
Both the modernisation agenda and the communication of the
European Commission on the modernisation of universities and
higher education (European Commission, 2006) pushed for financial
support for performance-based indicator projects. These aimed to allow
comparison of higher education institutions and programmes. Examples
include the U-Map and U-Multirank classification and mapping of
the Center for Higher Education Policy Studies (CHEPS), OECD’s
Assessment of Higher Education Learning Outcomes (AHELO) project
and the allocation of European funds for the assessment of university-
based research.
Looking at the outcomes of the last Bologna with Student Eyes survey
(ESU, 2012a; see Figure 13.2) it is clear that NUS are positive about
the building of tools for information provision on the national level.
National unions have a positive view towards tools that are perhaps
more descriptive in nature and that do not seek to create hierarchies.
Most notable examples are databases that show or list courses, qua-
lity assurance reports and outcomes and accredited institutions or

NUS being strongly against NUS being quite concerned


NUS generally supports with some concerns NUS strongly supports

Databases about accredited programmes and institutions 1 8 23

Databases about quality assurance outcomes 2 6 24

Databases about courses and programmes 1 8 22

Databases and webpages about funding


2 4 25
options for students

Europewide database on quality assurance outcomes 5 12 14

Europewide database on courses and programmes 3 11 14

Europewide databases about accredited


6 11 12
programmes and institutions
Assessing higher education learning
2 3 12 10
outcomes (AHELO)project

National classifications of HEI or faculty or programme 5 6 7 14

Europe-wide admissions system 10 12 6

U-Map classification 4 8 10 7

U-Multirank ranking 4 11 12 3

National rankings and league tables 11 9 4 8

Other europewide league tables and rankings 10 10 4 7

0 5 10 15 20 25 30 35

Figure 13.2 Support of the national students’ unions for national and European
transparency tools (Bologna with Student Eyes, 2012)
programmes, and that provide student-funding information. The NUS
show a certain scepticism towards European tools in general, however.
The exact reason for this could be that these tools are more difficult to
build due to differences between national systems (ESU, 2012a).
There are controversial issues among unions regarding European or
national tools, especially rankings and classification tools. The detri-
mental effects of some of these tools have been explored in many
reports and such arguments are often used to explain the negative view
of rankings. On the other hand, when it comes to comparability on an
international scale, some unions are generally positive towards AHELO’s
idea of assessing learning outcomes, although the project was at first
criticised for its methodology and its aspirations to rank institutions
and/or countries (similar to PISA). Furthermore, although unions rather
strongly support European-wide databases on courses, the idea of a
common admissions portal, such as was put forward in the discussions
about the EHEA mobility strategy, is not considered convincing by
half of the unions. A few have pointed to the incompatibility of systems
as much as anything else as a reason for this (ESU, 2012a).

ESU’s ‘QUEST for quality for students’ project as a tool
for defining students’ views on quality and creating
student-centred information provision models

Despite many efforts and the recognition of the role students have in
QA in Europe, in practice students are not often asked to present their
views on higher education quality. Sometimes this is because students
are not able to articulate their views due to a lack of basic information
about their study programme, expected learning outcomes or existing
QA mechanisms, let alone about future QA developments. That is why
ESU has launched its QUEST project, which through research activi-
ties aims to identify students’ views on quality. The project focuses on
exploring the essential concern of students in Europe about the qua-
lity of education and will provide information and means for students
themselves to influence quality enhancement and assurance.
Using a survey, the project looks at what students perceive as important
aspects of quality and what they see as effective ways of bringing this
information to them. The survey was preceded by desk research and
national site visits to different countries to look for good and interesting
practice examples of students at the centre of quality enhancement
and assurance. The project seeks to identify what information students
consider it important to receive from HEIs. QUEST will therefore help
to create student-centred models of meaningful information provision
according to those identified needs. Furthermore, it aims to compare
and examine how the findings are complemented by students’ views
of already existing and implicit transparency tools like QA reports or
the European Standards and Guidelines in Quality Assurance (ESG) as
well as various information databases. Another goal of the project is to
identify what the strengths and weaknesses of these tools are, and how
QA mechanisms should be developed so that the quality as defined
through students’ eyes could be reached and assured. The final stage of
the research will define the student quality concept that would provide
a framework for student-centred quality enhancement, assurance and
information provision models.
QUEST is an ongoing project and its research data will soon be available.

Conclusions

Quality assurance mechanisms have been in constant development in
recent years. The involvement of students has improved at all levels
since 2005, but there is still room for improvement. Degrees of involve-
ment at the institutional level, the external evaluation level and the
agency governance level remain considerably different.
While new approaches in QA have begun to be used, there are no
automatic guarantees or proof that such mechanisms will lead directly
to a higher quality of higher education. Therefore, the principles of fit-
ness for purpose and genuine involvement of all stakeholders should
be applied and developed within the common framework of the ESG.
According to NUS the classifications of higher education institutions
and the linking of quality assurance outcomes directly to funding deci-
sions may become a double-edged sword due to how resources will be
reallocated (ESU, 2012). Although these reforms aim to strengthen the
different institutions, their full effects are not yet completely known. It is
for this reason that some critics point out that instead of increasing the
efficiency of higher education funding, these reforms may jeopardise
the quality of some institutions more than others.

References
Bergen Communiqué (2005) The European Higher Education Area: Achieving the
Goals. http://www.ond.vlaanderen.be/hogeronderwijs/bologna/documents/
MDC/050520_Bergen_Communique1.pdf (accessed 22 September 2013).
Bologna Declaration (1999) http://ec.europa.eu/education/policies/educ/
bologna/bologna.pdf (accessed 22 September 2013).
ESU (2005) Bologna with Student Eyes 2005.
ESU (2007) Bologna with Student Eyes 2007.
ESU (2009) Bologna with Student Eyes 2009.
ESU (2010) Bologna at the Finish Line.
ESU (2012a) Bologna with Student Eyes 2012.
ESU (2012b) ESU Consultation Report of the MAP-ESG Project.
European Commission (2009) Report on Progress in Quality Assurance in Higher
Education.
Galan Palomares, F.M. (2012) ‘Consequences of student participation in Quality
Assurance: Why should there be students involved?’, in A. Curaj (ed.),
European Higher Education at the Crossroads: Between the Bologna Process and the
National Reforms. Part 1 (Dordrecht: Springer), pp. 361–73.
Leuven Communiqué (2009) The Bologna Process 2020: The European
Higher Education Area in the New Decade. http://www.ond.vlaanderen.be/
hogeronderwijs/bologna/conference/documents/Leuven_Louvain-la-Neuve_
Communiqué_April_2009.pdf (accessed 22 September 2013).
Rauhvargers, A., Deane, C. and Pauwels, W. (2009) Bologna Process Stocktaking
Report 2009. http://www.ehea.info/Uploads/Documents/Stocktaking_report_
2009_FINAL.pdf (accessed 22 September 2013).
14
Recent Trends in Quality
Assurance? Observations from the
Agencies’ Perspective
Achim Hopbach

Introduction

Quality assurance in European higher education has reached the top of
the political and the higher education agendas. However, it is just one
item on a long list of reform agendas in higher education, such as fund-
ing and management of higher education institutions, and not least the
reforms in teaching and learning within the framework of, and initiated by, the
Bologna Process. Without a doubt, over the last 20 years the emergence
of national quality assurance regimes has been a common feature of
European higher education, a development that even gained momen-
tum when quality assurance turned out to be one pillar of setting up a
European Higher Education Area (EHEA) through the Bologna Process.
The report on the evaluation of the Bologna Process, submitted to
ministers when they officially launched the EHEA in 2010, highlighted
that in almost all countries quality assurance systems and procedures
have been implemented in accordance with the Standards and Guidelines
for Quality Assurance in the European Higher Education Area (ESG) adopted
in 2005. It was pointed out that this should be seen as a major success
for the Bologna Process (Westerheijden et al., 2010, p. 30). The relevance
and the consequences of this development shouldn’t be underestimated.
EUA’s 2010 ‘Trends Report’ revealed that quality assurance is considered
the most important recent policy change (Sursock and Smidt, 2010).
Taking into account the substantial changes in many countries regard-
ing legal frameworks, governance and funding of higher education,
this outcome provides evidence of the major role quality assurance plays
within the current reform process in European higher education.
This chapter will discuss whether in recent years developments
in the field of quality assurance have followed a certain trend. The
focus is at the procedural level, which means asking whether trends
in certain approaches to quality assurance can be observed. In addi-
tion, political discussions about quality assurance will be taken into
account. The following deliberations are based on the agencies’
perspective and are thus focused on external quality assurance and
the role of quality assurance agencies within national higher educa-
tion systems. The chapter tries to shed light on developments of the
past three to four years and focuses on developments in the EHEA
because the Bologna Process has been the most important driver of
developments in the field of quality assurance, and has turned out
to be the most important framework for policy-making in European
higher education.
In general, it is not easy to apply the concept of ‘trends’ to quality
assurance in the European Higher Education Area. This is for two rea-
sons in particular.
Firstly, quality assurance is still a rather young phenomenon in many
countries of the EHEA. It was only the adoption of the ESG in 2005
that gave momentum to the implementation of quality assurance on
a larger scale in most European countries. Although quality assurance
is not an invention of the Bologna Process, its history is relatively
short. The European Pilot Projects of the mid-90s can be considered
the starting point of the development of external quality assurance
systems in European higher education (Bernhard, 2012, p. 40). Hence
the task of observing or even analysing trends in such a young and,
in many ways, still developing field is difficult because no ‘longue
durée’ approach can be applied. Secondly, one particular characteris-
tic of the European Higher Education System is its diversity in many
respects, not only with regard to higher education institutions and
their programmes. Despite the successful implementation of Bologna
tools such as the Three-Cycle-System, The Qualifications Framework
of the European Higher Education Area (QF-EHEA), The European
Credit and Transfer System (ECTS) and ESG, this diversity also applies
to many structures and processes. One has to take into account that
the decisions made at the ministerial meetings are not legally binding
and need to be implemented in different political and legal frameworks
at the national level, and in systems with different cultural traditions.
Looking for trends normally means trying to identify similarities at a
high level; in the case of the EHEA it is important to take into account
that the respective developments take place in diverse systems and
consequently might not look the same despite being very similar in
principle.
Hence, due to the short history of quality assurance, the deliberations
and findings of this chapter are based on participatory observations
rather than on empirical data.

Convergence through implementation of the ESG as an underlying trend

One development should be considered less a trend and more a stable
development. That is the implementation of the Standards and
Guidelines for Quality Assurance in the European Higher Education Area
(ESG). The adoption of the ESG by the ministers responsible for higher
education in 2005 and its subsequent implementation at the institu-
tional level as well as in agencies and at the national level after 2005 is
without doubt a major milestone in the development of quality assur-
ance in the EHEA.
The ESG were drafted by ENQA together with the other so-called ‘E4’
partners, EUA, EURASHE and ESU, and are ‘designed to
be applicable to all higher education institutions and quality assurance
agencies in Europe, irrespective of their structure, function and size, and
the national system in which they are located’ (ENQA, 2009a, p. 12) and
can, hence, be considered as the main reference point for the design of
quality assurance in the emerging EHEA, be it internal or external quality
assurance. Taking into account the diversity of the now 47 Bologna signa-
tory countries in terms of political, legal and cultural backgrounds, and
also the specific nature of the Bologna Process and the European Higher
Education Area working with the open method of coordination which is
based on self-binding (but not legally binding) political declarations, this
claim should not be underestimated. The ESG refer to this diversity and
state that a single monolithic approach to quality, standards and quality
assurance in higher education is therefore inappropriate.
However, through the widespread implementation of the ESG, quality
assurance converged to a certain extent by applying the main features
of the European approach to quality assurance. These are:

• HEIs bear the main responsibility for quality;
• The four-stage model applies: internal evaluation, external evalua-
tion by peers, publication of reports, follow-up procedure;
• External quality assurance procedures should take into account the
effectiveness of the internal quality assurance processes; and
• Stakeholder – particularly student – involvement is critical in all
phases, and also in the development of quality assurance processes.
Although today these principles seem almost natural, it is noteworthy
that, for example, stakeholder involvement – and in particular student
involvement – in all steps of quality assurance procedures and at all
levels of the system was not natural to all countries. From an agency
perspective, the independence of agencies was also not self-explanatory,
and remains an issue in many countries.
In addition to convergence in structures and procedures, the imple-
mentation of the ESG led to convergence at a more fundamental level.
The fact that the ESG were developed by all relevant stakeholders not
only makes actors in the field of quality assurance share the same
values and principles, but also gives evidence of the fact that the stake-
holder model of quality (Harvey, 2006), which recognises the diverse
approaches of relevant stakeholders to the nature of quality in higher
education, had become dominant.
The last three to four years might be considered the last years of
implementing quality assurance in accordance with the ESG. During
these years, one trend emerged which will certainly be sustainable, and
that is the growing relevance of the learning outcomes approach in
external quality assurance procedures. It is not surprising that the
ESG don’t refer to the QF-EHEA and the concept of learning outcomes
and student-centred learning. The ESG and the QF-EHEA were deve-
loped in parallel and the wider application of the learning outcomes
approach started only after the adoption of the ESG. Although the ways
in which quality assurance agencies take into account the spreading
learning outcomes approach in their procedures vary, most of them
have one thing in common: no significant methodological change was
introduced to the existing procedures. Instead, additional standards
or accreditation criteria were added, such as definition of appropriate
intended learning outcomes, application of appropriate teaching and
assessment methods, and so on. Common to these standards and criteria
is that the subject matter of the quality assurance processes, that is,
the programme or the institution, remained the same even though the
students moved more into focus.
However, the implementation of the ESG means a substantial change
for many agencies, not only in methodological terms. The ESG also
gained major relevance with regard to the position of agencies at the
national and European levels. As early as 2004 ENQA stipulated in
its statutes that compliance with ESG would be the core criterion for
full membership, and agencies would have to give evidence about the
appropriate application of the ESG in an external review. Consequently,
in 2006 ENQA started evaluating agencies against the ESG. Since 2009,
compliance with the ESG has also been a precondition for being listed
in the European Quality Assurance Register for Higher Education. For
many agencies this meant not only that they ‘had to take their own
medicine’, but also that they found themselves in a core position as
drivers to implement the ESG at the institutional level. By applying
ESG part II the agencies implicitly requested the higher education
institutions to set up internal quality assurance procedures or modify
existing procedures in accordance with ESG part I. Having quality assu-
rance agencies as drivers of implementing structural and procedural
reforms which the European representation of the HEIs had drafted and
adopted themselves is obviously not a healthy approach, and agencies
were not happy about being given this role. Another external driver
for the implementation of the ESG was the stocktaking exercise of the
Bologna Follow-up Group in preparation for the ministerial conferences
in Leuven/Louvain-la-Neuve 2009, and Bucharest 2012, when the state
of development in quality assurance, mainly regarding the ESG, was
on the list of items for which countries aimed at ‘green traffic lights’
(Rauhvargers et al., 2009; Education, Audiovisual and Culture Executive
Agency, 2012).
In conclusion, the underlying trend in quality assurance in the EHEA
in recent years was the implementation of the ESG at the institutional
and agency levels. This means applying the stakeholder approach to
quality of higher education and implementing shared principles such
as primary responsibility of HEI for quality assurance; independence
of quality assurance agencies; stakeholder, and in particular, student
involvement; and orientation towards the enhancement of quality
assurance.
Identifying the application of the ESG as an underlying trend does
not automatically mean that trends for applying certain approaches to
quality assurance, such as accreditation or audit, can also be observed
at the programme or institutional level. The ESG don’t provide political
decision makers at the national level with a blueprint for designing
quality assurance systems or procedures. On the contrary: the authors
of the ESG deliberately refused to define procedures although they were
asked to do so by the ministers. The restriction to general principles
without prescriptive procedural rules for the design of the procedures
in detail gave leeway to national authorities to design their respective
quality assurance regimes, and several ENQA surveys about the develop-
ments in external quality assurance in the last ten years clearly show
that governments make use of this leeway. All the well-known proce-
dures, such as programme and institutional accreditation, programme
and institutional evaluation, and quality audits, have been developed
since the 1990s and none of them have vanished, although evaluation
and accreditation at the programme level are by far the most wide-
spread. Interestingly, almost all agencies apply more than just one
approach to quality assurance on a regular basis (Costes et al., 2008,
p. 24). If one also takes into account that at the time of the ENQA
survey in 2008 three out of four agencies were about to change their
approach, it becomes clear that a situation of variety and dynamism
can be observed rather than a particular trend in one direction (Costes
et al., 2008, p. 27).

A trend towards audits?

Regarding possible trends at the procedural level, one topic will be
discussed in more detail because it both plays a major role in political
discussions about quality assurance and seems to be built into the logic
of the ESG. This is the trend towards quality audits. This trend towards
of the ESG. This is the trend towards quality audits. This trend towards
quality audits, be it apparent or real, is often referred to as the move from
a control-oriented quality assurance approach, which would normally
be conducted at programme level and often by way of accreditation,
towards a rather development-oriented audit approach at the institutional
level, which does not focus on single programmes or on standards as
such but on the institutional capacity to assure and develop high qua-
lity. Ideally, this development might look as follows: higher education
institutions are faced with quality assurance as a new task, not necessa-
rily because significant quality problems in their educational provision
were detected or claimed to exist, but often rather because of growing
demands in the field of accountability and in particular the request to
give evidence about the quality of their teaching provision. Hence proce-
dures such as programme accreditation or similar are introduced in order
to ‘assure’ or ‘guarantee’ certain standards. Examples of this can be
found in the UK, Germany, the Netherlands and many
countries in Eastern and Southeastern Europe. After a couple of years
of experience and, consequently, growing professionalism in quality
assurance, higher education institutions claim to have effective internal
quality assurance systems in place. As a consequence a ‘heavy’ external
quality assurance approach is no longer necessary and thus the focus can
be more on the effectiveness of the internal mechanisms. Another reason
for requesting such a move is, of course, the complaint that external
quality assurance approaches such as programme accreditation are too
burdensome (Hopbach, 2012, p. 281).
Even the ESG seem to give hints that such a development would be
almost natural. Its second part begins with the statement that ‘external
quality assurance procedures should take into account the effectiveness
of the internal quality assurance processes’. The guideline attached to
this standard points to the expected benefit for the higher education
institutions from such an approach:

The standards for internal quality assurance contained in Part I provide
a valuable basis for the external quality assessment process. It is
important that the institutions’ own internal policies and proce-
dures are carefully evaluated in the course of external procedures, to
determine the extent to which the standards are being met. If higher
education institutions are to be able to demonstrate the effectiveness
of their own internal quality assurance processes, and if those processes
properly assure quality and standards, then external processes might
be less intensive than otherwise. (ENQA, 2009a, p. 20)

This refers to the ‘unwritten deal’ (Kristensen, 2010, p. 154) between
external and internal quality assurance which indicates that the level
of responsiveness of higher education institutions towards stakeholders’
demands for accountability is inversely proportional to the level of
external scrutiny (Loukkola, 2012).
However, it is questionable whether this ‘trend’ is a reality or rather
a political request from the side of the higher education institutions.
It is true: Germany and the Netherlands have recently introduced
institutional audit-like processes into their national quality assurance
and accreditation systems. Both cases are interesting not least because
these countries were the first to introduce programme accreditation
in Western Europe.1 However, one can also observe developments in
the opposite direction: Denmark has only recently changed from an
audit-like approach to programme accreditation. Another interesting
case is England, where a large debate took place about focusing more on
academic standards than on the internal quality assurance mechanisms
as has been done in the institutional audits conducted by QAA, the
British external quality assurance agency (Brown, 2010); a White Paper
issued by the ministry responsible even discussed programme accredi-
tation approaches (Department of Business, Innovations and Skills,
2011, pp. 66–73). QAA is now about to introduce a so-called ‘risk-based’
approach to quality assurance in England that puts much less emphasis
on the developmental dimension of quality assurance. Sweden, a fore-
runner in terms of quality assurance in the 1990s, is also taking a differ-
ent route by focusing on the assessment of whether or not the intended
learning outcomes have been achieved by more or less leaving aside
internal quality assurance mechanisms in the review.2 If one recalls
developments in the UK since the early 90s, where there was a step-by-step
move from an inspection-like type of programme review towards an
institutional audit, the current development seems surprising. However,
taking into account that Denmark is also about to
change its approach again, towards an institutional and more audit-like
approach, one could get the impression of a confusing situation instead
of a trend in one direction. In conclusion, one can say that the
outcome of an ENQA analysis in 2008 is still valid: diversity and change
are common features of quality assurance in the EHEA.

The purpose of quality assurance: vague and arbitrary?

Based on the findings so far, one can conclude that, apart from the con-
vergence of quality assurance in the EHEA based on the implementation
of the ESG and a greater emphasis on learning outcomes, no clear trends
can be observed at the procedural level, at least not yet. In this chapter
we try to give two explanations as to why this ‘colourful picture’ of
quality assurance should not be surprising, although there are good reasons
for expecting certain trends, as mentioned previously.
The first explanation is based on the well-known ‘European
Implementation Dilemma’ (Serrano-Velarde and Hopbach, 2007, p. 37).
For quality assurance the same is true as for the other reform procedures
within the frame of the Bologna Process: the ministerial agreements are
not legally binding and neither does the EHEA provide a common legal
framework. Hence all reforms have to be ‘translated’ into national and
institutional policy and procedures. This not only means that national
legal frameworks and national political agendas have to be taken into
account, but also, and more fundamentally, national authority
and prerogative. The ESG clearly do so and state: ‘The standards . . . do
not attempt to provide detailed guidance about what should be exam-
ined or how quality assurance activities should be conducted. Those are
matters of national autonomy, although the exchange of information
amongst agencies and authorities is already leading to the emergence
of convergent elements’ (ENQA, 2009a, p. 14). Hence, national frame-
works were and are critical for applying ESG to external quality assur-
ance (ENQA, 2009b, p. 3). Experience indicates that being in accordance
with national quality assurance policies and the priorities of their main
actors is particularly key for the successful implementation of the ESG;
so too is whether they fit into the national legal setting and cultural
traditions.
At first glance, the second explanation might be considered surprising:
a common trend in quality assurance that would go beyond con-
vergence based on the application of the ESG and result even in a
standardised approach across the EHEA or even ‘the’ European quality
assurance system is highly unlikely to emerge because one fundamental
precondition is not fulfilled – a common understanding of the purpose
of quality assurance.
The question ‘What is the purpose of quality assurance?’ is not banal
at all, as a very recent attempt at a definition shows. In the current draft
for the revised ESG the authors state:

At the heart of all quality assurance activities are the twin purposes
of accountability and enhancement. Taken together, these create trust
in the higher education institution’s performance. Quality assurance
and quality enhancement are inter-related and describe a cycle that
allows an HEI to assure itself of the quality of its activities and to take
opportunities for continuous improvement. Such activities support
the development of a quality culture that is embraced by all: from the
students and academic staff to the institutional leadership and man-
agement. . . . The term ‘quality assurance’ is used in this document to
describe all activities within the continuous improvement cycle (that
is, assurance and enhancement activities).3

On the one hand the authors refer to and confirm the well-known or
even ‘traditional’ twin purposes of enhancement and accountability.
Presumably these purposes of quality assurance are widely accepted. On
the other hand, the authors indicate that there might be different or
additional purposes as well.
Indeed, political discussions at the national and European levels in
recent years show that quality assurance procedures have had to accom-
modate a broadening array of purposes which might be considered a
trend of quality assurance itself. In an unpublished paper, the former
Director of the Australian quality assurance agency listed 20 purposes
external quality assurance agencies are asked to fulfil (Hopbach, 2012,
p. 269). The most influential of these discussions are considered in the
following sections.

Quality assurance as transparency tool


Maybe the most influential recent political discussion in this respect
refers to the ‘transparency function’ of external quality assurance.
It is obvious that quality assurance has always served as a means for
providing students, higher education institutions and public authorities
with information about the quality of a certain programme or a certain
institution. However, in many cases the type of information that is
requested changes and so does the purpose of providing information.
Today, information on the quality of programmes and institutions
serves more and more for comparative purposes, rather than for descrip-
tions of single programmes or institutions. This has an impact on the
type and scope of the information. In order to enable the comparison
of programmes or institutions, one needs information which is easy
to compare. This often means quantitative data that can be generated
equally at all institutions. It is obvious that this type of information
differs substantially from the information normally provided by
external quality assurance procedures. Information gathered solely for
comparative purposes doesn’t cover the quality of the programme or
the institution, but is confined to certain aspects of the programme or
institution. The information is rather about performance of a certain
aspect such as completion rates or number of Nobel Prize-winners
among the staff.
Traditionally, quality assurance procedures result in more holistic
approaches to quality, which is obviously a complex issue and contains
not only quantitative measurements but also, and foremost, qualitative
evaluations. Because of this the request for providing the public with
such information is challenging external quality assurance. This chal-
lenge results from the fact that tools like rankings and classifications
serve the purpose of providing information for comparative purposes
much better than external quality assurance procedures. These tools,
and in particular rankings, gain popularity ‘because they are perceived
to provide independent information about the quality and performance
of higher education’ (Hazelkorn, 2009, p. 10). International surveys
indicate that ‘rankings have a profound impact and influence on insti-
tutional behaviour and academic decision-making’ (Hazelkorn, 2009,
p. 14). In public discussions a competition seemed to emerge between
quality assurance and rankings about who will provide the best infor-
mation of this kind. And this challenge might even be a threat because
the information provided by rankings seems to be so easy to digest,
different from the comprehensive and rather complex external quality
assurance reports. It is obvious that league tables are easy to understand
and quick to read, whereas lengthy audit reports require much more
time and contextualisation.
Hence it will be of crucial importance to make the similarities and the
differences between these tools transparent.

Transparency tools such as rankings and classifications and quality
assurance serve different purposes, although they support each other
to a certain extent. It is misleading to consider them as alterna-
tives. . . . Rankings and classifications, by putting performance of
institutions in relation to criteria, contribute to the accountability
function by informing the public, but they do not contribute to
quality enhancement, the second function of quality assurance.
Thus, rankings and classification tools are not to be seen as quality
assurance tools; in particular, they do not provide information about
potentials for the future. (Costes, 2011, p. 12)

Quality assurance as policy evaluation tool


When external quality assurance systems were introduced at the national
level in the 1990s, most of them were assigned a purpose at the system
level which is often forgotten: in particular, accreditation procedures
played a major role as a steering mechanism for higher education systems,
for example to regulate private higher education systems (Austria, Eastern
Europe). This feature can be found in particular in those higher education
systems that are characterised by the growing autonomy of institutions
and equally growing demands on accountability procedures (Van Vught,
1994; Trow, 1996). The regulatory function never vanished, and it has
even gained more attention in recent years when public authorities intro-
duced the success of national reform policies such as student mobility,
gender aspects, equal access to higher education, and so on, as additional
criteria for accrediting programmes or institutions. Although these items
are highly relevant they might be considered not directly linked to the
quality of programmes or institutions. In addition, the results of quality
assurance procedures are often used for purposes which are only indirectly
linked to the ‘traditional’ core purposes of external quality assurance; for
example, when classifications of higher education institutions are based
on quality assurance procedures. Another example is performance-based
funding schemes. In conclusion, quality assurance is not only about qual-
ity in higher education. It has to be considered also as a steering instru-
ment for higher education (Serrano-Velarde and Hopbach, 2007, p. 30).

Quality assurance as developmental tool


There can be no doubt that in many countries (international) com-
petition between and diversity of higher education institutions have
become two of the major features and driving forces of national
higher education systems. Not least among these are performance-
based funding schemes, although competition for students and
teachers, and for third-party funds, are also the basis for the strate-
gies of many higher education institutions. Partly as a consequence
of the competitive higher education market and partly as a response
to diverse expectations from societies, an increasing diversification of
mission, profile and provision of teaching is taking place (see, among
others, Reichert, 2009). Both developments have a significant impact
on quality assurance. Quality of provision becomes a core success
factor for higher education institutions; hence quality is not only
the result of intrinsic motivation of teachers. If this is true, the role
and meaning of quality assurance also changes. Quality assurance
can’t be considered as something ‘extra’ next to the core processes
in higher education. Guaranteeing a certain quality or even enhanc-
ing the quality of a programme becomes an integral part of the
‘regular’ management of higher education institutions; quality assur-
ance processes get directly linked to other management processes.
Hence, higher education institutions expect that quality assurance,
especially external quality assurance, will enhance their provision and
help them reach their full potential. This emphasises the enhance-
ment function of quality assurance rather than the accountability
function.
From this brief snapshot of the most influential political discussions
about demands on quality assurance from different perspectives one
can conclude that the purpose of external quality assurance is either
vague or arbitrary, or at least that it is rather a range of purposes that
seems to be still growing. One has to take into account that today
several stakeholders, in particular political decision-makers, apply a sig-
nificantly different approach to the purpose of quality assurance which
is not at all confined to the ‘traditional’ twin purposes of accountability
and enhancing teaching quality. Combined with the high relevance of
national political and legal contexts it seems natural that no standar-
dised approach to quality assurance was developed in the EHEA. This is
particularly interesting because this discussion or these developments
encounter a situation in which the methodological foundation of
external quality assurance hasn’t changed substantially since the early
1990s, although the procedures were constantly revised and refined.
This obviously leads to the question whether the ‘traditional’ way of
conducting external quality assurance is appropriate to fulfilling the
wide array of purposes.
228 Quality Assurance in Higher Education

Conclusions: are we killing too many birds with one stone?

Observation of the development of external quality assurance in the
European Higher Education Area in recent years and the related political
discussions allow us to draw the following picture.
The implementation of the ESG since 2005 led to a convergence of
external quality assurance procedures insofar as the shared principles
such as primary responsibility of higher education institutions, self-
evaluation, peer review, publication of reports, stakeholder involvement
and independence of agencies were applied to the various existing
approaches including evaluation and accreditation, as well as audits. In
one way or another, the twin purposes of accountability and enhance-
ment formed the basis for the design of the procedures. A steady trend in
which one of these approaches is favoured by political decision-makers
or quality assurance agencies cannot be observed. On the contrary, the
most particular feature of external quality assurance in the EHEA is its
high level of dynamism and variety.
Two reasons can be given for this. Firstly, the design of external quality
assurance regimes is highly dependent on national political agendas
which would normally overrule the non-binding agreements within the
Bologna Process. In any case the ESG are not prescriptive in the sense of
favouring one particular approach.
Secondly, in the past few years one could – and still can – see the
definition of the purpose of quality assurance becoming vague or even
arbitrary. The traditional twin purposes of accountability and quality
enhancement have expanded to a wider array of additional or even
alternative purposes such as policy evaluation, transparency and so
on. The problem is that this discussion is somewhat disconnected from
the discussion about procedural design. In addition, in many cases no
discussion takes place as to whether or not external quality assurance
procedures are appropriate at all to the new or additional purpose.
Since the fundamentals of the current quality assurance procedures
were laid down, the field of external steering mechanisms in higher
education has developed enormously. In particular, reporting schemes
produce a massive amount of data and information about programmes,
institutions and their performances and outcomes. However, the rela-
tionship between these tools and external quality assurance seems to
change only very slowly. One has to ask whether some of the additional
purposes of external quality assurance which have sneaked in might be
better served by some of the other external steering mechanisms.
Both reasons combined have created a situation in which external
quality assurance, although revised and refined in terms of procedural
details, is conducted in more or less the same way as fifteen to twenty
years ago when the procedures were defined for the evaluation of
programmes. However, they have to serve two or more purposes, such
as informing performance-based funding schemes, assessing whether
certain policy agendas were implemented successfully, providing infor-
mation for comparing institutions and so on. Indeed, currently one can
get the impression that we are trying to kill too many birds with one
stone. Hence in future debates about developing quality assurance, the
following basic questions need to be addressed more thoroughly than in
the past: what will be the purpose, or the main purpose, of the quality
assurance process to be developed? What type of procedure is the most
appropriate to serve the defined purpose?

Notes
1. For case studies see Bernhard (2012).
2. See the report on the New Approach to Quality Assurance for 2011–2014:
http://www.hsv.se/download/18.328ff76512e968468bc80004249/1103R-
quality-evaluation-system-2011-2014.pdf (accessed 27 February 2013).
3. Unpublished draft of the revised ESG.

References
Bernhard, A. (2012) Quality Assurance in an International Higher Education Area: A
Case Study Approach and Comparative Analysis (Wiesbaden: Springer).
Brown, R. (2010) ‘The current Brouhaha about standards in England’, Quality in
Higher Education 16(2), 129–38.
Costes, N., Crozier, F., Cullen, P., Griffol, J., Harris, N., Helle, E., Hopbach, A.,
Kekalainen, H., Knezevic, B., Sits, T. and Sohm, K. (2008) Quality Procedures
in the European Higher Education Area and Beyond: Second ENQA Survey (ENQA
Occasional Papers 14) (Helsinki: ENQA).
Costes, N., Hopbach, A., Kekalainen, H., van Ijperen, R. and Walsh, P. (2011)
Quality Assurance and Transparency Tools, http://www.enqa.eu/pubs_
workshop.lasso (accessed 27 February 2013).
Department of Business, Innovations and Skills (ed.) (2011) Higher Education:
Students at the Heart of the System, http://www.hepi.ac.uk/167-1987/Higher-
Education-Students-at-the-Heart-of-the-System-An-Analysis-of-the-Higher-
Education-White-Paper-.html (accessed 27 February 2013).
Education, Audiovisual and Culture Executive Agency (2012) The European Higher
Education Area in 2012: Bologna Process Implementation Report (Brussels: EACEA).
ENQA (2009a) Standards and Guidelines for Quality Assurance in the European Higher
Education Area, 3rd edition (Helsinki: ENQA).
230 Quality Assurance in Higher Education

ENQA (2009b) ENQA position paper on quality assurance in the EHEA in view of the
Leuven meeting of ministers responsible for higher education of 28–29 April 2009,
http://www.enqa.eu/files/ENQA_position_paper%20%283%29.pdf (accessed
27 February 2013).
Harvey, L. (2006) ‘Understanding Quality’, in E. Froment, J. Kohler, L. Purser and
L. Wilson (eds), EUA Bologna Handbook: Making Bologna Work (Berlin: RAABE),
B4.1-1, pp. 2–26.
Hazelkorn, E. (2009) ‘The Emperor Has No Clothes? Rankings and the Shift from
Quality Assurance to World-Class Excellence’, in L. Bollaert, B. Carapinha,
B. Curvale, L. Harvey, E. Helle, H. Toft Jensen, T. Loukkola, B. Maguire,
B. Michalk, O. Oye and A. Sursock (eds), Trends in Quality Assurance:
A Selection of Papers from the 3rd Quality Assurance Forum (Brussels: European
University Association), pp. 10–18.
Hopbach, A. (2012) ‘External Quality Assurance between European Consensus
and National Agendas’, in A. Curaj, P. Scott, L. Vlasceanu and L. Wilson (eds),
European Higher Education at the Crossroads: Between the Bologna Process and
National Reforms, Vol. 1 (Dordrecht: Springer), pp. 267–85.
Kristensen, B. (2010) ‘Has External Quality Assurance Actually Improved Quality
in Higher Education over the Course of 20 Years of the “Quality Revolution”?’
Quality in Higher Education, 16(2), 153–58.
Loukkola, T. (2012) ‘A Snapshot on the Internal Quality Assurance in EHEA’, in
A. Curaj, P. Scott, L. Vlasceanu and L. Wilson (eds), European Higher Education
at the Crossroads: Between the Bologna Process and National Reforms, Vol. 1
(Dordrecht: Springer), pp. 303–16.
Rauhvargers, A., Deane, C. and Pauwels, W. (2009) Bologna Process Stocktaking
Report 2009. Report from working groups appointed by the Bologna Follow-up
Group to the Ministerial Conference in Leuven/Louvain-la-Neuve 28–29 April
2009, Brussels.
Reichert, S. (2009) Institutional Diversity in European Higher Education: Tensions
and Challenges for Policy Makers and Institutional Leaders (Brussels: European
University Association).
Serrano-Velarde, K. and Hopbach, A. (2007) ‘From Transnational Co-operation to
National Implementation’, in Hochschulrektorenkonferenz (ed.), The Quality
Assurance System for Higher Education at European and National Level. Bologna
Seminar Berlin, 15–16 February 2007, pp. 29–62.
Sursock, A. and Smidt, H. (2010) Trends 2010: A Decade of Change in European
Higher Education (Brussels: European University Association).
Trow, M. (1996) ‘Trust, Markets and Accountability in Higher Education’, Higher
Education Policy, 9(4), pp. 309–24.
Van Vught, F. (1994) ‘Intrinsic and Extrinsic Aspects of Quality Assessment in
Higher Education’, in D. Westerheijden, J. Brennan and P.A.M. Maassen (eds),
Changing Contexts of Quality Assessment: Recent Trends in Western European
Higher Education (Utrecht: Lemma B.V.), pp. 31–50.
Westerheijden, D., Beerkens, E., Cremonini, L., Huisman, J., Kehm, B., Kovac,
A., Lazetic, P., McCoshan, A., Mozuraityte, N., Souto-Otero, M., de Weert, E.,
White, J. and Yagci, Y. (2010) The Bologna Process Independent Assessment: The
First Decade of Working on the European Higher Education Area (Utrecht: Cheps,
Incher, Ecotec).
Part V
Conclusion
15
The Swiftly Moving Frontiers of
Quality Assurance
Alberto Amaral and Maria João Rosa

. . . noui consilia et ueteres quaecumque moneatis amici,
‘pone seram, cohibe.’ sed quis custodiet ipsos
custodes, qui nunc lasciuae furta puellae
hac mercede silent? crimen commune tacetur.
prospicit hoc prudens et ab illis incipit uxor.

. . . I am aware of whatever counsels you old friends warn:
‘Bolt her in, constrain her!’ But who will guard
the guardians, who now keep silent the lapses of the loose
girl – paid off in the same coin? The common crime keeps its silence.
A prudent wife looks ahead and starts with them.
(Juvenal or Juvenalis, VI satire)

Generalised lack of trust

The emergence of the evaluative state (Neave, 1998) can be explained
as the consequence of a number of factors, including massification of
higher education, increasing market regulation, the emergence of new
public management and loss of trust in higher education institutions
and their professionals.
Governments have extended their lack of trust from higher education
institutions to quality agencies, provoking the latter into a fren-
zied activity of trying to demonstrate that their activities have a positive
impact on institutions and the quality of teaching they provide.
Neave (2004) initiates one of his inspired and inspiring papers with
the quotation from Juvenal, ‘quis custodiet ipsos custodes?’, which
can be translated as ‘who guards the guardians?’, thus referring to the
problem of governmental control of quality agencies. Juvenal, a Roman
poet who lived probably between the 1st and 2nd centuries AD, used
that phrase in his sixth satire, known as ‘Against the women’ or ‘Against
marriage’. Juvenal commented on the difficulties of ensuring the fidel-
ity of wives, a problem that could not be solved by locking them under
guard because this just raised the problem of guarding the guardians.
Neave uses Juvenal’s quotation as an allegory for the relations between
the Prince (government) and the guardians (quality agencies), which never
satisfied the former, namely when they had close ties with institutions.
This is the obvious case for quality agencies in the Netherlands, Flanders
and Portugal. This lack of satisfaction was not removed, even when
the Prince hired guardians for the guardians (meta-evaluation bodies,
such as the Inspectorate in the Netherlands or the Portuguese National
Council for Evaluation of Higher Education, CNAVES).
The Prince did not fall in with his courtiers and indefinitely stay
the execution of the guardians and meta-evaluators (Neave, 1994).
The cases of Portugal, Flanders and the Netherlands, where the quality
assessment agencies owned by the higher education institutions them-
selves have been replaced by independent accreditation agencies, show
that the Prince knew the quality assessment systems were not offering
a clear picture of the situation; or, as Juvenal argued, we cannot trust
guards because they are ‘keeping silent the lapses of the loose girl’. In
Flanders:

in the second half of the 1990s, criticisms began to be heard about
VLIR quality assurance system. Some policy makers, employers and
journalists questioned the vagueness of the visitation reports and the
lack of a clear overall conclusion. (Van Damme, 2004, p. 144)

And in Portugal:

the final reports . . . very seldom offer clear basis for drastic decisions. . . .
the Minister has publicly complained . . . that the conclusions of the
reports of quality evaluation agencies were quite obscure. (Amaral
and Rosa, 2004, pp. 415–16)

The yearnings of the Prince may also endanger what so far has been
a major characteristic of European quality assurance systems, namely
their autonomy from both governmental and institutional interfer-
ence. Stensaker argues in this volume that that independence may be
considered ‘a potential hindrance for effective national policy agendas’
at a time when the Bologna Process is seeing its glamour becoming
somewhat tarnished. News from the US also indicates an emerging
desire of the federal level to play a more visible role in regulating higher
education by intervention in the accreditation system in order to ensure
increasing institutional accountability, which may strike a parallel with
the European situation (Eaton, present volume).
A 2009 report on progress in quality assurance in higher education,
produced by the European Commission, confirms that there is a cli-
mate of mistrust (European Commission, 2009). The report argues not
only that European quality agencies still need to demonstrate their
independence and professionalism to build trust among stakeholders,
but also that it remains unclear what being accredited in one country, even by
a registered agency, means in another and how the misuse of such an
accreditation could be prevented. The report goes even further, stating
there is some concern that agencies’ membership in ENQA or even
their registration in the EQAR might not generate the necessary level
of mutual trust.
For Neave (2004, p. 30), ‘the creation of new model accreditation agen-
cies added further to the apparatus of verification’, which corresponded
to ‘the replacement of a circle of trust and confidence with a cycle
of suspicion (Jeliaskova, 2001; Van Brueggen et al., 1998)’. Therefore
it is possible to argue that European higher education is in a kind of
schizophrenic situation. On the one hand there is rhetoric promoting
cooperation and trust, including the European dimension and, on the
other hand, quality mechanisms are apparently based on suspicion.
The new agencies in the Netherlands, Portugal and Flanders, includ-
ing those that are replacing agencies owned by higher education insti-
tutions, are accreditation agencies based on independence from both
the Prince and the institutions. Just as the husband in Juvenal’s satire
would never allow his wife to have a say in choosing her guardians,
the Prince seems determined not to fall again into the trap of allowing
evaluated institutions to choose their guardians and determine their
‘shifts’. However, the intervention of the EU in the area of European
accreditation may open another possibility: institutions will be allowed
to choose their guardians from a list of reliable bodies provided by the
European Quality Assurance Register (EQAR).

Permanent change and lack of convergence

A general conclusion from the debates undertaken in this book and the
conference from which it arose is the observation that quality assurance is
a very dynamic process, in permanent evolution and strongly influenced
by political developments. Some would say that it is in a state of
permanent flux.
In his chapter, Hopbach even questions if it is possible to define trends
or directions of change of quality assurance due to their almost random
behaviour, which interferes with the European Commission’s desire for
convergence. He suggests that greater relevance of learning outcomes
is the only trend visible at the procedural level. In some countries qua-
lity assurance is moving from programme accreditation to institutional
audits; in others it is moving the other way round, from audits to
accreditation, making it apparent that national agendas are gaining ground over
the ‘European dimension’. Stensaker suggests in his chapter that one
could argue that the current changes in European external quality
assurance (EQA) are threatening the level of convergence needed to
maintain the European Higher Education Area. New instrumentalities
are being added, such as rankings, measuring learning outcomes and
risk management. New purposes are added to quality assurance beyond
the traditional ones of improvement and accountability. However, there
are two apparent trends in the relationship between higher education
institutions and governments and society: an increasing lack of trust
in institutions and a growing role of markets in public policy (Amaral,
present volume).
It seems that we are now arriving at a moment where national politi-
cal agendas are gaining a new momentum, which may lead to different
approaches to quality assurance in different European countries. This
trend may be seen in the national concerns with learning outcomes,
as previously mentioned, as well as in the development of risk-based
approaches or databases of national indicators, the idea behind them
being to save costs and simplify external quality assurance.
Hopbach lists in his chapter two explanations for the lack of conver-
gence. The first is the ‘European Implementation Dilemma’ (Serrano-Velarde
and Hopbach, 2007, p. 37), a consequence of the use of the ‘open method
of coordination’ (OMC) (Veiga and Amaral, 2009) as an instrument for imple-
menting European policies in areas of national sensitivity such as higher
education. The fact that no formal sanctions are foreseen (Radaelli, 2003;
Trubek and Trubek, 2005) and the use of what is in general a broader
rather than a detailed definition of objectives, leaves ample room for
interpretation and implementation within member states (Kröger, 2004).
OMC is used in areas of national political sensitivity where, unlike the
case of economic policy, convergence is not imperative (Dehousse,
2002, p. 10). Some authors argue that ‘the central aim of coordination
is to encourage national reforms, convergence being seen as a side-effect
rather than as an end in itself’ (Biagi, 2000, p. 159) or ‘most coordination
processes are aimed at initiating or facilitating reforms to be conducted
at the national level’ (Dehousse, 2002, p. 10). Hemerijck considers that
‘The objective is not to achieve common policies, but rather to share
policy experiences and practices’ (Hemerijck, 2002, p. 40).
Hopbach’s second explanation is that there is not a ‘common under-
standing of the purpose of quality assurance’; rather it is becoming
vague or even arbitrary, as new purposes are added to the traditional
ones of accountability and improvement. This idea is echoed in Neave’s
chapter, which argues that ‘different nations attach different priorities
and purposes to quality assurance’ and that there is now a battery of
instruments serving an increasing number of agendas.
Interestingly, the idea of striving for convergence in developing
quality assurance mechanisms for higher education is not only a
European trend. In Latin America, efforts have been made in recent
years to harmonise quality assurance principles and procedures among
a number of different countries (see Lemaitre’s chapter in this book).
It seems that the efforts are now paying off with the development of a
regional community for quality assurance and the building of capacity
at various levels, namely existing and emerging agencies, academic
staff within higher education institutions, reviewers and policy-makers.
Furthermore, national agencies are working together towards the
harmonisation of standards and procedures for the assessment of under-
graduate and postgraduate programmes.

The new instrumentalities

Recent developments in quality assurance reveal an increasing push for
accountability in both the US and Europe. In Europe, the Ministers and
the European Commission, despite a trust-based rhetoric that focuses
on European cooperation around the construction of the European
Higher Education Area (EHEA) and on advancing the European dimen-
sion, are in effect supporting measures based upon an ethic of suspicion
(accreditation systems, rankings and so on), all justified in the name of
transparency. These measures are
far removed from concerns with improving the quality of education
and research. In the US, as described in Eaton’s chapter, there are also
increasing demands for accountability and a stronger federal presence
in higher education. In what follows we analyse some of the new instru-
mentalities in the area of quality assurance.

Rankings
Ranking is one of the new instrumentalities being developed with the
blessing of the European Ministers and the European Commission
(Amaral, present volume). In April 2009, at the Leuven ministerial con-
ference on the Bologna Process, European ministers in charge of higher
education endorsed the use of ‘multidimensional transparency tools’ to
identify and compare the relative strengths of higher education systems
and their institutions (Leuven communiqué, 2009). ‘Multidimensional
transparency tools’ is just a ‘weasel word’ (Amaral and Neave, 2009)
for rankings, and students present at Leuven strongly opposed their
implementation. As described in Ivanova’s chapter, national unions
of students believe classifications of higher education institutions may
become a double-edged sword as they are likely to open the way for
the Saint Matthew effect (For unto every one that hath shall be given,
and he shall have abundance: but from him that hath not shall be
taken away even that which he hath). This theme is also approached in
Westerheijden’s chapter.
His chapter is dedicated to describing the new tools U-Map and
U-Multirank, and to explaining their rationales. These tools aim to
eliminate some of the flaws of traditional ranking systems by compar-
ing only institutions similar in their missions and structures while
addressing a number of their most important dimensions (education,
research, knowledge transfer, and regional and international aspects).
As Westerheijden reports, these tools already appear to be having
a positive influence on the current leaders of the ranking business,
who have adapted their methodologies to try to eliminate some of the
revealed flaws.

Learning outcomes
The OECD has tried to implement a different and innovative approach
based on measuring learning outcomes (Dias and Amaral, present
volume). This ambitious project, very demanding in human and finan-
cial resources, was initiated as an attempt to create a PISA-like system
for higher education. Indeed, the OECD’s capacity to shape expert opinion
may in part be attributed to its regular publication of cross-national
and comparative educational statistics and indicators, since the organi-
sation lacks both financial clout and legislative capacity to coerce its
member countries.
The OECD proposed developing a method ‘to assess learning outcomes
on an international scale by creating measures that would be valid for
all cultures and languages’ (OECD, 2009) and embarked on a feasibility
study to demonstrate that this was possible. However, the study’s objec-
tives were not clearly stated. On the one hand, members of the AHELO
team stated that the purpose of the study was to verify the possibility
of implementing ‘an international assessment of higher education
learning outcomes that would allow comparison between HEIs across
countries’ (Yelland, 2008). On the other hand, the OECD made some
soothing declarations, saying that AHELO was not a university ranking,
and aimed instead to ‘identify and measure as many factors as possi-
ble influencing higher education, with the emphasis being always on
teaching and learning’ (OECD, 2009). These vague statements raised
some suspicion among students and European student unions, who
supported the AHELO initiative only if it did not lead to rankings
similar to PISA (Ivanova, present volume).
The feasibility study is now complete and was presented and publicly
discussed at a conference held in March 2013 in Paris. Unfortunately,
the results of the study were not very positive and a move to the next
implementation phase was postponed. The AHELO methodology is very
complex and expensive, and it was not demonstrated that it would be
feasible to create a system that was valid across all cultures and lan-
guages. To make things more difficult, there were some methodological
problems, and many students decided not to participate, leading to very
low rates of response in a number of significant cases.
Therefore, the OECD has so far not been able to increase its influence
by claiming, first, that the AHELO system for tertiary education
provides a clear comparison of universities, based on the competencies
of graduates, that avoids many of the shortcomings of plain ranking
systems, and, second, by putting pressure on its members to participate in
the exercise, as it did with PISA.

Risk management
Risk management has attracted considerable interest, and is referred to
by a number of contributors to this book (Amaral, Neave, Stensaker and
Hopbach), as well as being the subject of two dedicated chapters (those
by McClaran and Raban). Neave refers to the Swedish system, where a
risk-based approach is used as an ‘alert’ system that allows external exam-
inations to focus only on those institutions that show obvious difficul-
ties (Högskolverket, 2005). This coincides with Stensaker’s opinion that
the risk-based approach is a procedure for identifying programmes or
institutions at risk. Hopbach criticises risk-based approaches for placing
less emphasis upon the developmental dimension of quality assurance.
Amaral stresses the differences in the use of the risk-management approach
between Scotland and the rest of the UK.
McClaran presents a detailed description of the implementation
process of the risk-based system in the UK, which aims to detect risk
situations in programmes and institutions. A ‘Concerns Scheme’ was
implemented to safeguard quality and standards between reviews,
which allowed the QAA to conduct a review immediately whenever
there were concerns about compliance with quality standards (Amaral,
present volume). The White Paper Higher Education: Students at the Heart
of the System (BIS, 2011, p. 37) proposes ‘giving power to students to
hold universities to account’. As McClaran describes in his chapter,
students in the new system will continue to be at the heart of the pro-
cess, participating as members of the external review teams or offering
evidence to review teams through ‘student written submissions’.
Raban’s chapter presents a theoretical approach to risk-management
that draws a clear distinction between the approach used by QAA in the
UK and a risk-management approach fit for institutional purpose. For
Raban, despite ‘The hyperbole that accompanied the launch of QAA’s
risk-based approach . . . it amounts to little more than a system of triage
in which under-performing institutions will be selected for closer scru-
tiny’. Therefore, it is very unlikely that this risk-based approach could
be compatible with quality enhancement of the whole system (Amaral,
present volume). Indeed, the White Paper Higher Education: Students at
the Heart of the System includes a proposal for ‘a genuinely risk-based
approach, focusing QAA effort where it will have most impact and
giving students power to hold universities to account’ (BIS, 2011, p. 37).
On the one hand it is possible that trust in institutions is in danger of
being sacrificed to the aim of appeasing students who were recently
asked to pay a larger contribution to the costs of education. On the
other hand, the risk-based approach may create difficulties in ensuring
that the new system will address quality enhancement of the whole
system, as robust quality assurance procedures will be focused on detecting
and eliminating those cases where quality standards are at risk.
Raban argues in his chapter that the UK approach to risk management
will not support [institutions] in managing what Neave calls the
increasingly risky business of higher education. Raban there-
fore proposes a quite different approach:

Although risk-based regulation reinforces a move in a market direction
without weakening the accountability of universities to the State, a
risk management approach to internal quality assurance would imply
some element of trust – an empowerment of front line staff and a
relationship of mutual accountability between front line staff and
their managers. (Raban, present volume)

Raban also mentions two essential cultural requirements for success-
ful risk management, the first being openness to challenge and the
second being acceptance of failure. He quotes from the report of the
EUA project, Quality Assurance for the Higher Education Change Agenda
(QAHECA): ‘external quality assurance should aim at checking if an HEI
is capable of reacting to abnormal circumstances rather than sanction-
ing occasional failures’. As mentioned by Amaral in his chapter, the
Scottish Quality Enhancement Framework follows this latter approach
by recognising that enhancement is the result of change and innova-
tion, with inherent risks that institutions should manage in order to
protect their students.

Quality enhancement
So far, only the quality enhancement approach seems to be trying to
restore trust in institutions, although it is not clear if it will succeed.
In the UK, Australia, New Zealand and the United States, experiments
with this approach may be seen as universities seeking to regain trust
by reasserting that quality remains their major responsibility whilst
the role of external agencies should be confined to quality audit. From
this perspective, quality enhancement repatriates responsibility for the
quality of learning processes to the institution. External oversight
may thus come to rely more on institutional audit and less on such intru-
sive forms of quality assessment as programme-level accreditation,
endorsing a flexible, negotiated model of evaluation that by definition
is non-mandatory, and is shaped by those participating in the acts of
teaching and learning.
The theme of quality enhancement has also attracted the attention of a
large number of authors (see the chapters by Amaral, Neave, McClaran,
Raban, Stensaker, Rosa, Ivanova and Hopbach) and the book includes
one chapter dedicated to this subject by Saunders. Neave sees quality
enhancement as a third phase in the evolution of the evaluative state,
increasing accountability to students. Rosa argues that Portuguese
academics show a preference for the characteristics of quality enhance-
ment, rather than for accreditation systems or rankings, an observation
that is corroborated by other authors (Laughton, 2003; Newton, 2000;
Saunders, present volume). Raban holds a rather sceptical view on the
virtuous nature of quality enhancement, questioning whether it reflects
a more ‘trusting’ mode of engagement between quality agencies and
institutions and suggesting that even in Scotland, enhancement tends to
be treated as an adjunct to conventional forms of quality assurance, with
some emphasis on accountability. Neave also mentions that one may see
quality enhancement as a sub-set of or an ‘add on’ to quality assurance.
In his chapter Saunders presents a detailed and critical view of the
Scottish quality enhancement experience. He mentions that there is a
sense of ownership among the higher education ‘community’, or at least
among senior education managers, with the Scottish approach including
components of the three possible dimensions of enhancement: incre-
mentalism, innovative incrementalism and transformation. Saunders
argues that the Quality Enhancement Framework (QEF) highlighted that
the purpose of quality systems in higher education is to improve students’
experiences, including their learning, while promoting a theory of educa-
tional change that gives more emphasis to consensual approaches than
to the more coercive methods ingrained in some quality agencies.
Saunders presents QEF as both high (prescriptive) and low (open)
in fidelity. Low fidelity allows institutions to move from assurance to
enhancement and to adapt this shift to express their own institutional
cultures and systems. High fidelity was present in the definition of
those themes that formed the main emphasis across the sector irrespec-
tive of institutional priorities, consequently confirming the view in
Raban’s chapter that institutions are expected to address Agency defined
‘enhancement themes’. Saunders adds that ‘the lower the policy is in
fidelity, the better it travels and is reconstructed – or ‘translated’. . . –
on the ground and is capable of being adapted to the “real politique”
of situated change’. This also means that under those conditions it is
difficult to impose standardisation or control over the details of changes
across a whole system.

The stakeholder model of quality

Hopbach’s chapter refers to the stakeholder model of quality (Harvey,
2006), which was promoted by the European Standards and Guidelines
(ESG). The stakeholder model recognises the diverse approaches of rele-
vant stakeholders – including academics, students and quality agencies –
to the nature of quality in higher education.

The academic constituency


Rosa’s chapter presents the academic constituency and the factors promoting
its resistance or adherence to quality assurance. Empirical results from
Portuguese academics show that in general they support the improvement
purpose of quality assurance, preferring a formative type of quality
assessment while opposing a connection between the results of assess-
ment and sanctioning mechanisms, or the use of quality mechanisms
as control tools. They see the control purpose as endangering academic
autonomy and potentially hindering innovation. Raban’s chapter also
mentions that academics tend to resist the requirements of any quality
assurance system that they regard as an instrument of management
control.

The student estate


Ivanova’s chapter presents the views of students on recent developments
in quality assurance in Europe. Students are not supportive of more flex-
ible and lean quality assurance processes, and favour the accreditation
of individual study programmes rather than institutional audits. They
are also critical of ranking systems and of the slow pace of implementation
of the ESG, and they are dissatisfied with the difficulties that remain
for the participation of students in quality assurance processes,
namely in relation to external assessment activities. Other authors also
strongly support a greater presence of students in quality assurance,
as foreseen in the ESG and reinforced by the movement towards
Quality Enhancement, defined as a means of improving the quality of
learning opportunities and thus of increasing accountability
towards students. In his chapter McClaran stresses the promotion of
the role of students in quality assurance and enhancement activities.
Hopbach argues that student involvement is critical in all phases and
that students are increasingly moving into focus. Raban considers that
students should contribute to risk management as responsible members
of an institution, and not as its customers. Finally, Saunders describes
the greater involvement of students in quality activities promoted by
the Scottish system.

Quality agencies
Quality agencies are an important stakeholder, and in Europe their
representative organisation has played a relevant political role, namely
in defining the ESG and in setting up EQAR. Hopbach’s chapter refers to
the fact that quality assurance procedures have remained more or less
stable over the last few decades, although a number of different purposes
have been added. This raises the question of what the main purpose of
the quality assurance process should be in the future and which
procedure is most appropriate for it. This question is also addressed
in Lemaitre’s chapter, which refers to the need for updated and revised
standards, procedures and practices of quality assurance agencies, and is
echoed by those of Stensaker and Eaton, who stress the need for innova-
tion in external quality assurance and accreditation.
Agencies also have to deal with the problem of declining trust in the
positive and effective impact of their activities. This is a problem clearly
raised in Eaton’s chapter, which considers the dangers of increased
government control and the threat to the core values that accom-
pany accreditation resulting from the current focus on the utilitarian
in higher education. This recalls Cardinal Newman (1996), who was
fiercely against a utilitarian concept of higher education that ignored
its true and adequate objectives, including intel-
lectual training and the development of a pure and clear atmosphere
of thought.

Quality assurance outside Europe

Finally we cannot ignore what is going on outside Europe. Eaton’s
chapter describes recent trends in US accreditation, including increas-
ing demands for accountability, a stronger government presence, the
advance of a utilitarian approach to higher education and a diminishing
faith in traditional core values of higher education and its accreditation
system. Eaton also refers to the emerging phenomenon of massive open
online courses (MOOCs), whose consequences are still unforeseeable, as
well as the influence of international developments, namely those related
to the Bologna Process in Europe. Eaton then makes proposals to ensure
a more desirable future for higher education, avoiding what will be the
likely future for US accreditation if no pre-emptive measures are taken.
Lemaitre’s chapter refers to the trends and challenges of higher
education in Latin America. In this region the major trends of higher
education systems are not very different from those in other areas of
the world: expansion of tertiary education systems, diversification of
provision, more heterogeneous student bodies, new funding arrange-
ments, increasing focus on accountability and performance, new forms of
institutional governance, global networking, mobility and collaboration.
As in Europe and in the US, there has been a decline in the public credibility
of higher education as a consequence of massification and the emergence
of a much more heterogeneous provision of higher education: new
programmes, new providers, new governance strategies and so on. Latin
America has reacted to the new challenges in terms of quality at three
levels: national quality assurance systems, sub-regional arrangements
and a regional network. In general there is agreement about the positive
impact of these systems, although there were also some critical remarks
such as the use of homogeneous quality standards where there are very
different types of institutions, the risk of an increasingly bureaucratic
and formal approach, and complaints about biases, lack of academic
judgement or inconsistent reports of external reviews.

The future: challenging issues and unsolved questions

Besides these recent developments, which frame this book by establishing
the context for the contemporary debates on quality assurance and
higher education, other issues that have been touched upon deserve
to be mentioned. One is the fact that quality assurance continues, to a
certain extent and despite all changes and improvements, to be incapa-
ble of effectively addressing the core elements of academic endeavour,
that is, knowledge creation and student learning (Harvey and Newton,
2006). More emphasis is now being placed on the core of the quality
problem, in an attempt to address these two elements more effectively. This
is especially evident in the OECD’s initiative for assessing learning
outcomes, but not only there. The text of the ESG also subscribes to
the view that student assessment procedures should determine how far
intended learning outcomes and other programme objectives have been
met. And in some European countries efforts are being made to link
qualification frameworks to existing external quality assurance systems,
the idea behind them being to use learning outcomes to assess, evaluate and
accredit higher education institutions. But while the OECD initiative
seems to have been suspended for the moment (Dias and Amaral, present
volume), the remaining approaches, although they emphasise learning
outcomes, seem somewhat reductionist in terms of a true quality man-
agement paradigm, since they tend to take into account only the final
output (students’ work), leaving aside the production process and the
inputs it requires.
Another interesting issue is the basic idea that the ultimate respon-
sibility for quality rests with higher education institutions. This
has led these institutions to work on the development of their own
internal quality assurance (or should we say management?) systems.
These systems should not stand apart from, but rather be embedded in, the
institutions’ regular governance and management approaches. In this
context the role of external quality assurance agencies may change, and a
consultancy activity may well be added to their normal functions of
quality assessment, accreditation and/or audit (Stensaker, present volume).
Although procedures for quality assurance have been constantly
revised and refined in the last decades, the methodological foundation
of external quality assurance has not changed substantially since the
1990s. Improvement and accountability have always been the basis
for procedural design. But these two traditional purposes seem to have
been expanded to include additional or even alternative ones, such as transparency,
communication or policy evaluation, and it is not clear how far the
present procedures are adequate to serve them (Hopbach, present
volume). Nor is it clear whether other external steering mechanisms,
such as traditional rankings, the new transparency tools (U-Map and
U-Multirank) or database repositories of indicators, are better fitted to
address them. External quality assurance, whether quality assessment,
accreditation and/or institutional/academic audits, may indeed become
less likely to be the dominant method of judging academic quality, at
the risk of becoming merely one amongst a number of voices judging
quality (Eaton, present volume).
Management fads in higher education appear to follow the cycle
of educational innovations in general: ‘early enthusiasm, widespread
dissemination, subsequent disappointment, and eventual decline’
(Birnbaum, 2001, p. 5). Therefore:

the lesson that we might draw is that anyone wishing to import into
the academic domain a commercially derived approach to quality
management must respect the sensitivities of staff and the reali-
ties of university life if this approach is to have an impact beyond
those parts of our institutions that are responsible for their corporate
functions. (Raban et al., 2005, p. 54)

That is why both ‘trust – building on staff’s professional values and their
aspirations – and dialogic accountability are themselves preconditions
for enhancement, risk assessment and the effective local management
of risk’ (Raban et al., 2005, p. 50).
Martin Trow argues that claims to professional and personal respon-
sibility ‘were the basis on which academics in elite colleges and
universities in all countries formerly escaped most formal external
accountability for their work as teachers and scholars’ (Trow, 1996).
However, universities have rested too long on claims to the special
character of their ‘unique’ attributes while their environment has changed
dramatically. Today society no longer understands university attributes
such as ‘academic freedom, the teaching and modelling of civic commu-
nities marked by civil discourse, dispassionate enquiry and community
service’ (Futures Project, 2001, p. 11). Society is no longer prepared to
accept that academics are particularly able to run their complex insti-
tutions and is instead suggesting that private sector managers may do
a better job. Universities have failed to make politicians understand
their case. As Lamar Alexander, US Republican Senator for Tennessee,
explains apropos of the recent debates on the 2007 re-authorisation of
the American Higher Education Act:

Congress simply doesn’t understand the importance of autonomy,
excellence and choice, and the higher education community hasn’t
bothered to explain it in plain English to members who need to hear
it and understand it. (Alexander, 2008)

To regain trust, universities must convince society that they deserve it
by self-imposing ‘measures of quality, commitments to insuring access
and greater transparency about financing’ (Futures Project, 2001, p. 10)
and by ‘developing outcomes-based approaches to judging institu-
tional effectiveness’ (Eaton, 2007) in answer to increasing demands for
accountability.
In Europe universities are not completely alone in this process.
Although international organisations are in general strong suppor-
ters of market values and economic concerns, there are some political
organisations that strongly support ideas based on values that every
academic would like to defend (Amaral, 2008). One such organisation is
the Council of Europe, which has produced two important documents,
one on Public Responsibility for Higher Education and Research (Weber
and Bergan, 2005), the other on Higher Education Governance (Kohler
et al., 2006). These documents promote two fundamental ideas: that
governance should avoid micromanagement, leaving reasonable scope
for innovation and flexibility, and that quality assessment mechanisms
should be built on trust and give due regard to internal quality develop-
ment processes.
The big question is whether universities will succeed under the
present context of suspicion. As Trow reminds us, ‘Trust cannot be
demanded but must be freely given. In Trollope’s novels, a gentleman
who demands to be treated as a gentleman is almost certainly no gentle-
man’ (1996, p. 10).
But there is also a lack of trust in those agencies responsible for
national quality assurance systems. The recent failure of credit rating
agencies that triggered the worst financial crisis in decades, when they
were forced to downgrade the inflated ratings they gave to complex
mortgage-backed securities, has created a general feeling of mistrust in
regulators.
Not being seers we cannot predict the future. Some of us have offered
diverse alternatives for the future of quality assurance. However, the
present context is one of turmoil and uncertainty. A wise course will
be to observe closely and continuously the evolution of developments
such as rankings, learning outcomes, quality enhancement and
risk management. As mentioned in
Lemaitre’s chapter, higher education is a dynamic system – it cannot be
served well by quality assurance processes that are not prepared to learn
(and to unlearn), to adapt and adjust to the changing needs of students,
institutions and society.

References
Alexander, L. (2008) United States Senate, Press release, 30 January, 2008.
Amaral, A. (2008) Quality Assurance: Role, legitimacy, responsibilities and means
of public authorities. In Weber, L. and Dolgova-Dreyer (eds), The legitimacy
of quality assurance in higher education (Strasbourg: Council of Europe Higher
Education Series No. 9), pp. 31–47.
Amaral, A. and Neave, G. (2009) ‘On Bologna, Weasels and Creeping
Competence’, in A. Amaral, G. Neave, C. Musselin and P.A.M. Maassen (eds),
European Integration and Governance of Higher Education and Research (Dordrecht:
Springer), pp. 281–99.
Amaral, A. and Rosa, M.J. (2004) ‘Portugal: Professional and Academic
Accreditation – The Impossible Marriage?’, in Schwarz, S. and Westerheijden,
D. (eds), Accreditation and Evaluation in the European Higher Education Area
(Dordrecht: Kluwer Academic Press), pp. 127–57.
Biagi, M. (2000) ‘The Impact of European Employment Strategy on the Role
of Labour Law and Industrial Relations’, International Journal of Comparative
Labour Law and Industrial Relations, 16(2), 155–73.
Birnbaum, R. (2001) Management Fads in Higher Education (San Francisco:
Jossey-Bass).
BIS (2011) Higher Education. Students at the Heart of the System (London: Stationery
Office).
Dehousse, R. (2002) ‘The Open Method of Coordination: A New Policy
Paradigm?’ Paper presented at the First Pan-European Conference on European
Union Politics, The Politics of European Integration: Academic Acquis and Future
Challenges, Bordeaux, 26–28 September 2002.
Eaton, J.S. (2007) ‘Institutions, accreditors, and the federal government:
Redefining their appropriate relationship’, Change, Vol. 39(5), 16–23.
European Commission (2009) Report on Progress in Quality Assurance in Higher
Education (COM (2009) 487 final) (Brussels: European Commission).
Futures Project (2001) Final Report of ‘Privileges Lost, Responsibilities Gained:
Reconstructing Higher Education’, A Global Symposium in the Future of Higher
Education, New York, Columbia University Teachers College, 14–15 June.
Harvey, L. (2006) ‘Understanding Quality’, in E. Froment, J. Kohler, L. Purser and
L. Wilson (eds), EUA Bologna Handbook: Making Bologna Work, Berlin, B 4.1.1.
Harvey, L. and Newton, J. (2006) ‘Transforming Quality Evaluation: Moving
On’, in D. Westerheijden, B. Stensaker and M.J. Rosa (eds), Quality Assurance
in Higher Education: Trends in Regulation, Translation and Transformation
(Dordrecht: Springer), pp. 225–46.
Hemerijck, A. (2002) ‘The Self-transformation of the European Social Model(s)’,
http://www.eucenter.wisc.edu/OMC/Papers/Hemerijck.pdf (accessed
22 September 2013).
Högskolverket (2005) The Evaluation Activities of the National Agency for Higher
Education in Sweden. Final Report by the International Advisory Board. 38R.
(Stockholm: Högskolverket).
Jeliaskova, M. (2001) ‘Running the Maze: Interpreting External Reviews
Recommendations’, Quality in Higher Education, 8(1), 89–96.
Kohler, J., Huber, J. and Bergan, S. (2006) Higher Education Governance between
Democratic Culture, Academic Aspirations and Market Forces (Strasbourg: Council
of Europe Higher Education Series No. 5).
Kröger, S. (2004) ‘Let’s Talk about It – Theorizing the OMC (Inclusion) in Light
of Its Real Life Application’. Paper presented at the doctoral meeting of the
Jean Monnet chair of the Institut d’Études Politiques in Paris, section ‘Public
Policies in Europe’, Paris, 11 June.
Laughton, D. (2003) ‘Why Was the QAA Approach to Teaching Quality
Assessment Rejected by Academics in UK HE?’, Assessment and Evaluation in
Higher Education, 28(3), 309–21.
Neave, G. (1994) ‘The Policies of Quality: Development in higher education in
Western Europe 1992–1994’, European Journal of Education, 29(2), 115–34.
Neave, G. (1998) ‘The Evaluative State Reconsidered’, European Journal of
Education, 33(3), 265–84.
Neave, G. (2004), ‘The Bologna Process and the Evaluative State: A Viticultural
Parable’, in Managerialism and Evaluation in Higher Education, UNESCO Forum
Occasional Paper Series, no. 7, Paris, November 2004 (ED-2006/WS/47),
pp. 11–34.
Newman, J.H. (1996) in F. Turner (ed.), The Idea of the University, Defined and
Illustrated, Rethinking the Western Tradition (New Haven: Yale University Press).
Newton, J. (2000) ‘Feeding the Beast or Improving Quality? Academics’
Perceptions of Quality Assurance and Quality Monitoring’, Quality in Higher
Education, 6(2), 153–62.
OECD (2009) Assessment of Higher Education Learning Outcomes (Paris: OECD).
Raban, C., Gower, B., Martin, J., Stoney, C., Timms, D., Tinsley, R. and
Turner, E. (2005) ‘Risk Management Report’, http://www.edgehill.ac.uk/aqdu/
files/2012/08/QualityRiskManagementReport.pdf (accessed 22 September
2013).
Radaelli, C. (2003) The Open Method of Coordination: A New Governance Architecture
for the European Union? (Stockholm: The Swedish Institute for European Policy
Studies).
Serrano-Velarde, K. and Hopbach, A. (2007) ‘From Transnational Co-operation to
National Implementation’, in Hochschulrektorenkonferenz (ed.), The Quality
Assurance System for Higher Education at European and National Level. Bologna
Seminar Berlin, 15–16 February 2007, pp. 29–62.
Trow, M. (1996) ‘Trust, Markets and Accountability in Higher Education:
A Comparative Perspective’, Higher Education Policy, 9(4), 309–24.
Trubek, D.M. and Trubek, L.G. (2005) ‘Hard and Soft Law in the Construction of
Social Europe: The Role of the Open Method of Co-ordination’. European Law
Journal, 11(3), 343–64.
Van Brueggen, J.C., Scheele, J. and Westerheijden, D. (1998) ‘To be Continued . . .
Synthesis and Trends’, in J. Scheele, P. Maassen and D. Westerheijden (eds),
To Be Continued: Follow Up of Quality Assurance in Higher Education (Maassen:
Elsevier/de Tijdstroom), pp. 87–99.
Van Damme (2004) ‘Quality Assurance and Accreditation in the Flemish
Community of Belgium’, in S. Schwarz and D. Westerheijden (eds), Accreditation
and Evaluation in the European Higher Education Area (Dordrecht: Kluwer
Academic Press), pp. 127–57.
Veiga, A. and Amaral, A. (2009) ‘Policy Implementation Tools and European
Governance’, in A. Amaral, G. Neave, C. Musselin and P.A.M. Maassen (eds),
European Integration and Governance of Higher Education and Research (Dordrecht:
Springer), pp. 127–51.
Weber, L. and Bergan, S. (eds) (2005) The Public Responsibility for Higher Education
and Research (Strasbourg: Council of Europe Higher Education Series No. 2).
Yelland, R. (2008) ‘OECD Initiative on Assessing Higher Education Learning
Outcomes (AHELO)’, International Association of Universities Conference,
Utrecht, 18 July.