

To be cited as: Leiber, T., Stensaker, B. & Harvey, L. (2018) "Bridging theory and practice of impact evaluation of quality management in higher education institutions: a SWOT analysis." European Journal of Higher Education 8(3), pp. 351-365. DOI: 10.1080/21568235.2018.1474782

Bridging theory and practice of impact evaluation of quality management in higher education institutions: a SWOT analysis

Theodor Leiber

Evaluationsagentur Baden-Württemberg, M7 9a-10, 68161 Mannheim, Germany, leiber@evalag.de

Theodor Leiber is Scientific Advisor and higher education researcher with Evaluation Agency
Baden-Wuerttemberg, Mannheim (Germany) and Associate Professor of Philosophy at
University of Augsburg (Germany). He earned doctoral degrees in Theoretical Physics and
Philosophy. His research focuses on evaluation, impact studies, performance measurement and
governance, quality management and organisational development in higher education as well as
philosophy of science and ethics.

Bjørn Stensaker

Department of Education, University of Oslo, Sem Sælands vei 7, Helga Engs hus 0317
Oslo, Norway, bjorn.stensaker@iped.uio.no

Bjørn Stensaker is Professor at the Department of Education and at the Centre for Learning and
Education at the University of Oslo. He is also a Research Professor at the Nordic Institute for
Studies in Innovation, Research and Education (NIFU). His professional expertise is in institutional
theory, qualitative methods, evaluation theory and evaluation research, organisational theory
and analysis. He is the President of The European Higher Education Society (EAIR).

Lee Harvey

Birmingham, United Kingdom, leecolinharvey@gmail.com

Lee Harvey is Professor Emeritus and former Director of the Centre for Research and
Evaluation at Sheffield Hallam University (UK). He has been involved in researching higher
education policy since the late 1980s and is an acknowledged international expert, inter alia, on
issues of quality, quality assurance and its impact, employability, and student feedback. Lee is
the editor of the international journal Quality in Higher Education.


Abstract

The last two decades have witnessed an increasing intensity of quality management in higher education institutions and of quality discourses, which were followed by debates about, and attempts to evaluate, the efficacy of quality management in the sector. Accordingly, the article presents a SWOT analysis of impact evaluation of quality management in higher education institutions. The analysis is based on a contemporary SWOT conceptualisation and on reflections on impact evaluation, ranging from theoretical models through case studies to practical experience in a European research project. The analysis reveals that certain weaknesses can be overcome (e.g. budget and process time restrictions) while others cannot (e.g. systematic limitations of methodologies). Similarly, certain threats can be tackled (e.g. proper implementation of methodologies) while others can at most be approximately solved (e.g. the attribution problem). The article concludes that a SWOT analysis may be a tool for bridging the gap between methodological challenges and the implementation of impact measurement in systematic quality management.

Keywords: before-after comparison; ex-post analysis; higher education; impact evaluation; quality management; SWOT analysis

Introduction

The last two decades have seen an ever-increasing number and variety of models, procedures and instruments of quality management and quality assurance (QA) in higher education institutions (HEIs). Accompanying this has been an intensive and continuous discourse about higher education (HE) quality, including "progress, pitfalls and promise" (Beerkens 2018) for the future of quality as well as QA (cf. Brennan 2012; Harvey and Green 1993; Harvey and Williams 2010a; 2010b; Newton 2013; Rosa, Sarrico, and Amaral 2012; Stensaker 2008). Because of this, and as expected, impact evaluation of QA in HEIs has played (and still plays) an increasingly important, though critically reflected, role (cf. Harvey and Williams 2010b, 102; Liu, Tan, and Meng 2015; Newton 2013, 9, 11, 13; Shah 2012, 761, 770; Stensaker et al. 2011; Suchanek et al. 2012). A recent European project attempted to take a step forward and overcome the restriction of previous analyses to ex-post scenarios and to the inclusion of only selected QA stakeholders (mostly peer experts) (IMPALA 2016). This approach focused on longitudinal case studies and also included relevant stakeholder groups other than peer experts. Accordingly, the impact evaluation methodology centred on a before-after comparison design. Theoretical approaches and interpretations, methodologies and empirical results related to the four case studies of this project have been presented and discussed in several publications (see e.g., Bejan et al. 2015; 2018; Damian, Grifoll, and Rigbers 2015; ICP 2016; Jurvelin, Kajaste, and Malinen 2018; Kajaste, Prades, and Scheuthle 2015; Leiber 2016; 2018; Leiber, Moutafidou, and Welker 2018; Leiber, Prades, and Álvarez 2018; Leiber, Stensaker, and Harvey 2015).

In view of these recent efforts to further contribute to impact evaluation of QA in HEIs with European case studies and methodological clarifications and improvements, this article is intended as a meta-analysis of how impact evaluation of QA can be better facilitated, and provides a roadmap as to how such facilitation can be accomplished. The latter is done by means of a SWOT (strengths, weaknesses, opportunities and threats) analysis (Piercy and Giles 1989; Helms and Nixon 2010), which, through a comprehensive assessment of the methodologies and results of the above-mentioned European project on impact evaluation of QA in HEIs, can inform good practice as to how impact evaluation of QA in HEIs in general can be stimulated. The article is therefore organised as follows: firstly, the methodology of SWOT analysis is characterised, including the Telescopic Observations Matrix (TOM), which is prepared for SWOT analysis of impact evaluation methodology, and the strategy matrix for strengths, weaknesses, opportunities and threats identified by TOM. Secondly, strengths, weaknesses, opportunities and threats of impact evaluation methodology are presented and discussed. The article closes with considerations about the present and future of impact evaluation of quality management in HEIs.

The relevance of SWOT analysis in quality assurance: a means to close the gap between theory and practice

A SWOT (strengths, weaknesses, opportunities and threats) analysis identifies the internal and external factors that can affect the investigated process or structure, which, in the present case, is impact evaluation of quality management in complex social organisations, in particular HEIs. The field of QA can be argued to be very relevant for the application of a SWOT analysis due to the many actors involved, both in the higher education sector in general and inside HEIs in particular, making the design and implementation of QA a very complex issue. A SWOT analysis may help to reduce this complexity and provide a more simplified picture of the actors and dilemmas involved and the key choices to make.

The strengths and weaknesses of the examined process or structure are, by definition, primarily internal factors, while the threats and opportunities are primarily external factors which exist independently of the analysed issue (Figure). The main goal of a SWOT analysis is to establish a systematic assessment of the issue which supports decision-making related to its strategic dimensions. In the present case this amounts to assessing the theory, methodology and practical application of impact evaluation of QA in HEIs by identifying corresponding strengths, weaknesses, opportunities and threats.

Figure: about here

The core utility of a SWOT analysis is to help to build on strengths, minimise weaknesses, seize opportunities and counteract threats. Thus, a SWOT analysis is often part of strategic planning by informing strategic decisions, but it does not necessarily or automatically offer solutions. In other words:

A SWOT analysis is a structured assessment method which evaluates the strengths (S), weaknesses (W), opportunities (O) and threats (T) involved in a process or a structure in the most general sense of these terms. A SWOT analysis involves specifying the objectives of the process or structure, identifying the internal and external influences with regard to the degree of achievement of these objectives and, finally the core element, characterising the strengths, weaknesses, opportunities and threats of the process or structure under scrutiny. In general, a SWOT analysis can help developing the assessed entities for further rounds of improved goal achievement, and it usually has an exploratory dimension bringing to the fore aspects which have not been noticed by other means of analysis. This exploratory force originates from the requirement to identify and distinguish explicitly the four different categorisation dimensions of processes or structures. (Leiber 2017, 290)

When carrying out a SWOT analysis, its benefits as well as its limitations should be clarified. Among the former are that conducting a SWOT analysis has little cost and that it concentrates on the most important factors affecting the investigated issue. However, a SWOT analysis cannot replace more in-depth research and analysis, and its execution becomes complicated if factors are uncertain or two-sided with respect to the four factor types of strengths, weaknesses, opportunities and threats; in particular, the boundaries between the classes are often fluid, ambiguous or fuzzy. Further limitations are that a SWOT analysis may not prioritise issues; may not be empirically validated; may use unclear and ambiguous words and phrases; may not provide solutions or offer alternative decisions; may generate too many ideas without helping to choose the best one; may produce a lot of information, not all of it useful; and may lack links to an implementation phase (Bell and Rochford 2016; Helms and Nixon 2010, 234ff.; Hill and Westbrook 1997; Panagiotou and van Wijnen 2005, 157f.).

Not least because of these criticisms, several modified frameworks have been devised to improve SWOT, generate valuable outcomes and better support the planning process. One of these is the Telescopic Observations (TO) framework, which was first conceptualised in 1999. According to George Panagiotou,

The TO strategic framework invites decision makers to be more systematic and coherent in their organisational environmental appraisal, in relation to current available methods, by being more inclusive and directing focus on the important areas that need to be addressed. […] The framework consists of two matrices […] and works like a funnel, where information is gathered and filtered out by the user according to needs and requirements. (Panagiotou 2003, 9)

In the first step, the Telescopic Observations Matrix (TOM) is established, in which the strengths, weaknesses, opportunities and threats are gathered and ordered according to the relevant areas of the SWOT analysis issue (Table 1). In the case of a business company such areas may be Technological Advancements (T), Economic Considerations (E), Legal and Regulatory Requirements (L), Ecological and Environmental Issues (E), etc. (Panagiotou and van Wijnen 2005, 162-163). The success of the whole approach crucially depends on the identification of the relevant areas of the SWOT analysis issue, which set up the TOM columns.

Table 1: about here

Based on an earlier study (Leiber, Stensaker, and Harvey 2015, 296ff.), the TOM areas for the present case of SWOT analysis of impact evaluation methodology are chosen to be before-after comparison and ex-post analysis, with their respective sub-specifications of assessment by participants, analysis of documents and data, counterfactual self-estimation and causal social mechanisms (Table 2).

Table 2: about here

It is implicit in Table 2 that, due to the high diversity and dynamic nature of HEIs, experimental and quasi-experimental control group approaches are not feasible for impact evaluation of QA in HEIs (ICP 2016, 12). Furthermore, it is also worth mentioning that the list of TOM areas in Table 2 is not necessarily complete. However, the TOM area "Analysis of documents and data" may virtually contain almost any possible methodological variant, however less rigorous or however much it amalgamates various information sources (such as, for example, more narrative approaches or persuasive policy rhetoric, see e.g. Beerkens 2018), because any efficacy analysis, even in its softest form, cannot avoid referring to some data and documents. Moreover, the TOM area "Participants' assessments" may, for example, also comprise approaches such as Karl E. Weick's 'sense-making' perspective, which understands efficacy as a cognitive and social construction and ascription by the actors (Ditzel 2017); conceptualisations of 'perceived effectiveness' (Seyfried and Pohlenz 2018; Rosa, Sarrico, and Amaral 2012); or 'participatory' and 'case-based approaches' (Gates and Dyson 2017, 39) as they were applied in the already mentioned European project on impact evaluation of QA in HEIs, whose empirical results are presented in this special issue (Bejan et al. 2018; Jurvelin, Kajaste, and Malinen 2018; Leiber, Moutafidou, and Welker 2018; Leiber, Prades, and Álvarez 2018).

In a second step, once the TOM has been established, the identified strengths, weaknesses, opportunities and threats of each TOM area can be filled into a TOM area-specific strategy matrix, which aims at utilising strengths to overcome weaknesses, exploit opportunities and avoid threats (Table 3). Obviously, the SWOT items to be filled into Table 3 must be subjected to careful consideration and reflection based on their comparative importance. In other words, the strengths, weaknesses, opportunities and threats identified as in Table 2 should be prioritised according to importance and relevant quality requirements (e.g. organisational efficiency; methodological potency). The strengths, weaknesses, opportunities and threats ranked in this way are then transferred to the strategy matrix, where the ranked strengths are used to formulate strategies to overcome weaknesses, take advantage of opportunities, and avoid threats (Table 3).

Table 3: about here

The remainder of the article works out the TOMs for the different impact evaluation methodologies given in Table 2, and develops the corresponding TOM area-specific strategy matrices following the example of Table 3.
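
To make the two-step TO procedure more concrete, the following minimal Python sketch represents a TOM as a mapping from areas to SWOT item lists and derives the skeleton of an area-specific strategy matrix. The data structures and names (tom, strategy_matrix) are illustrative assumptions, not part of the cited frameworks; the example cell anticipates Table 4b below.

    # Illustrative sketch of the two-step TO procedure (hypothetical data
    # structures, not part of Panagiotou's framework). Step 1 gathers SWOT
    # items per TOM area; step 2 pairs each strength with the weaknesses,
    # opportunities and threats it is meant to address.

    # Step 1: TOM -- SWOT items gathered for one TOM area.
    tom = {
        "before-after comparison": {
            "strengths": ["S1: avoids complete dependence on ex-post data"],
            "weaknesses": ["W1: no explicit causal counterfactual"],
            "opportunities": ["O1: causal mechanism hypotheses",
                              "O2: dense longitudinal analyses"],
            "threats": ["T1: implementation", "T2: attribution problem",
                        "T3: fluctuating respondents", "T4: expenditure",
                        "T5: stakeholder bias"],
        },
    }

    # Step 2: strategy matrix -- an empty S x (W + O + T) grid whose cells
    # hold strengths-based strategies (S/W, S/O, S/T).
    def strategy_matrix(area_swot):
        targets = ([("S/W", w) for w in area_swot["weaknesses"]]
                   + [("S/O", o) for o in area_swot["opportunities"]]
                   + [("S/T", t) for t in area_swot["threats"]])
        return {s: {target: None for target in targets}
                for s in area_swot["strengths"]}

    matrix = strategy_matrix(tom["before-after comparison"])
    # One filled cell, following Table 4b: the strength is used to approach T2.
    matrix["S1: avoids complete dependence on ex-post data"][
        ("S/T", "T2: attribution problem")
    ] = "use causal-mechanism hypotheses to approach the attribution problem"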

SWOTs of impact evaluation methodology

General remarks and background

As already mentioned, in the present context the intended SWOT analysis concerns the methodology, applicability and goal setting of impact evaluation of QA in HEIs. In this situation, the strategy and goal of the SWOT analysis are rather clear (and not as vague as in many other cases where SWOT analysis is applied to support organisational strategies). To put it in different terms, the present SWOT analysis shall contribute to identifying the most reliable methodology and methodological elements of impact evaluation of QA in HEIs, as well as suggestions for tackling the weaknesses, opportunities and threats by taking advantage of the strengths.

It should also be noted at the outset that the following SWOT analysis is not lacking theoretical foundation and empirical basis (as is often the case) because, inter alia, it is grounded in the experience of the European impact evaluation project discussed in this special issue, including empirical data as well as reflection on the theoretical background of impact evaluation of QA in HEIs (Bejan et al. 2018; ICP 2016; Jurvelin, Kajaste and Malinen 2018; Leiber 2018; Leiber, Moutafidou and Welker 2018; Leiber, Prades, and Álvarez, 2018; Leiber, Stensaker, and Harvey 2015).

Before-after comparison

The main methodological strength of the before-after comparison approach (Leiber 2018) is that a complete dependence on ex-post available data can be avoided. In particular, the typical memorisation problems of the persons involved and other time-lag problems, which occur in ex-post approaches, can be eliminated in pure before-after comparison. The core weakness of the approach is that, in contradistinction to experiments or comparison/control group approaches, no explicit causal counterfactual is available (Leiber, Stensaker, and Harvey 2015, 295). Two prominent opportunities are that causal mechanism hypotheses and analytical models can be applied, and that dense longitudinal analyses can be carried out, for example by implementing several 'midline' studies between the baseline and the endline to generate a dense series of data that produces a dense argument of change.
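
As an illustration of such a dense longitudinal design, the following sketch computes the change of a mean indicator across baseline, midline and endline survey waves. The data and the single-indicator setup are invented for illustration; they are not the actual analysis pipeline of the project.

    # Minimal sketch of a baseline/midline/endline before-after comparison
    # (hypothetical data, not the IMPALA project's actual analysis). Mean
    # ratings of one quality indicator per survey wave; the dense series of
    # waves supports an argument of change between QA interventions.

    waves = {
        "baseline": [3.1, 2.8, 3.4, 3.0],  # ratings before the QA intervention
        "midline":  [3.3, 3.1, 3.5, 3.2],  # after the first intervention step
        "endline":  [3.7, 3.4, 3.8, 3.6],  # after the full intervention
    }

    def mean(xs):
        return sum(xs) / len(xs)

    previous = None
    for wave, ratings in waves.items():
        m = mean(ratings)
        change = "" if previous is None else f" (change vs previous wave: {m - previous:+.2f})"
        print(f"{wave}: mean rating {m:.2f}{change}")
        previous = m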

The five main threats to before-after comparison are: (i) the proper implementation of the methodology: for example, the time schedule of the baseline, midline and endline surveys must be properly organised and correlated with QA interventions so that observable intervention effects can occur at all between the various surveys; moreover, if the sequence of surveys is too dense, the information yield as well as response rates will usually decrease; (ii) the ubiquitous attribution problem, which asks which effects are caused by a certain (QA) intervention (and not by other causes) (Leiber 2018, [insert page number]f.); (iii) fluctuating respondent groups: ideally, in a longitudinal study the same respondents should be approached at the different timelines; it is therefore a threat to before-after comparison, when applied in HEIs, that typical universities' stakeholder groups fluctuate strongly over relatively short time intervals, and in practice it may sometimes be hard to guarantee that exactly the same respondents take part at the different timelines; (iv) the omnipresent question of affordable expenditure with respect to workforce, time and money: for example, systematic longitudinal survey studies are in general more expensive than ad-hoc and eclectic ex-post studies; (v) the ubiquitous problem of the possible dependence of impact evaluation on stakeholders' biases: HEI teaching staff, HEI leadership, HE politics and QA agencies can all violate impartiality when carrying out impact analyses because of their distinct interests and different positions of power (see Table 4a).

Table 4a: about here

In the present context, the methodological strength of before-after comparison can be utilised as formulated in Table 4b to take advantage of the two opportunities mentioned in Table 4a and to approach the second threat mentioned in Table 4a, while the other four threats must be tackled by other means such as proper impact evaluation design (1.), controlled choice of respondents (3.), expenditure/benefit analysis (4.) and a reflected check of impartiality (5.).

Table 4b: about here

Ex-post analysis

The main strength of ex-post analysis is that it is applicable without specific methodological preparation and design: it is simply applied ex post factum. This makes ex-post analysis indispensable in practice, although, from a purely methodological point of view, it is not the most advanced and rigorous approach. The main weakness of ex-post analysis is its restriction to ex-post available data, with the corresponding memorisation problems of the persons involved and other time-lag problems (Table 5a).

Table 5a: about here

The methodological strength of ex-post analysis can be used as formulated in Table 5b to approach the first threat mentioned in Table 5a, while the other three threats must be tackled by other means (see above). Rigorously speaking, these three "threats" – fluctuating respondent groups; expenditure with respect to workforce, time and money; dependence on stakeholders' biases – are not methodological threats specific to ex-post impact evaluation but apply to any evaluation in dynamic social organisations.

Table 5b: about here

Change assessment by participants and analysis of documents and data

Reflective analysis of (change) assessments by participants and of the analysis of documents and data in before-after and ex-post approaches to impact evaluation reveals that a main methodological strength is the usability of surveys of various types, such as standardised (online) surveys, which allow for complete acquisition of target groups, and (semi-)structured and intensive interviews. Another strength consists in the inclusion of participant observation, which can bring original views from practice into the analysis that may not be achievable by other methodologies. Finally, there is the option of document analysis.

A major, non-trivial and omnipresent threat is that the survey and interview instruments must be qualitatively adapted to the social, organisational and cognitive contexts of potential respondents. A further threat consists in ensuring the representativeness of respondent groups as well as high response rates (Table 6a).
Table 6a: about here

Again, the methodological strengths of change assessments by participants and of analysis of documents and data can be utilised as formulated in Table 6b to approach the first threat mentioned in Table 6a. The other threats, however, must be tackled by other means, such as exploration and analysis of the respondents' differing framework conditions (e.g. engagement level in quality management; impact knowledge; governance power) and pre-tests of survey instruments (2.); if necessary, estimation of respondents' representativeness and implementation of measures for motivating survey participants (e.g. keeping impact evaluation transparent and participative; setting incentives) (3.); controlled choice of respondents (4.); expenditure/benefit analysis (5.); and a reflected check of impartiality (6.).

Table 6b: about here

Counterfactual self-estimation

The big advantage of the impact evaluation methodology of counterfactual self-estimation (Mueller, Gaus, and Rech 2014) is its reference to a counterfactual statement ("What would have happened, had the cause event not appeared?"), which is not available in a before-after comparison and in other ex-post approaches. This advantage is balanced by the restriction of the approach to one's own intentional states ("What would have happened to my attitude towards (my assessment of) a certain examined feature, had the cause event not appeared?") and by the threats that memorisation problems and deficits in the self-analysis of intentional states could negatively affect the self-estimation (Table 7a).

Table 7a: about here
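
The logic of counterfactual self-estimation can be illustrated by a small sketch in which each participant reports an actual post-intervention rating and a self-estimated counterfactual rating; the mean per-person difference then serves as a rough impact estimate. Data and scale are hypothetical, simplified from the idea in Mueller, Gaus, and Rech (2014).

    # Minimal illustration of counterfactual self-estimation (after the idea
    # in Mueller, Gaus and Rech 2014; data and scale are hypothetical). Each
    # participant reports (a) their actual current rating of a feature and
    # (b) their self-estimated rating "had the QA intervention not taken
    # place". The per-person difference is the self-estimated effect.

    participants = [
        {"actual": 4.0, "counterfactual": 3.0},
        {"actual": 3.5, "counterfactual": 3.5},  # no self-estimated effect
        {"actual": 4.5, "counterfactual": 3.0},
    ]

    effects = [p["actual"] - p["counterfactual"] for p in participants]
    print(f"estimated mean impact: {sum(effects) / len(effects):+.2f} scale points")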

The methodological strength of counterfactual self-estimation can be made use of as formulated in Table 7b to approach the first threat mentioned in Table 7a, while the other threats can only be treated by other means (e.g. memory training and protocols to come to grips with memorisation problems).

Table 7b: about here

Causal social mechanisms

As far as explanatory potential and power are concerned, causal social mechanisms (Groff 2017; Little 2015; Steel 2011), together with experiments and randomised control trials (with-without comparison), are among the three stronger methods to approach, and probably solve, the attribution problem of impact evaluation. Of course, the major and less than trivial challenge then consists in identifying cause-effect mechanisms which do the attribution job. If this approach is successful, its major strength becomes manifest: the explanation of QA effects by causal mechanisms instead of law-like relations or statistical correlations (Table 8a).

Table 8a: about here
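
What "identifying cause-effect mechanisms which do the attribution job" can mean in practice is illustrated by the following schematic sketch, in the spirit of process tracing: a candidate QA mechanism is represented as a chain of links, and the attribution claim is upheld only if every link is supported by collected evidence. Mechanism and evidence labels are entirely hypothetical, not findings of the cited studies.

    # Hypothetical sketch of checking a candidate causal mechanism of QA
    # impact (process-tracing style; mechanism and evidence are invented).
    # The attribution claim is only upheld if every link is evidenced.

    mechanism = [
        ("external evaluation issues recommendation", "department adopts action plan"),
        ("department adopts action plan", "teaching staff revise course design"),
        ("teaching staff revise course design", "student ratings of the course improve"),
    ]

    # Evidence gathered per link (e.g. from documents, interviews, surveys).
    evidence = {
        ("external evaluation issues recommendation", "department adopts action plan"): True,
        ("department adopts action plan", "teaching staff revise course design"): True,
        ("teaching staff revise course design", "student ratings of the course improve"): False,
    }

    for cause, effect in mechanism:
        status = "evidenced" if evidence.get((cause, effect)) else "not evidenced"
        print(f"{cause} -> {effect}: {status}")

    supported = all(evidence.get(link, False) for link in mechanism)
    print("attribution claim upheld" if supported else "attribution claim not (yet) upheld")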

The methodological strength of the causal social mechanism approach can be utilised as formulated in Table 8b to tackle or, as a rule, solve the attribution problem mentioned in Table 8a, while the further threats can only be handled by other means.

Table 8b: about here

Impact evaluation of quality management in higher education institutions: the need for a coherent approach to quality assurance

It is already a truism that impact evaluation of QA in HEIs is indispensable, because evidence-based governance and QA are programmatic necessities in HEIs, which represent a crucial sector and innovative power in modern education societies and knowledge economies. It can be considered equally established that impact evaluation (of QA in HEIs) cannot be reduced to one puristic strand of methodology such as randomised control group comparison (which has far fewer successful applications than often insinuated), before-after comparison and ex-post analysis, or causal social mechanisms and participatory efficacy assessments. Instead, comprehensive impact evaluation will usually have to rely on a mix of methodologies which, due to their complementarity (and shared complexity), can supplement each other (see also the articles in this special issue). In other words (as is true for many, if not most, complex undertakings):

In doing impact evaluations, there is no 'gold standard' (in the sense of a single method that is best in all cases). However, depending on factors such as the scope, objectives, and design of the intervention, as well as data availability, some methods can be better than others in specific cases. (Leeuw and Vaessen 2009, xiii; see also White 2010; Norgbey 2016)

In addition to a methodological mix, mixed methods (in the scholarly sense) would generally also increase the validity and reliability of results. In particular, impact analyses require an integrated mix of quantitative (and qualitative) data from questionnaire surveys and qualitative data from structured interviews, in order to combine more generic and less exploratory information on the one side with more individual-case and more exploratory information on the other side.
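
A minimal sketch of such an integration, with invented data: quantitative survey means supply the more generic, less exploratory signal per topic, while interview excerpts attached to the same topic supply the more individual-case, exploratory information.

    # Hypothetical mixed-methods triangulation sketch (invented data): read
    # a generic quantitative indicator per topic against exploratory
    # interview material on the same topic.

    survey_means = {"feedback culture": 3.9, "workload transparency": 2.7}

    interview_excerpts = {
        "feedback culture": ["'The follow-up meetings made feedback feel taken seriously.'"],
        "workload transparency": ["'Nobody could tell us how the credit points were calculated.'"],
    }

    for topic, mean_score in survey_means.items():
        print(f"{topic}: survey mean {mean_score:.1f}")
        for quote in interview_excerpts.get(topic, []):
            print(f"  interview: {quote}")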

A further general experience and conclusion, which can be drawn from the contributions to this special issue and from the above SWOT analysis, is that there are usually no easy solutions to the more profound weaknesses and threats of the methodologies and practice of impact evaluation; i.e., these remain with us to a certain extent, and HE researchers as well as practitioners have to deal with them discursively. In particular, the above SWOT analysis provides helpful conceptual differentiation, reveals the resistive complexity of the topic and can help to identify strategies to tackle weaknesses and threats. It also shows that certain methodological and pragmatic weaknesses can be overcome (e.g. restrictions of workforce, budget and process time) while basic systematic limitations of methodologies cannot (e.g. unavailability of a causal counterfactual; dependence on ex-post available data; restriction of analysis to one's own intentional states). Similarly, certain threats to impact evaluation of QA in HEIs can be avoided or solved (e.g. proper implementation of methodologies; dependence on stakeholders' biases) while others cannot, or can only approximately be solved (e.g. the attribution problem; fluctuating respondent groups in HEIs).

Another general observation is that for impact evaluation of QA in HEIs it is challenging to obtain metric data which would be interval scaled. Due to the complexity, dynamics and distinct diversity of HEIs and HE systems, it is also hard, if not impossible in practice, to do more than pronounced (qualitative) case studies (Byrne 2013; Yin 2013), while comparability across case studies is difficult to achieve. In that sense, the challenge at present is to further apply, test and possibly improve the methodology and case study approach suggested in the above-mentioned impact evaluation project. Corresponding recommendations and general success criteria have already been given (Leiber, Stensaker, and Harvey 2015, 308) and can be supplemented by reflective use of the results of the above SWOT analysis, including general methodological considerations (Leiber 2018).

The key argument of the current article is still that impact evaluation of QA is in need of coherent design and organisation, and that a SWOT approach may be a useful tool to bridge the gap between i) the technical and methodological challenges of impact measurement and the organisational and practical challenges involved in its implementation, and ii) the design and implementation of a QA system and how we can assess the impact of such a system over time. As suggested in this article, impact assessment involves a range of methodological, organisational, managerial and political issues which need to be negotiated and decided upon between the many stakeholders involved.

It is in this negotiation process that the SWOT approach sketched out in the current article may be a relevant tool – both at national and institutional level. The relevance of a SWOT approach at national level concerns the linkages between the design characteristics and key elements of QA at national level, and the implications such design elements have for impact evaluation. To exemplify: while accreditations of institutions and educational offerings have become a standard procedure in many European countries, relatively little attention has been given to the data these accreditation procedures produce and how they can be utilised in a more longitudinal design for measuring impact over time. As valid impact measurement is so dependent on adequate data, there is a need to reflect upon how relevant data can be generated through carefully thought out national QA designs. For institutions, designing for impact measurement also implies that balancing national accountability demands with local strategic ambitions will be a key dilemma where a SWOT approach may be useful. While most actors in the sector would like to see the impact of QA measured, there is a potential price to be paid if such impact designs become too costly to administer, too bureaucratic to run, and too complex to make sense of. The mentioned European impact evaluation project and the various articles of this special issue have provided some insights into how we can come to grips with these challenges for the future.

Acknowledgement

Theodor Leiber and Bjørn Stensaker did part of the work on this article in the context of a project on impact evaluation of quality management in higher education institutions, which was co-funded by the European Commission (Grant no.: 539481-LLP-1-2013-1-DE-ERASMUS-EIGF). Lee Harvey thanks the organisers of that project for inviting him to the International Final Conference "Impact Evaluation of Quality Management in Higher Education. A Contribution to Sustainable Quality Development of the Knowledge Society", held on 16-17 June 2016 in Barcelona, Spain. This publication reflects the views only of the authors, and the European Commission cannot be held responsible for any use which may be made of the information contained therein.

Disclosure statement

No potential conflict of interest was reported by the authors.

References
Beerkens, Maarja. 2018. “Evidence-based policy and higher education quality
assurance: progress, pitfalls and promise.” European Journal of Higher
Education 8(3): [insert page numbers].
Bejan, Andrei-Stelian, Tero Janatuinen, Jouni Jurvelin, Susanne Klöpping, Heikki
Malinen, Bernhard Minke, and Radu Vacareanu. 2015. “Quality assurance and
its impact from higher education institutions’ perspectives: methodological
approaches, experiences and expectations.” Quality in Higher Education 21(3):
343-371.
Bejan, Andrei-Stelian, Radu Mircea Damian, Theodor Leiber, Iohan Neuner, Lidia
Niculita, and Radu Vacareanu. 2018. “Impact evaluation of institutional
evaluation and programme accreditation at Technical University of Civil
Engineering Bucharest (Romania).” European Journal of Higher Education
8(3): [insert page numbers].
Bell, Geoffrey G., and Linda Rochford. 2016. “Rediscovering SWOT’s integrative
nature: a new understanding of an old framework.” The International Journal of
Management Education 14(3): 310-326.
Brennan, John. 2012. "Talking about quality. The changing uses and impact of quality assurance." 11 pages. http://www.qaa.ac.uk/en/Publications/Documents/impact-of-quality-assurance.pdf (accessed 24 April 2018).
Byrne, David. 2013. “Evaluating complex social interventions in a complex world.”
Evaluation 19(3): 217-228.

Damian, Radu, Josep Grifoll, and Anke Rigbers. 2015. “On the role of impact
evaluation of quality assurance from the strategic perspective of quality
assurance agencies in the European Higher Education Area.” Quality in Higher
Education 21(3): 251-269.
Ditzel, Benjamin. 2017. “Bedingte Wirksamkeit von QM in Studium und Lehre:
Ergebnisse einer Delphi-Studie” [Limited effectiveness of quality management
in teaching and learning: results of a Delphi study]. Zeitschrift für
Hochschulentwicklung 12(3): 17-37.
Gates, Emily, and Lisa Dyson. 2017. “Implications of the changing conversation about
causality for evaluators.” American Journal of Evaluation 38(1): 29-46.
Groff, Ruth. 2017. “Causal mechanisms and the philosophy of causation.” Journal for
the Theory of Social Behaviour 47(3): 286-305.
Harvey, Lee, and Diana Green. 1993. “Defining quality.” Assessment and Evaluation in
Higher Education 18(1): 9-34.
Harvey, Lee, and James Williams. 2010a. “Editorial: fifteen years of Quality in Higher
Education. Part One.” Quality in Higher Education 16(1): 3-36.
Harvey, Lee, and James Williams. 2010b. “Editorial: fifteen years of Quality in Higher
Education. Part Two.” Quality in Higher Education 16(2): 81-113.
Helms, Marilyn M., and Judy Nixon. 2010. “Exploring SWOT analysis – where are we
now? A review of academic research from the last decade.” Journal of Strategy
and Management 3(3): 215-251.
Hill, Terry, and Roy Westbrook. 1997. “SWOT analysis: it’s time for a product recall.”
Long Range Planning 30(1): 46-52.
ICP [IMPALA Consortium Partners]. 2016. Impact Evaluation of Quality Assurance in Higher Education. A Manual. 41 p. https://www.evalag.de/fileadmin/dateien/pdf/forschung_international/impala/impala_manual_161212_v4.pdf (accessed 24 April 2018).
IMPALA. 2016. Impact Analysis of External Quality Assurance Processes of Higher
Education Institutions. Pluralistic Methodology and Application of a Formative
Transdisciplinary Impact Evaluation.
https://www.evalag.de/en/international/impact-analysis/the-project/ (accessed 24
April 2018).

Jurvelin, Jouni, Matti Kajaste, and Heikki Malinen. 2018. “Impact evaluation of EUR-
ACE programme accreditation at Jyväskylä University of Applied Sciences
(Finland).” European Journal of Higher Education 8(3): [insert page numbers].
Kajaste, Matti, Anna Prades, and Harald Scheuthle. 2015. “Impact evaluation from
quality assurance agencies’ perspectives: methodological approaches,
experiences and expectations.” Quality in Higher Education 21(3): 270-287.
Leeuw, Frans, and Jos Vaessen. 2009. Impact Evaluations and Development. NONIE
Guidance on Impact Evaluation. Washington: The Network of Networks on
Impact Evaluation (NONIE).
Leiber, Theodor. 2016. “Impact evaluation of quality management in higher education.
A contribution to sustainable quality development of the knowledge and
learning society.” Qualität in der Wissenschaft 10(1): 3-12.
Leiber, Theodor. 2017. “Computational social science and Big Data: a quick SWOT
analysis.” In Berechenbarkeit der Welt? Philosophie und Wissenschaft im
Zeitalter von Big Data [Computability of the World? Philosophy and Science in
the Age of Big Data], edited by Joerg Wernecke, Wolfgang Pietsch, and
Maximilian Ott, 287-301. Berlin: Springer.
Leiber, Theodor. 2018. “Impact evaluation of quality management in higher education:
a contribution to sustainable quality development in knowledge societies.”
European Journal of Higher Education 8(3): [insert page numbers].
Leiber, Theodor, Nana Moutafidou, and Bertram Welker. 2018. “Impact evaluation of
Programme Review at University of Stuttgart (Germany).” European Journal of
Higher Education 8(3): [insert page numbers].
Leiber, Theodor, Anna Prades, and Mari Paz Álvarez. 2018. “Impact evaluation of
programme accreditation at Autonomous University of Barcelona (Spain).”
European Journal of Higher Education 8(3): [insert page numbers].
Leiber, Theodor, Bjørn Stensaker, and Lee Harvey. 2015. “Impact evaluation of quality
assurance in higher education: methodology and causal designs.” Quality in
Higher Education 21(3): 288-311.
Little, Daniel. 2015. “Guiding and modeling quality improvement in higher education
institutions.” Quality in Higher Education 21(3): 312-327.
Liu, Shuiyun, Minda Tan, and Zhaorui Meng. 2015. “Impact of quality assurance on
higher education institutions: a literature review.” Higher Education Evaluation
and Development 9(2): 17-34.

Mueller, Christoph E., Hansjoerg Gaus, and Joerg Rech. 2014. "The counterfactual self-estimation of program participants: impact assessment without control groups or pretests." American Journal of Evaluation 35(1): 8-25.
Newton, Jethro. 2013. "Is quality assurance leading to enhancement?" In How Does Quality Assurance Make a Difference?, edited by Fiona Crozier et al., 8-14. Brussels: European University Association.
Norgbey, Enyonam B. 2016. “Debate on the appropriate methods for conducting impact
evaluation of programs within the development context.” Journal of
Multidisciplinary Evaluation 12(27): 58-66.
Panagiotou, George. 2003. “Bringing SWOT into focus.” Business Strategy Review
14(2): 8-10.
Panagiotou, George, and Riëtte van Wijnen. 2005. “The ‘telescopic observations’
framework: an attainable strategic tool.” Marketing Intelligence and Planning
23(2): 155-171.
Piercy, Nigel, and William Giles. 1989. “Making SWOT analysis work.” Marketing
Intelligence and Planning 7(5/6): 5-7.
Rosa, Maria J., Claudia Sarrico, and Alberto Amaral. 2012. “Academics’ perceptions on
the purposes of quality assessment.” Quality in Higher Education 18(3): 349-
366.
Seyfried, Markus, and Philipp Pohlenz. 2018. “Assessing quality assurance in higher
education: quality managers’ perceptions of effectiveness.” European Journal of
Higher Education 8(3): [insert page numbers].
Shah, Mahsood. 2012. "Ten years of external quality audit in Australia: evaluating its effectiveness and success." Assessment and Evaluation in Higher Education 37(6): 761-772.
Steel, Daniel. 2011. "Causality, causal models, and social mechanisms." In The SAGE Handbook of the Philosophy of Social Sciences, edited by Ian C. Jarvie and Jesus Zamora-Bonilla, 288-304. Thousand Oaks, CA: Sage.
Stensaker, Bjørn. 2008. “Outcomes of quality assurance: a discussion of knowledge,
methodology and validity.” Quality in Higher Education 14(1): 3-13.
Stensaker, Bjørn, Liv Langfeldt, Lee Harvey, Jeroen Huisman, and Don F.
Westerheijden. 2011. “An in-depth study on the impact of external quality
assurance.” Assessment and Evaluation in Higher Education 36(4): 465-478.

Suchanek, Justine, Manuel Pietzonka, Rainer Kuenzel, and Torsten Futterer. 2012. “The
impact of accreditation on the reform of study programmes in Germany.”
Higher Education Management and Policy 24(1): 1-24.
White, Howard. 2010. "A contribution to current debates in impact evaluation." Evaluation 16(2): 153-164.
Yin, Robert K. 2013. "Validity and generalization in future case study evaluations." Evaluation 19(3): 321-332.

Figures and Tables

Figure. SWOT analysis, schematic representation

Table 1. Telescopic Observations Matrix (TOM) (Panagiotou and van Wijnen 2005, 161). Rows: Strengths, Weaknesses, Opportunities, Threats. Columns: the observation areas spelled out by the acronym TELESCOPIC OBSERVATIONS (e.g. Technological Advancements, Economic Considerations, Legal and Regulatory Requirements, etc.).

Table 2. TOM for SWOT analysis of impact evaluation methodology of QA in HEIs. Rows: Strengths, Weaknesses, Opportunities, Threats. Columns (TOM areas): Before-after comparison (participants' assessments; analysis of documents and data; causal social mechanisms) and Ex-post analysis (participants' assessments; analysis of documents and data; counterfactual self-estimation; causal social mechanisms).

Table 3. TOM area-specific strategy matrix for strengths, weaknesses, opportunities and threats identified by TOM (after Panagiotou and van Wijnen 2005, 164). Rows: Strengths (S) 1., 2., 3., ... Columns: Weaknesses (W) 1., 2., 3., ...; Opportunities (O) 1., 2., 3., ...; Threats (T) 1., 2., 3., ... Cells: strengths-based strategies to overcome weaknesses (S/W), strengths-based strategies to take advantage of opportunities (S/O), and strengths-based strategies to avoid threats (S/T).
Table 4a. Methodological SWOTs of before-after comparison impact evaluation
Strengths: 1. Avoidance of complete dependence on ex-post available data.
Weaknesses: 1. No explicit causal counterfactual available (cannot be overcome).
Opportunities: 1. Devising the causal network: causal social mechanisms; analytical models. 2. Feasibility of dense longitudinal analyses.
Threats: 1. Proper implementation of methodology. 2. Attribution problem. 3. Fluctuating respondent groups. 4. Expenditure (workforce, time, money). 5. Dependence of impact evaluation on stakeholders' biases (partiality).

Table 4b. Strategy matrix corresponding to Table 4a
S1/O1: Use data of simultaneous before-after surveys to develop hypotheses about causal network mechanisms.
S1/O2: Carry out dense series of surveys to establish dense longitudinal analyses.
S1/T2: Use hypotheses about causal network mechanisms to approach the attribution problem.
All other cells (S1/W1; S1/T1, T3, T4, T5): –

Table 5a. Methodological SWOTs of ex-post analysis impact evaluation
Strengths: 1. Applicable without specific methodological preparation.
Weaknesses: 1. No explicit causal counterfactual available (cannot be overcome). 2. Dependence on ex-post available data (cannot be overcome).
Opportunities: –
Threats: 1. Attribution problem. 2. Fluctuating respondent groups. 3. Expenditure (workforce, time, money). 4. Dependence of impact evaluation on stakeholders' biases (partiality).

Table 5b. Strategy matrix corresponding to Table 5a
S1/T1: Use data (correlation and causality) of QA procedures (e.g. evaluation recommendations, follow-up activities, and re-evaluations) to approach the attribution problem.
All other cells: –

Table 6a. Methodological SWOTs in impact evaluation referring to (change) assessments by participants and analysis of documents and data (before-after & ex-post)
Strengths: 1. Surveys of various types. 2. Participant observation (e.g. in status seminars and workshops). 3. Analysis of documents and data.
Weaknesses: 1. No explicit causal counterfactual available (cannot be overcome). 2. Ex-post: dependence on ex-post available data (cannot be overcome).
Opportunities: –
Threats: 1. Attribution problem. 2. Adaptation of survey instruments to respondents' context. 3. Ensuring representativeness of respondent groups and high response rates. 4. Fluctuating respondent groups. 5. Expenditure (workforce, time, money). 6. Dependence of impact evaluation on stakeholders' biases (partiality).

Table 6b. Strategy matrix corresponding to Table 6a
S1-S3/T1: Use data (correlation and causality) – from surveys, participant observation, document analysis – of QA procedures (e.g. evaluation recommendations, follow-up activities, and re-evaluations; participatory efficacy assessments) to approach the attribution problem.
All other cells: –

Table 7a. Methodological SWOTs in impact evaluation referring to counterfactual self-estimation (before-after & ex-post)
Strengths: 1. Counterfactual available.
Weaknesses: 1. Restriction to own intentional states (cannot be overcome).
Opportunities: –
Threats: 1. Attribution problem. 2. Memorisation problems. 3. Deficits in self-analysis of intentional states. 4. Fluctuating respondent groups. 5. Expenditure (workforce, time, money). 6. Dependence of impact evaluation on stakeholders' biases (partiality).

Table 7b. Strategy matrix corresponding to Table 7a
S1/T1: Use counterfactual of own intentional states to approach the attribution problem.
All other cells: –

Table 8a. Methodological SWOTs in impact evaluation referring to causal social mechanisms (before-after & ex-post)
Strengths: 1. Explanation by mechanisms.
Weaknesses: –
Opportunities: –
Threats: 1. Attribution problem. 2. Fluctuating respondent groups. 3. Expenditure (workforce, time, money). 4. Dependence of impact evaluation on stakeholders' biases (partiality).

Table 8b. Strategy matrix corresponding to Table 8a
S1/T1: Use identification of cause-effect mechanisms to approach/solve the attribution problem.
All other cells: –
