Theodor Leiber
Theodor Leiber is Scientific Advisor and higher education researcher with Evaluation Agency
Baden-Wuerttemberg, Mannheim (Germany) and Associate Professor of Philosophy at
University of Augsburg (Germany). He earned doctoral degrees in Theoretical Physics and
Philosophy. His research focuses on evaluation, impact studies, performance measurement and
governance, quality management and organisational development in higher education as well as
philosophy of science and ethics.
Bjørn Stensaker
Department of Education, University of Oslo, Sem Sælands vei 7, Helga Engs hus, 0317 Oslo, Norway, bjorn.stensaker@iped.uio.no
Bjørn Stensaker is Professor at the Department of Education and at the Centre for Learning and
Education at University of Oslo. He is also a Research Professor at the Nordic Institute for Studies in Innovation, Research and Education (NIFU). His professional expertise is in institutional
theory, qualitative methods, evaluation theory and evaluation research, organisational theory
and analysis. He is the President of The European Higher Education Society (EAIR).
Lee Harvey
Lee Harvey is Professor Emeritus and former Director of the Centre for Research and
Evaluation at Sheffield Hallam University (UK). He has been involved in researching higher
education policy since the late 1980s and is an acknowledged international expert, inter alia, on
issues of quality, quality assurance and its impact, employability, and student feedback. Lee is
the editor of the international journal Quality in Higher Education.
Bridging theory and practice of impact evaluation of quality
management in higher education institutions: a SWOT analysis
Abstract
Introduction
The last two decades have seen an ever-increasing number and variety of quality assurance (QA) models in use, and a continuous discourse about higher education (HE) quality, including “progress, pitfalls and promise” (Beerkens 2018) for the future of quality as well as QA (cf. Brennan 2012; Harvey and Green 1993; Harvey and Williams 2010a; 2010b; Newton 2013; Rosa, Sarrico, and Amaral 2012; Stensaker 2008). Because of this, and as expected, impact evaluation of QA in higher education institutions (HEIs) has played (and still plays) an increasingly important, though critically reflected, role (cf. Harvey and Williams 2010b, 102; Liu, Tan, and Meng 2015; Newton 2013, 9, 11, 13; Shah 2012, 761, 770; Stensaker et al. 2011; Suchanek et al. 2012). A recent European project attempted to take a step forward by overcoming the restriction of previous analyses to ex-post scenarios and to only selected QA stakeholders (mostly peer experts) (IMPALA 2016). This approach focused on longitudinal case studies and also included relevant stakeholder groups other than peer experts. The methodologies and empirical results related to the four case studies of this project have been presented and discussed in several publications (see, e.g., Bejan et al. 2015; 2018; Damian, Grifoll, and Rigbers 2015; ICP 2016; Jurvelin, Kajaste, and Malinen 2018; Kajaste, Prades, and Scheuthle 2015; Leiber 2016; 2018; Leiber, Moutafidou, and Welker 2018; Leiber, Prades, and Álvarez 2018; Leiber, Stensaker, and Harvey 2015).
Against this background, the present article argues that impact evaluation of QA can be better facilitated, and provides a roadmap as to how such facilitation can be achieved, by means of a SWOT (strengths, weaknesses, opportunities and threats) analysis (Piercy and Giles 1989; Helms and Nixon 2010). To this end, a matrix of methodological approaches is prepared for SWOT analysis of impact evaluation methodology, and the resulting strategy matrix is presented and discussed. The article closes with considerations about present and future challenges of impact evaluation of QA in HEIs.
A SWOT analysis identifies internal and external factors that can affect the investigated process or structure, which makes it a widely applicable tool for organisations, in particular HEIs. The field of QA can be argued to be very relevant for the application of a SWOT analysis due to the many actors involved, both in the higher education sector in general and inside HEIs in particular, making the design and implementation of QA a very complex issue. A SWOT analysis may help to reduce this complexity and provide a more simplified picture concerning the actors and dilemmas involved. While the strengths and weaknesses are, by definition, primarily internal factors, the threats and opportunities are primarily external factors, which exist independently of the analysed issue (Figure). The main goal of a SWOT analysis is to establish a systematic assessment of the issue which would support decision-making related to strategic dimensions of the issue. In the present case this means identifying strategies that use strengths to overcome weaknesses, seize opportunities and counteract threats. Thus, a SWOT analysis is often part of strategic planning by informing strategic decisions, but it does not necessarily or automatically deliver a strategy itself.

Figure: about here
When carrying out a SWOT analysis it should be clarified what its benefits and what its limitations are. Among the former are that conducting a SWOT analysis has little cost, and that it concentrates on the most important factors affecting the investigated issue. However, a SWOT analysis cannot replace more in-depth research and analysis, and its execution becomes complicated if factors are uncertain or two-sided with respect to the four factor types of strengths, weaknesses, opportunities and threats; in particular, the boundaries between these classes are often fluid, ambiguous or fuzzy. Further limitations are that a SWOT analysis may not prioritise issues; may not be empirically validated; may use unclear and ambiguous words and phrases; may not provide solutions or offer alternative decisions; may generate too many ideas but not help to choose the best one; may produce a lot of information, not all of it useful; and may lack links to an implementation phase (Bell and Rochford 2016; Helms and Nixon 2010, 234ff.; Hill and Westbrook 1997).

To mitigate these limitations, several refinements of SWOT for generating valuable outcomes and better supporting the planning process have been devised. One of these is the Telescopic Observations (TO) framework, which proceeds in two steps (Panagiotou 2003; Panagiotou and van Wijnen 2005).
In the first step, the Telescopic Observations Matrix (TOM) is established, in which the strengths, weaknesses, opportunities and threats are gathered and ordered according to the relevant areas of the SWOT analysis issue (Table 1). In the case of a business organisation, such areas comprise, for example, Environmental Issues (E) and others following the letters of the ‘telescopic observations’ acronym (Panagiotou and van Wijnen 2005, 162-163). The success of the whole approach crucially depends on the identification of the relevant areas of the SWOT analysis issue, which set up the TOM columns.
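To make this construction operational, a TOM can be represented as a simple mapping from analysis areas (columns) to the four SWOT categories (rows). The following minimal Python sketch is illustrative only; the area names and items are placeholder examples, not entries from the tables below:

# A TOM maps each analysis area (column) to its four SWOT lists (rows).
# Area names and items are illustrative placeholders only.
tom = {
    "before-after: participants' assessments": {
        "strengths": ["surveys of various types"],
        "weaknesses": ["no explicit causal counterfactual"],
        "opportunities": ["dense longitudinal analyses"],
        "threats": ["attribution problem"],
    },
    "ex-post: analysis of documents and data": {
        "strengths": ["applicable without specific preparation"],
        "weaknesses": ["dependence on ex-post available data"],
        "opportunities": [],
        "threats": ["attribution problem", "stakeholders' biases"],
    },
}

for area, swot in tom.items():
    print(area)
    for category, items in swot.items():
        print("  " + category + ": " + (", ".join(items) or "-"))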
Based on an earlier study (Leiber, Stensaker, and Harvey 2015, 296ff.), the TOM areas for the present case of SWOT analysis of impact evaluation methodology are chosen to combine the approaches of before-after comparison and ex-post analysis with the methods of assessment by participants, analysis of documents and data, counterfactual self-estimation and causal social mechanisms (Table 2). It is implicit in Table 2 that, due to the high diversity and dynamical nature of HEIs, experimental and quasi-experimental control group approaches are not regarded as feasible for the present purpose. It should also be mentioned that the list of TOM areas in Table 2 is not necessarily complete. However, the TOM area “Analysis of documents and data” may cover virtually any possible information source (such as, for example, more narrative approaches or persuasive policy rhetoric; see, e.g., Beerkens 2018), because any efficacy analysis, even in its softest form, cannot avoid referring to some data and documents. Moreover, the TOM
area “Participants’ assessments” may, for example, also comprise approaches such as effectiveness understood as a matter of social construction and ascription by the actors (Ditzel 2017); or conceptualisations of ‘perceived effectiveness’ (Seyfried and Pohlenz 2018; Rosa, Sarrico, and Amaral 2012); or ‘participatory’ and ‘case-based approaches’ (Gates and Dyson 2017, 39) as they were applied in the already mentioned European project on impact evaluation of QA in HEIs, whose empirical results are presented in this special issue (Bejan et al. 2018; Jurvelin, Kajaste, and Malinen 2018; Leiber, Moutafidou, and Welker 2018; Leiber, Prades, and Álvarez 2018).
In a second step, once the TOM has been established, the identified strengths, weaknesses, opportunities and threats of each TOM area can be filled into a TOM area-specific strategy matrix, which helps to identify strategies that use strengths to overcome weaknesses, exploit opportunities and avoid threats (Table 3). It is obvious that the SWOT items to be filled into Table 3 must be subjected to careful consideration and reflection based on the available evidence. The SWOT items identified in this way are then transferred to the strategy matrix, where the ranked strengths are related to the ranked weaknesses, opportunities and threats.
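As a hedged illustration of this second step: once a TOM area’s SWOT lists exist, the empty strategy matrix can be generated mechanically by pairing each strength with each weakness, opportunity and threat; formulating the strategy in each cell remains an analytic task. The function and item names below are this sketch’s own, not the article’s:

from itertools import product

def empty_strategy_matrix(swot):
    # Pair each strength with each weakness, opportunity and threat;
    # each cell value (the strategy) must be filled in by analytic judgement.
    cells = {}
    for category in ("weaknesses", "opportunities", "threats"):
        for strength, item in product(swot["strengths"], swot[category]):
            cells[(strength, category, item)] = None
    return cells

swot = {  # illustrative placeholder items
    "strengths": ["dense longitudinal surveys"],
    "weaknesses": ["no explicit counterfactual"],
    "opportunities": ["causal mechanism hypotheses"],
    "threats": ["attribution problem"],
}
for cell in empty_strategy_matrix(swot):
    print(cell)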
The remainder of the article works out the TOMs for the different impact evaluation approaches and the corresponding strategy matrices.
As already mentioned, in the present context the intended SWOT analysis is about the methodology of impact evaluation of QA in HEIs. In this situation, the strategy and goal of a SWOT analysis are rather clear (and not as vague as in many applications to business strategies). To put it in different terms, the present SWOT analysis shall contribute to improving the methodology and practice of impact evaluation of QA. It should also be noted at the outset that the following SWOT analysis is not lacking theoretical foundation and empirical basis (as is often the case) because, inter alia, it is grounded in the experience of the European impact evaluation project discussed in this special issue, including empirical data as well as reflection of the applied methodologies (ICP 2016; Jurvelin, Kajaste and Malinen 2018; Leiber 2018; Leiber, Moutafidou and Welker 2018; Leiber, Prades, and Álvarez, 2018; Leiber, Stensaker, and Harvey 2015).
Before-after comparison
The main strength of before-after comparison is that complete dependence on ex-post available data is avoided, while its main weakness is that no explicit causal counterfactual is available (Leiber, Stensaker, and Harvey 2015, 295). Two prominent opportunities are that causal mechanism hypotheses and analytical models can be applied, and that dense longitudinal analyses can be carried out, for example by implementing several ‘midline’ studies in between the baseline and the endline to generate a dense series of data that allows intervention effects to be traced over time.
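To illustrate the ‘dense series’ idea with a minimal sketch (the scores are invented for illustration and are not project data): intervention effects appear as changes between consecutive survey waves, and the overall before-after difference is the baseline-to-endline change.

# Mean agreement scores for one assessment item across survey waves.
# Numbers are invented for illustration; they are not IMPALA data.
waves = {"baseline": 3.1, "midline 1": 3.3, "midline 2": 3.6, "endline": 3.8}

labels = list(waves)
for earlier, later in zip(labels, labels[1:]):
    print(earlier, "->", later, ": change =",
          round(waves[later] - waves[earlier], 2))

print("overall before-after difference:",
      round(waves["endline"] - waves["baseline"], 2))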
The five main threats of before-after comparison are: (i) the proper implementation of the methodology: for example, the time schedule of the baseline, midline and endline surveys must be properly organised and correlated with QA interventions so that observable intervention effects can occur at all in between the various surveys; a related threat is that if the sequence of surveys is too dense, information yield as well as response rates will usually decrease; (ii) the ubiquitous attribution problem, which asks which effects are caused by a certain (QA) intervention (and not by other causes) (Leiber 2018, [insert page number]f.); (iii) fluctuating respondent groups: ideally, in a longitudinal study the same respondents should be approached in each survey wave; however, it is characteristic of HEIs that typical stakeholder groups fluctuate considerably over relatively short time intervals, and in practice it may sometimes be hard to guarantee that exactly the same respondents take part at the different survey times; (iv) the omnipresent question of affordable expenditure with respect to workforce, time and money: for example, in general, systematic longitudinal survey studies are more expensive than ad-hoc and eclectic ex-post studies; (v) the ubiquitous problem of possible dependence of impact evaluation on stakeholders’ biases: HEI teaching staff, HEI leadership, HE politics and QA agencies can all violate impartiality when carrying out impact analyses because of their distinct interests and different positions of power (see Table 4a).
The methodological strength and opportunities of before-after comparison can be utilised as formulated in Table 4b to exploit the two opportunities mentioned in Table 4a and to approach the second threat mentioned in Table 4a, while the other four threats must be tackled by other means such as careful impact evaluation design and implementation.
Ex-post analysis
The main strength of ex-post analysis is that it requires no specific methodological preparation and design: it is just applied ex post factum. This makes ex-post analysis indispensable in practice, although, from a purely methodological point of view, it is not the most advanced and rigorous approach. The main weaknesses of ex-post analysis are its relegation to ex-post available data and its lack of an explicit causal counterfactual (Table 5a).
The methodological strength of ex-post analysis can be utilised as formulated in Table 5b to approach the first threat mentioned in Table 5a, while the other three threats must be tackled by other means (see above). Rigorously speaking, these three “threats” – fluctuating respondent groups; expenditure with respect to workforce, time and money; and dependence on stakeholders’ biases – are not specific to ex-post analysis impact evaluation but rather apply to any evaluation in dynamic social organisations.
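As a rough operational sketch of the strategy in Table 5b (records and field names are invented for illustration): ex-post attribution can be approached by tracing, in QA-procedure documents, which evaluation recommendations were followed up and confirmed in re-evaluation.

# Invented QA-procedure records: recommendation -> follow-up -> re-evaluation.
records = [
    {"recommendation": "revise module handbook", "followed_up": True,
     "confirmed_in_reevaluation": True},
    {"recommendation": "introduce student panel", "followed_up": True,
     "confirmed_in_reevaluation": False},
    {"recommendation": "reduce examination density", "followed_up": False,
     "confirmed_in_reevaluation": False},
]

followed = sum(r["followed_up"] for r in records)
confirmed = sum(r["confirmed_in_reevaluation"] for r in records)
print(followed, "of", len(records), "recommendations followed up;",
      confirmed, "confirmed in re-evaluation")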
Participants’ assessments and analysis of documents and data

Considering the assessments by participants and the analysis of documents and data in before-after and ex-post approaches of impact evaluation reveals that a main strength lies in standardised (online) surveys, which allow for complete acquisition of target groups, and in participant observation, which can incorporate original views from practice that may not be accessible otherwise. A major, non-trivial and omnipresent threat is that the survey and interview instruments must be adapted to the respondents’ contexts (Table 6a).

Table 6a: about here
The methodological strengths of participants’ assessments and of the analysis of documents and data can be utilised as formulated in Table 6b to approach the first threat mentioned in Table 6a. The other threats, however, must be tackled by other means such as exploration and analysis of the various respondents’ differing framework conditions (e.g., interests, governance power, etc.), pre-tests of survey instruments (2.) and, if necessary, estimation of the representativeness of respondent groups and response rates (3.).
Counterfactual self-estimation
The main strength of counterfactual self-estimation (Mueller, Gaus, and Rech 2014) is its reference to a counterfactual statement (“What would have happened, had the cause event not appeared?”), which is not available in the other approaches. This strength is balanced by the restriction of the approach to one’s own intentional states (“What would have happened to my attitude towards (my assessment of) a certain examined feature, had the cause event not appeared?”) and by the threats that memorisation problems and deficits in the self-analysis of intentional states may occur (Table 7a).

The methodological strength of counterfactual self-estimation can be utilised as formulated in Table 7b to approach the first threat mentioned in Table 7a, while the other threats can only be treated by other means (e.g. memory training and protocols to support recollection).
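In operational terms (a minimal sketch with invented ratings, not data from Mueller, Gaus, and Rech 2014): each respondent’s self-estimated effect is the difference between the actual post-intervention rating and the counterfactual self-estimate.

# Each respondent gives an actual post-intervention rating and a
# counterfactual self-estimate ("rating had the intervention not occurred").
# Numbers are invented for illustration.
respondents = [
    {"actual": 4.0, "counterfactual": 3.0},
    {"actual": 3.5, "counterfactual": 3.5},
    {"actual": 4.5, "counterfactual": 3.5},
]

effects = [r["actual"] - r["counterfactual"] for r in respondents]
print("individual effects:", effects)
print("mean self-estimated effect:", round(sum(effects) / len(effects), 2))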
Causal social mechanisms

As far as explanatory potential and power are concerned, causal social mechanisms (Groff 2017; Little 2015; Steel 2011), together with experiment and randomised control trials (with-without comparison), are among the strongest methods to approach, and probably solve, the attribution problem of impact evaluation. Of course, the major and less than trivial challenge then consists in identifying cause-effect mechanisms which do the attribution job. If this approach is successful, its major strength will be explanation by the identified mechanisms (Table 8a).
The methodological strength of the causal social mechanism approach can be utilised as
formulated in Table 8b to tackle or, as a rule, solve the attribution problem mentioned in
Table 8a, while the further threats can only be handled by other means.
Conclusions

It is well established that HEIs represent a crucial sector and innovative power in modern education societies and knowledge economies. It can be considered equally established that impact evaluation of QA in HEIs cannot rely on a single methodology – be it randomised control group comparison (which has far fewer successful applications than often insinuated), before-after comparison, ex-post analysis, or causal social mechanisms. Rather, impact evaluation will usually have to rely on a mix of methodologies which, due to their complementarity (and shared complexity), can supplement each other (see also the articles in this special issue). In other words (as is true for many, if not all, areas of evaluation):
In doing impact evaluations, there is no ‘gold standard’ (in the sense of a single
method that is best in all cases). However, depending on factors such as the scope,
objectives, and design of the intervention, as well as data availability, some
methods can be better than others in specific cases. (Leeuw and Vaessen 2009, xiii;
see also White 2010; Norgbey 2016)
In addition to a methodological mix, mixed methods (in the scholarly sense) would generally also increase the validity and reliability of results. In particular, for impact evaluation of QA in HEIs, a combination of quantitative data from standardised surveys and qualitative data from structured interviews is required to combine more generic and less exploratory information on the one side with more individual-case-related and exploratory information on the other.
A further general experience and conclusion, which can be drawn from the contributions to this special issue and from the above SWOT analysis, is that there are persistent limitations of the methodologies and practice of impact evaluation, i.e., these are unavoidably with us to a certain extent, and HE researchers as well as practitioners have to deal with them pragmatically. The above SWOT analysis, with its TOM-area differentiation, reveals the resistive complexity of the topic and can help to identify strategies to tackle weaknesses and threats. It also shows that certain methodological and pragmatic weaknesses can be overcome (e.g. restrictions of workforce, budget and time), while others cannot; for example, it remains challenging to achieve metric data which would be interval scaled. Due to the
complexity, dynamics, and distinct diversity of HEIs and HE systems, it is also hard, if
not impossible in practice, to do more than pronounced (qualitative) case studies (Byrne
2013; Yin 2013), while comparability across case studies is difficult to achieve. In that sense, at present the challenge is that the methodology and case study approach need further development. Recommendations to this end have already been given (Leiber, Stensaker, and Harvey 2015, 308) and can be supplemented by reflective use of the results of the above SWOT analysis, including the strategy matrices.
The key argument in the current article is still that impact evaluation of QA is in need of coherent design and organisation, and that a SWOT approach may be a useful tool to bridge the gap between (i) the technical and methodological challenges of impact measurement together with the organisational and practical challenges involved in its implementation, and (ii) the design and implementation of a QA system and the question of how we can assess the impact of such a system over time. As suggested in this article, impact evaluation raises a range of issues which need to be negotiated and decided upon between the many stakeholders involved.
It is in this negotiation process that the SWOT approach sketched out in the
current article may be a relevant tool – both at national level and institutional level. The
relevance of a SWOT approach at national level addresses the linkages between the
design characteristics and key elements of QA at national level, and the implications
such design elements have for impact evaluation. To exemplify, while accreditations of study programmes and institutions are widely implemented across European countries, relatively little attention has been given to the data these accreditation procedures produce and how they can be utilized in a more longitudinal design for impact evaluation. Instead of relying only on existing data, there is a need to reflect upon how relevant data can be generated through carefully thought-out national QA designs. For institutions, designing for impact
measurement also implies that balancing national accountability demands with local
strategic ambitions will be a key dilemma where a SWOT approach may be useful.
While most actors in the sector would like to see the impact of QA measured, there is
a potential price to be paid if such impact designs become too costly to administer, too
bureaucratic to run, and too complex to make sense of. The mentioned European impact
evaluation project and the various articles of this special issue have provided some
insights into how we can come to grips with these challenges for the future.
Acknowledgement
Theodor Leiber and Bjørn Stensaker did part of the work on this article in the context of a
project on impact evaluation of quality management in higher education institutions, which was
co-funded by the European Commission (Grant no.: 539481-LLP-1-2013-1-DE-ERASMUS-
EIGF). Lee Harvey thanks the organisers of that project for inviting him to the International
Final Conference “Impact Evaluation of Quality Management in Higher Education. A
Contribution to Sustainable Quality Development of the Knowledge Society” which was held
on 16-17 June 2016, in Barcelona, Spain. This publication reflects the views only of the authors,
and the European Commission cannot be held responsible for any use which may be made of
the information contained therein.
Disclosure statement
References
Beerkens, Maarja. 2018. “Evidence-based policy and higher education quality
assurance: progress, pitfalls and promise.” European Journal of Higher
Education 8(3): [insert page numbers].
Bejan, Andrei-Stelian, Tero Janatuinen, Jouni Jurvelin, Susanne Klöpping, Heikki
Malinen, Bernhard Minke, and Radu Vacareanu. 2015. “Quality assurance and
its impact from higher education institutions’ perspectives: methodological
approaches, experiences and expectations.” Quality in Higher Education 21(3):
343-371.
Bejan, Andrei-Stelian, Radu Mircea Damian, Theodor Leiber, Iohan Neuner, Lidia
Niculita, and Radu Vacareanu. 2018. “Impact evaluation of institutional
evaluation and programme accreditation at Technical University of Civil
Engineering Bucharest (Romania).” European Journal of Higher Education
8(3): [insert page numbers].
Bell, Geoffrey G., and Linda Rochford. 2016. “Rediscovering SWOT’s integrative
nature: a new understanding of an old framework.” The International Journal of
Management Education 14(3): 310-326.
Brennan, John. 2012. “Talking about quality. The changing uses and impact of quality assurance.” 11 pages. http://www.qaa.ac.uk/en/Publications/Documents/impact-of-quality-assurance.pdf (accessed 24 April 2018).
Byrne, David. 2013. “Evaluating complex social interventions in a complex world.”
Evaluation 19(3): 217-228.
Damian, Radu, Josep Grifoll, and Anke Rigbers. 2015. “On the role of impact
evaluation of quality assurance from the strategic perspective of quality
assurance agencies in the European Higher Education Area.” Quality in Higher
Education 21(3): 251-269.
Ditzel, Benjamin. 2017. “Bedingte Wirksamkeit von QM in Studium und Lehre:
Ergebnisse einer Delphi-Studie” [Limited effectiveness of quality management
in teaching and learning: results of a Delphi study]. Zeitschrift für
Hochschulentwicklung 12(3): 17-37.
Gates, Emily, and Lisa Dyson. 2017. “Implications of the changing conversation about
causality for evaluators.” American Journal of Evaluation 38(1): 29-46.
Groff, Ruth. 2017. “Causal mechanisms and the philosophy of causation.” Journal for
the Theory of Social Behaviour 47(3): 286-305.
Harvey, Lee, and Diana Green. 1993. “Defining quality.” Assessment and Evaluation in
Higher Education 18(1): 9-34.
Harvey, Lee, and James Williams. 2010a. “Editorial: fifteen years of Quality in Higher
Education. Part One.” Quality in Higher Education 16(1): 3-36.
Harvey, Lee, and James Williams. 2010b. “Editorial: fifteen years of Quality in Higher
Education. Part Two.” Quality in Higher Education 16(2): 81-113.
Helms, Marilyn M., and Judy Nixon. 2010. “Exploring SWOT analysis – where are we
now? A review of academic research from the last decade.” Journal of Strategy
and Management 3(3): 215-251.
Hill, Terry, and Roy Westbrook. 1997. “SWOT analysis: it’s time for a product recall.”
Long Range Planning 30(1): 46-52.
ICP [IMPALA Consortium Partners]. 2016. Impact Evaluation of Quality Assurance in Higher Education. A Manual. 41 p. https://www.evalag.de/fileadmin/dateien/pdf/forschung_international/impala/impala_manual_161212_v4.pdf (accessed 24 April 2018).
IMPALA. 2016. Impact Analysis of External Quality Assurance Processes of Higher
Education Institutions. Pluralistic Methodology and Application of a Formative
Transdisciplinary Impact Evaluation.
https://www.evalag.de/en/international/impact-analysis/the-project/ (accessed 24
April 2018).
Jurvelin, Jouni, Matti Kajaste, and Heikki Malinen. 2018. “Impact evaluation of EUR-
ACE programme accreditation at Jyväskylä University of Applied Sciences
(Finland).” European Journal of Higher Education 8(3): [insert page numbers].
Kajaste, Matti, Anna Prades, and Harald Scheuthle. 2015. “Impact evaluation from
quality assurance agencies’ perspectives: methodological approaches,
experiences and expectations.” Quality in Higher Education 21(3): 270-287.
Leeuw, Frans, and Jos Vaessen. 2009. Impact Evaluations and Development. NONIE
Guidance on Impact Evaluation. Washington: The Network of Networks on
Impact Evaluation (NONIE).
Leiber, Theodor. 2016. “Impact evaluation of quality management in higher education.
A contribution to sustainable quality development of the knowledge and
learning society.” Qualität in der Wissenschaft 10(1): 3-12.
Leiber, Theodor. 2017. “Computational social science and Big Data: a quick SWOT
analysis.” In Berechenbarkeit der Welt? Philosophie und Wissenschaft im
Zeitalter von Big Data [Computability of the World? Philosophy and Science in
the Age of Big Data], edited by Joerg Wernecke, Wolfgang Pietsch, and
Maximilian Ott, 287-301. Berlin: Springer.
Leiber, Theodor. 2018. “Impact evaluation of quality management in higher education:
a contribution to sustainable quality development in knowledge societies.”
European Journal of Higher Education 8(3): [insert page numbers].
Leiber, Theodor, Nana Moutafidou, and Bertram Welker. 2018. “Impact evaluation of
Programme Review at University of Stuttgart (Germany).” European Journal of
Higher Education 8(3): [insert page numbers].
Leiber, Theodor, Anna Prades, and Mari Paz Álvarez. 2018. “Impact evaluation of
programme accreditation at Autonomous University of Barcelona (Spain).”
European Journal of Higher Education 8(3): [insert page numbers].
Leiber, Theodor, Bjørn Stensaker, and Lee Harvey. 2015. “Impact evaluation of quality
assurance in higher education: methodology and causal designs.” Quality in
Higher Education 21(3): 288-311.
Little, Daniel. 2015. “Guiding and modeling quality improvement in higher education
institutions.” Quality in Higher Education 21(3): 312-327.
Liu, Shuiyun, Minda Tan, and Zhaorui Meng. 2015. “Impact of quality assurance on
higher education institutions: a literature review.” Higher Education Evaluation
and Development 9(2): 17-34.
Mueller, Christoph E., Hansjoerg Gaus, and Joerg Rech. 2014. “The counterfactual
self-estimation of program participants: Impact assessment without control
groups or pretests.” American Journal of Evaluation 35(1): 8-25.
Newton, Jethro. 2013. “Is quality assurance leading to enhancement?” In How Does
Quality Assurance Make a Difference?, edited by Fiona Crozier et al., 8-14.
Brussels: European University Association.
Norgbey, Enyonam B. 2016. “Debate on the appropriate methods for conducting impact
evaluation of programs within the development context.” Journal of
Multidisciplinary Evaluation 12(27): 58-66.
Panagiotou, George. 2003. “Bringing SWOT into focus.” Business Strategy Review
14(2): 8-10.
Panagiotou, George, and Riëtte van Wijnen. 2005. “The ‘telescopic observations’
framework: an attainable strategic tool.” Marketing Intelligence and Planning
23(2): 155-171.
Piercy, Nigel, and William Giles. 1989. “Making SWOT analysis work.” Marketing
Intelligence and Planning 7(5/6): 5-7.
Rosa, Maria J., Claudia Sarrico, and Alberto Amaral. 2012. “Academics’ perceptions on
the purposes of quality assessment.” Quality in Higher Education 18(3): 349-
366.
Seyfried, Markus, and Philipp Pohlenz. 2018. “Assessing quality assurance in higher
education: quality managers’ perceptions of effectiveness.” European Journal of
Higher Education 8(3): [insert page numbers].
Shah, Mahsood. 2012. “Ten years of external quality audit in Australia: evaluating its
effectiveness and success.” Assessment and Evaluation in Higher Education
37(6): 761-772.
Steel, David. 2011. “Causality, causal models, and social mechanisms.” In The SAGE
Handbook of the Philosophy of Social Sciences, edited by Ian C. Jarvie and
Jesus Zamora-Bonilla, 288-304. Thousand Oaks, CA: Sage.
Stensaker, Bjørn. 2008. “Outcomes of quality assurance: a discussion of knowledge,
methodology and validity.” Quality in Higher Education 14(1): 3-13.
Stensaker, Bjørn, Liv Langfeldt, Lee Harvey, Jeroen Huisman, and Don F.
Westerheijden. 2011. “An in-depth study on the impact of external quality
assurance.” Assessment and Evaluation in Higher Education 36(4): 465-478.
Suchanek, Justine, Manuel Pietzonka, Rainer Kuenzel, and Torsten Futterer. 2012. “The
impact of accreditation on the reform of study programmes in Germany.”
Higher Education Management and Policy 24(1): 1-24.
White, Howard. 2010. “A contribution to current debates in impact evaluation.”
Evaluation 16(2): 153-164.
Yin, Robert K. 2013. “Validity and generalization in future case study evaluations.”
Evaluation 19(3): 321-332.
Figures and Tables
Table 1. Telescopic Observations Matrix (TOM) (Panagiotou and van Wijnen 2005, 161)

Columns: one analysis area per letter of the acronym T-E-L-E-S-C-O-P-I-C O-B-S-E-R-V-A-T-I-O-N-S
Rows: Strengths | Weaknesses | Opportunities | Threats
Table 2. TOM for SWOT analysis of impact evaluation methodology of QA in HEIs

Columns (before-after comparison): Participants’ assessments | Analysis of documents and data | Causal social mechanisms
Columns (ex-post analysis): Participants’ assessments | Analysis of documents and data | Counterfactual self-estimation | Causal social mechanisms
Rows: Strengths | Weaknesses | Opportunities | Threats
Table 3. TOM area-specific strategy matrix for strengths, weaknesses, opportunities and threats identified by TOM (after Panagiotou and van Wijnen 2005, 164)

Columns: Weaknesses (W) 1. 2. 3. ... | Opportunities (O) 1. 2. 3. ... | Threats (T) 1. 2. 3. ...
Rows: Strengths (S) 1. 2. 3. ...
Table 4a. Methodological SWOTs of before-after comparison impact evaluation

Strengths:
1. Avoidance of complete dependence on ex-post available data

Weaknesses:
1. No explicit causal counterfactual available (cannot be overcome)

Opportunities:
1. Devising the causal network: causal social mechanisms; analytical models
2. Feasibility of dense longitudinal analyses

Threats:
1. Proper implementation of methodology
2. Attribution problem
3. Fluctuating respondent groups
4. Expenditure (workforce, time, money)
5. Dependence of impact evaluation on stakeholders’ biases (partiality)
Table 4b. Strategy matrix corresponding to Table 4a (S = strengths, W = weaknesses, O = opportunities, T = threats; ‘–’ = no strategy available)

S1/W1: –
S1/O1: Use data of simultaneous before-after surveys to develop hypotheses about causal network mechanisms
S1/O2: Carry out dense series of surveys to establish dense longitudinal analyses
S1/T1: –
S1/T2: Use hypotheses about causal network mechanisms to approach the attribution problem
S1/T3, S1/T4, S1/T5: –
Table 5a. Methodological SWOTs of ex-post analysis impact evaluation

Strengths:
1. Applicable without specific methodological preparation

Weaknesses:
1. No explicit causal counterfactual available (cannot be overcome)
2. Dependence on ex-post available data (cannot be overcome)

Opportunities: –

Threats:
1. Attribution problem
2. Fluctuating respondent groups
3. Expenditure (workforce, time, money)
4. Dependence of impact evaluation on stakeholders’ biases (partiality)
Table 5b. Strategy matrix corresponding to Table 5a

S1/W1: –
S1/O: –
S1/T1: Use data (correlation and causality) of QA procedures (e.g. evaluation recommendations, follow-up activities, and re-evaluations) to approach the attribution problem
S1/T2, S1/T3, S1/T4: –
Table 6a. Methodological SWOTs in impact evaluation referring to (change) assessments by participants and analysis of documents and data (before-after & ex-post)

Strengths:
1. Surveys of various types
2. Participant observation (e.g. in status seminars and workshops)
3. Analysis of documents and data

Weaknesses:
1. No explicit causal counterfactual available (cannot be overcome)
2. Ex-post: dependence on ex-post available data (cannot be overcome)

Opportunities: –

Threats:
1. Attribution problem
2. Adaptation of survey instruments to respondents’ context
3. Ensuring representativeness of respondent groups and high response rates
4. Fluctuating respondent groups
5. Expenditure (workforce, time, money)
6. Dependence of impact evaluation on stakeholders’ biases (partiality)
Table 6b. Strategy matrix corresponding to Table 6a

S1–S3/W1, W2: –
S1–S3/O: –
S1–S3/T1: Use data (correlation and causality) – from surveys, participant observation, document analysis – of QA procedures (e.g. evaluation recommendations, follow-up activities, and re-evaluations; participatory efficacy assessments) to approach the attribution problem
S1–S3/T2 to T6: –
Table 7a. Methodological SWOTs in impact evaluation referring to counterfactual self-estimation (before-after & ex-post)

Strengths:
1. Counterfactual available

Weaknesses:
1. Restriction to own intentional states (cannot be overcome)

Opportunities: –

Threats:
1. Attribution problem
2. Memorisation problems
3. Deficits in self-analysis of intentional states
4. Fluctuating respondent groups
5. Expenditure (workforce, time, money)
6. Dependence of impact evaluation on stakeholders’ biases (partiality)
Table 7b. Strategy matrix corresponding to Table 7a

S1/W1: –
S1/O: –
S1/T1: Use counterfactual of own intentional states to approach attribution problem
S1/T2 to T6: –
Table 8a. Methodological SWOTs in impact evaluation referring to causal social mechanisms (before-after & ex-post)

Strengths:
1. Explanation by mechanisms

Weaknesses: –

Opportunities: –

Threats:
1. Attribution problem
2. Fluctuating respondent groups
3. Expenditure (workforce, time, money)
4. Dependence of impact evaluation on stakeholders’ biases (partiality)
Table 8b. Strategy matrix corresponding to Table 8a

S1/W: –
S1/O: –
S1/T1: Use identification of cause-effect mechanisms to approach/solve attribution problem
S1/T2 to T4: –