Performance or measurement
J. STEPHEN TOWN
ABSTRACT
This paper has been updated from the article that appeared in the ‘Proceedings of the 2nd Northumbria International Conference on Performance Measurement in Libraries and Information Services 1997’. It questions whether current UK systems of performance measurement and associated data collection activities are appropriate and suggests some hypotheses from which an improved framework might be developed.
Performance Measurement and Metrics, vol. 1, no. 1, April 2000, pp. 43–54

Introduction
This statistical approach also does not match the way in which UK universities are increasingly required to account for their performance in teaching and research by other mechanisms in addition to financial and management data. These mechanisms involve both quality assurance and traditional patterns of peer review. The Research Assessment Exercise is an example of the latter; Academic Audit and the Assessment of the Quality of Education (known as TQA) are examples of the former, although because a score is awarded in TQA it also begins to resemble peer review. The question this raises is why UK academic libraries have developed and retained a performance measurement system based on batteries of performance indicators when the standard methodologies of the higher education enterprise now take a different approach.
The current statistics collected can be used to a limited extent for decision support and local casework. The limitations of the data for this may be seen from a recent local experience of deciding on serial cancellations. Reaching the final decision required the development of a completely different, additional database of information. This included data on usage of titles, cost by subject, academic judgements of serial quality, and electronic availability. Little of this data was routinely collected, despite the fact that serials are our largest area of acquisitions expenditure, mainly because it is not required for national statistics. This is almost certainly because counting use of serials is difficult whilst counting loans of books is not. The logical balance of effort in assessing usage should surely be towards the largest expense items. This is not the case at present because the current systems are not designed for use.
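Purely by way of illustration, the kind of decision-support data described above can be brought together very simply. The following sketch (in Python, with entirely hypothetical titles, subjects and figures) ranks serials by cost per recorded use, one of the factors that such a cancellation exercise might weigh alongside academic judgements of quality and electronic availability.

# Illustrative sketch only: hypothetical serials data of the kind assembled
# for a cancellation decision (usage, cost, subject, quality judgement,
# electronic availability). Titles and figures are invented.
serials = [
    {"title": "Journal A", "subject": "Engineering", "annual_cost": 1200.0,
     "recorded_uses": 15, "academic_rating": 4, "electronic": True},
    {"title": "Journal B", "subject": "Management", "annual_cost": 450.0,
     "recorded_uses": 60, "academic_rating": 3, "electronic": False},
    {"title": "Journal C", "subject": "Engineering", "annual_cost": 2100.0,
     "recorded_uses": 4, "academic_rating": 2, "electronic": True},
]

def cost_per_use(record):
    # Guard against titles with no recorded use at all.
    uses = max(record["recorded_uses"], 1)
    return record["annual_cost"] / uses

# Rank titles from worst to best value as one input to the decision;
# quality judgements and electronic availability would temper the ranking.
for record in sorted(serials, key=cost_per_use, reverse=True):
    print(f'{record["title"]}: £{cost_per_use(record):.2f} per use '
          f'({record["subject"]}, rating {record["academic_rating"]})')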
In order to try to develop a new framework for academic library performance measurement in the UK, employing a hypothetico-deductive approach, I would offer four different hypotheses which might assist in defining measures. These are:
Deming (1986) suggested that one of the ‘seven deadly diseases’ of western industry was ‘Management by use only of visible figures, with little or no consideration of figures that are unknown or unknowable’.
Tenner and DeToro (1992) suggest that TQM consists of three activities: customer focus, process improvement, and total involvement of staff, which lead to continuous improvement. Customer focus and satisfaction might depend on successful identification of customers, understanding customer expectations, and understanding customers at a deeper level through techniques such as designed surveys, ‘mystery shopping’ and benchmarking. All have been tried in libraries, but with little impact on national and international performance measurement approaches. Critical success factors provide one basis for defining performance measures in a TQM organisation, and these are in use as a basis for measurement in the small number of academic libraries which have adopted TQM philosophies. Measures which might assist under this hypothesis include:
• Satisfaction surveys
• Designed surveys for improvement
• Benchmarking
• Customer care, involving measures of:
  – Personal service
  – Materials service
• ‘Mystery shopper’
• SERVQUAL (see the sketch following this list)
• Process throughput and delivery times
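To make the SERVQUAL entry above concrete: SERVQUAL scores a service as the gap between customers’ perceptions and their expectations across a set of service dimensions. The sketch below is illustrative only; the dimension names follow the standard instrument, but the respondents and ratings are invented.

# Illustrative SERVQUAL gap calculation: gap = perception - expectation
# for each dimension, averaged across respondents. Ratings are invented.
dimensions = ["tangibles", "reliability", "responsiveness", "assurance", "empathy"]

# Each respondent gives an expectation and a perception score (1-7) per dimension.
responses = [
    {"expectation": {"tangibles": 6, "reliability": 7, "responsiveness": 6,
                     "assurance": 6, "empathy": 5},
     "perception":  {"tangibles": 5, "reliability": 6, "responsiveness": 4,
                     "assurance": 6, "empathy": 5}},
    {"expectation": {"tangibles": 5, "reliability": 7, "responsiveness": 7,
                     "assurance": 6, "empathy": 6},
     "perception":  {"tangibles": 5, "reliability": 5, "responsiveness": 5,
                     "assurance": 6, "empathy": 6}},
]

for dim in dimensions:
    gaps = [r["perception"][dim] - r["expectation"][dim] for r in responses]
    mean_gap = sum(gaps) / len(gaps)
    # A negative gap means the service falls short of expectation on that dimension.
    print(f"{dim}: mean gap {mean_gap:+.2f}")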
Digital developments
Convergence
Both the above hypotheses hint at the need for libraries in higher education to think more broadly about themselves, their role and philosophy, as well as about their systems of performance measurement. This point was made by Lancour as long ago as 1951 (Lancour, 1951), and reiterated more recently by James Thompson (Thompson, 1991). The latter’s ‘Redirection in Academic Library Management’ is based around a historical analysis of what is often the main vehicle for accounting for a university library’s activity: the annual report. Lancour suggests that there are three stages of academic library development:
• ‘Storehouse’ period
• ‘Service’ period
• ‘Educational function’ period
That measures from the first hypothesis above are not widely considered as a basis for library performance measurement reinforces Thompson’s conclusion that, over 40 years after Lancour wrote, our libraries have still not fully progressed to stage 2. The implication for performance measurement of moving to stage 3 is an acceptance that we must measure the educational impact of our activities. Previous authorities have suggested that ‘higher order’ or ‘impact’ effects are difficult to measure, interpret or act upon (Abbott, 1994). In a digital future there may be little else to fall back on. Cyert suggested that ‘the critical educational aim is ensuring that students learn to learn’; therefore perhaps we should set about demonstrating that our activities have had this effect on students by quantifying how their ‘information literacy’ has improved. This might involve measuring competence on arrival within the institution and competence on departure.
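Purely as an illustration of such an arrival-to-departure measure, the following sketch computes an average improvement and a normalised gain (improvement as a proportion of the headroom still available) from hypothetical information literacy test scores; the scale, students and scores are all invented.

# Illustrative sketch: hypothetical information literacy scores (0-100)
# for the same students on arrival at and departure from the institution.
students = {
    "student_1": {"arrival": 42, "departure": 71},
    "student_2": {"arrival": 55, "departure": 80},
    "student_3": {"arrival": 63, "departure": 69},
}

MAX_SCORE = 100

improvements = []
normalised_gains = []
for scores in students.values():
    gain = scores["departure"] - scores["arrival"]
    improvements.append(gain)
    # Normalised gain: improvement as a share of the improvement still possible.
    headroom = MAX_SCORE - scores["arrival"]
    normalised_gains.append(gain / headroom if headroom else 0.0)

print(f"mean improvement: {sum(improvements) / len(improvements):.1f} points")
print(f"mean normalised gain: {sum(normalised_gains) / len(normalised_gains):.2f}")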
A stronger educational role also demands closer organisational fit, and like the first hypothesis therefore suggests a stronger marketing activity and a greater importance for measures of ‘integration’. In addition, the idea of a library brand may be relevant. In reality most academic libraries have an implicit brand which their customers could readily define for them. In the digital future, when we no longer have a monopoly on the supply of information and we cannot rely on buildings or staff to reinforce our value, a recognisable brand for the information we supply, based strongly on its contribution to the educational enterprise, may be the guarantee of survival. ‘Our’ information and educational products may be differentiated from competitors’, and customer confidence can be built in our conduits or value-added services, resulting in a continuing partnership. Measures suggested by this hypothesis might therefore include:
• Impact
• ‘Competence improvement’-based user measures
• Market segmentation of:
  – all financial measures
  – all collection measures
  – relationship and activity measures
• Brand definition and performance
• ‘Message’ penetration among:
  – Customers
  – Staff
Staff
All three previous hypotheses suggest that staff and staff development are critical to the future of libraries. This applies in a TQM environment, where total involvement of staff is required to achieve excellence; in the digital future, where staff will only retain a role if they continue to add significant value and remain in ‘high touch’ with their customers; and in a library in the ‘educational function’ stage of development, where the library staff will be as critical to the educational process as academic staff. Our current performance measurement systems and statistical collections treat staff merely as snooker balls of different colours, where type and notional value are the main considerations. A more sophisticated approach is clearly required.
Thus measures or systems which might assist in the area of staff include:
New Framework
Integration
The key to the future of information services in any scenario will be the ability to match services very closely to what their customers require. Funding models will almost certainly be increasingly linked to precise university activities, and the degree to which central services demonstrably play a role in supporting those activities will determine their share of the resources. Measurement of integration therefore becomes the primary data set required for survival. The logical conclusion is that all library measures should be presented from a subject perspective, as this is how universities are obliged to present the bulk of their data. All current approaches tend to be based on statistics for the overall library situation.
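As a hypothetical illustration of what presenting measures from a subject perspective might involve, the sketch below regroups a few invented library measures by academic subject rather than reporting a single library-wide total.

from collections import defaultdict

# Invented per-subject records of the kind a subject-based presentation would need.
records = [
    {"subject": "Engineering", "measure": "loans", "value": 12400},
    {"subject": "Engineering", "measure": "expenditure", "value": 85000},
    {"subject": "Management", "measure": "loans", "value": 9800},
    {"subject": "Management", "measure": "expenditure", "value": 61000},
]

# Regroup the measures under the subject they support, rather than
# reporting one figure for the overall library situation.
by_subject = defaultdict(dict)
for rec in records:
    by_subject[rec["subject"]][rec["measure"]] = rec["value"]

for subject, measures in sorted(by_subject.items()):
    loans = measures.get("loans", 0)
    spend = measures.get("expenditure", 0)
    print(f"{subject}: {loans} loans, £{spend} expenditure")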
Improvement
In order to develop towards the digital future, change is self-evidently necessary. Both customers and paymasters want improvement. This means libraries and information services must demonstrate effective management of development and improvement projects. The significant qualitative differences amongst UK university libraries are not usually apparent from published statistics, although hints may be drawn from trends. The qualitative differences may often stem from the willingness of the library to engage positively with change, and the degree to which the organisation can gear itself not just to coping with ‘business as usual’ but to achieving successful developments. These may be either practical service improvements or significant strategic or service developments. We need to develop measures which can be used to show both this responsiveness and the ability to manage and complete projects.
Customers
I do not believe that it is satisfactory to leave customer satisfaction measures out of national collections, or indeed for libraries to ignore customer-related data altogether. Such data could and should be one of the most powerful data sets in our armoury. In comparison to many other services and industries, our general satisfaction ratings tend to be very high. Rather than questioning the validity of this or blurring the results by discussions of comparative expectations, we should perhaps be taking some pleasure from the achievement. Integration and customer-related data together demonstrate ‘effectiveness’ in the academic setting. They can show, much more clearly than the general activity and usage data we currently collect, that we are doing the right things and that what we do is satisfying our customers.
Staff
Staff provide the key competitive difference in universities generally. Customers of universities often base their choice on teaching and research excellence, and those aspects are delivered by people. Libraries could share this approach by measuring and presenting staff performance more openly; in some areas, for instance staff development, we may often be ahead of our colleagues.
Value added
Learning
Finally, we need some measures which directly link our activities to learning.
We need to demonstrate the contribution that we make to the enterprise, even
if the enterprise itself finds it difficult to define outcome measures.
Conclusion
References
Kaplan, R.S. and Norton, D.P. (1996). The Balanced Scorecard: Translating
Strategy Into Action. Boston: Harvard Business School Press.
Tenner, A.R. and DeToro, I.J. (1992). Total Quality Management: Three
Steps to Continuous Improvement. Wokingham: Addison-Wesley.
Author