
Performance or measurement?

J. STEPHEN TOWN

ABSTRACT
This paper has been updated from the article that appeared in the ‘Proceedings of the 2nd Northumbria International Conference on Performance Measurement in Libraries and Information Services 1997’. It questions whether current UK systems of performance measurement and associated data collection activities are appropriate, and suggests some hypotheses from which an improved framework might be developed.

Introduction

Are current systems of library performance measurement failing to demonstrate good performance and identify success? Are those in use, particularly in academic libraries in the UK, becoming less relevant to the concerns which we, as library managers, face because they fail to meet the main objective for which they were designed; that is, to demonstrate good performance and identify success? This paper contends that the current data collection methods and structures obscure, rather than illuminate, performance and provide a misleading picture of what performance is or should be; and that the implicit model of academic librarianship which informs the choice of current measures is becoming increasingly irrelevant and outdated. If we are to perform effectively in the future we need to change our performance measurement system. In effect the choice is between future performance and the current approaches to measurement, because behaviour tends to follow measures. Hence the question posed by the title: performance or measurement?

If the current model of measurement is wrong, what is it about it that is wrong? If we need a new model or system, how might it be developed? The first question is simply answered by recognising that current systems of measurement are based on inductive reasoning. We count everything we can and then attempt to construct a performance measurement system
based on the observations. Historically, library measures concentrated first on collections, then on activity associated with collections, and only then on users and usage. More recently, financial issues have impinged. This implicit order of significance can be seen clearly in the order in which the elements follow each other in the UK SCONUL Statistics (SCONUL, Annual). Thus we reach a data set based on accretion and practicality, with no underlying assumptions about why the elements are significant, but which is nevertheless assumed to equate to library performance measurement.

The answer to the second question suggests adopting a hypothetico-deductive approach to defining performance measures. In this paper, four hypotheses relating to academic libraries and their future are therefore used to deduce possible improved frameworks for performance measurement.

The recent history of performance measurement development in the UK provides cause for concern in that attempts have been made to define new frameworks, but these have foundered on apparent reluctance within the academic library community to engage with either different assumptions or changes in data collection.

The publication of the Effective Academic Library (hereafter EAL) (JFC, 1995) demonstrated a willingness to contemplate a broader framework. However, the prior existence of statistical collection seems to have steered the final product away from the logic of any new hypothesis and back towards counting what we can count or have always counted. The results of this were the area of ‘effectiveness’ being replaced by ‘delivery’; ‘customer satisfaction’ being deemed unworthy of national collection; and ‘integration’ being relegated to a set of qualitative statements. The result was a complex set of non-intuitive indicators based on traditional counting. When further work was undertaken to define a minimal set of performance indicators for the sector, one of the explicit boundaries was that no new counting must be required (Barton and Blagden, 1998). In addition, the final recommendations could only be labelled ‘management statistics’ rather than performance indicators, in recognition that they could not in themselves convey a target or a value. Thus it seems that the apparently new framework for performance measurement has not replaced the former, primarily inductive, approach.

What we are now collecting and publishing as library performance measures or statistics (SCONUL/HCLRG, 1999) is consequently some way from meeting the needs of practitioners in demonstrating performance to various stakeholders. It addresses only one of the stakeholder perspectives suggested by the ‘balanced scorecard’ approach (Kaplan and Norton, 1996), and that incompletely. It does not provide a simple and transparent account of the investment made in academic libraries nationally, even though that might be one of the main rationales for publication, nor does it provide adequate information for presentation to local stakeholders at any level. It is not clear from the figures which libraries might be ‘best’, and qualitative data is completely absent. Non-standard processes and services are not encompassed, and the
considerable amount of project-based work which developing libraries are involved in is not identifiable. The voice of the user is silent.

This statistical approach also does not match the way in which UK universities are increasingly required to account for their performance in teaching and research by mechanisms beyond financial and management data. These mechanisms involve both quality assurance and traditional patterns of peer review. The Research Assessment Exercise is an example of peer review; Academic Audit and the Assessment of the Quality of Education (known as TQA) are examples of quality assurance, although since TQA awards a score it too begins to look like peer review. The question arising from this is why UK academic libraries have developed and retained a performance measurement system based on batteries of performance indicators when the standard methodologies of the higher education enterprise now take a different approach.

The current statistics collected can be used to a limited extent for decision support and local casework. The limitations of the data for this may be seen from an example of recent local experience in deciding on serial cancellations. To achieve the final decision there was a need to develop a completely different additional database of information. This included data on usage of titles, cost by subject, academic judgements of serial quality, and electronic availability. Little of this data was routinely collected, despite the fact that this is our largest area of acquisitions expenditure, mainly because it is not required for national statistics. This is almost certainly because counting use of serials is difficult whilst counting loans of books is not. The logical balance of effort in assessing usage should surely be towards the largest expense items. This is not the case at present because the current systems are not designed for use.
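
To illustrate the kind of decision-support data set described above, here is a minimal sketch (in Python) that ranks serial titles for cancellation review by cost per use, combining usage, cost, academic judgement and electronic availability. Every field name, threshold and figure is a hypothetical illustration, not the actual database described in the example.

```python
# Sketch of a serials cancellation-review data set: rank candidate titles
# by cost per use. All fields and figures are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class Serial:
    title: str
    annual_cost: float      # subscription cost in GBP
    annual_uses: int        # recorded consultations or downloads
    academic_rating: int    # 1 (low) to 5 (high), from academic judgements
    online_available: bool  # an electronic alternative exists

def cost_per_use(s: Serial) -> float:
    # Unused titles get infinite cost per use, so they sort to the top.
    return s.annual_cost / s.annual_uses if s.annual_uses else float("inf")

def cancellation_candidates(serials: list[Serial], max_rating: int = 3) -> list[Serial]:
    """Titles rated at or below max_rating, worst cost per use first."""
    shortlist = [s for s in serials if s.academic_rating <= max_rating]
    return sorted(shortlist, key=cost_per_use, reverse=True)

if __name__ == "__main__":
    holdings = [
        Serial("Journal A", 1200.0, 6, 2, True),
        Serial("Journal B", 450.0, 180, 4, False),
        Serial("Journal C", 900.0, 0, 3, True),
    ]
    for s in cancellation_candidates(holdings):
        print(f"{s.title}: £{cost_per_use(s):.2f} per use")
```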

Four hypotheses for a new framework

In order to try and develop a new framework for academic library performance measurement in the UK employing a hypothetico-deductive approach, I would offer four different hypotheses which might assist in defining measures. These are:

• Total quality management (TQM)
• Digital developments
• Library ‘development stage’
• Staff as a key resource

Total quality management

Deming (1986) suggested that one of the ‘seven deadly diseases’ of western industry was ‘Management by use only of visible figures, with little or no consideration of figures that are unknown or unknowable’. This criticism could be levelled at the current state of library performance measurement generally, at least as it appears from national and international approaches to data collection and performance measurement. Visible measures in industry are short-term and financial, and ignore customer satisfaction, employee morale, and community impact. Library measures, whilst not solely focused on the former, have certainly neglected the latter.

Another problem with performance measurement is that the measures chosen implicitly begin to define what is important. G.L. Smith points out that ‘The reality of measuring unconstrained human behaviour is that the act of measuring a particular indicator will induce behaviours which have as their objective maximising performance of the indicator, virtually regardless of its effect on the organisation as a whole.’ In other words, ‘measures don’t track behaviour, but rather behaviour tracks measures’. Oakland (1993) criticises the performance measures used by industry: ‘Typical harmful summary measures of local performance are purchase price, plant efficiencies, direct labour costs, and ratios of direct to indirect labour. These are incompatible with quality improvement measures such as process and throughput times, delivery performance, inventory reductions, and increases in flexibility, which are first and foremost non-financial.’

If we analyse the kind of measures suggested by Deming and Oakland and assess the data collected in the SCONUL Statistics and the measures suggested in EAL, we can see the mismatch in Table 1. In almost all cases relevant measures are missing.

Table 1. Comparison of measures of Deming (1986) and Oakland (1993); SCONUL (Annual); and the Effective Academic Library (X = relevant measure missing)

Deming/Oakland             SCONUL Statistics       EAL
1. Satisfaction            X                       QA and local only
2. Morale                  X – Staff numbers       X – Efficiency
3. Impact                  X                       X – Delivery per student
4. Throughput time         X – Process numbers     X
5. Delivery performance    X – Delivery numbers    Local service standard?
                                                   X – Economy
6. Inventory reductions    X – Inventory size      X
7. Flexibility increases   X                       Local development targets?

Total quality management provides ready-made frameworks for performance measurement. Tenner and DeToro (1992) suggest that TQM consists of three activities: customer focus, process improvement, and total involvement of staff, which lead to continuous improvement. Customer focus and satisfaction might depend on successful identification of customers, understanding customer expectations, and understanding customers at a deeper level through techniques such as designed surveys, ‘mystery shopping’ and benchmarking. All have been tried in libraries, but with little impact on national and international performance measurement approaches. Critical success factors provide one basis for defining performance measures in a TQM organisation, and these are in use as a basis for measurement in the small number of academic libraries which have adopted TQM philosophies.

TQM and related philosophies can provide a number of other suggestions for performance measurement. The work of Zeithaml et al. (1990) and their SERVQUAL approach to measuring service quality is now being used in some academic libraries. The main result of drawing on a TQM hypothesis for performance measures is the inescapable conclusion that customer satisfaction is the critical measure.
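
For concreteness, the following minimal sketch computes SERVQUAL-style gap scores: for each survey item, the perception score minus the expectation score, averaged within each of the five service-quality dimensions named by Zeithaml et al. (1990). The item scores here are invented for illustration; a full SERVQUAL instrument uses 22 paired statements.

```python
# Sketch of the SERVQUAL gap calculation: gap = perception - expectation per
# item, averaged by dimension. Scores below are hypothetical 7-point responses.

from statistics import mean

DIMENSIONS = ["tangibles", "reliability", "responsiveness", "assurance", "empathy"]

def gap_scores(expectations: dict, perceptions: dict) -> dict:
    """Mean perception-minus-expectation gap per dimension.
    Negative values indicate service falling short of expectations."""
    return {
        dim: mean(p - e for e, p in zip(expectations[dim], perceptions[dim]))
        for dim in DIMENSIONS
    }

# Hypothetical paired item scores for one respondent group (two items each).
expectations = {"tangibles": [6, 5], "reliability": [7, 6],
                "responsiveness": [6, 6], "assurance": [6, 5], "empathy": [5, 5]}
perceptions = {"tangibles": [5, 5], "reliability": [5, 4],
               "responsiveness": [6, 5], "assurance": [6, 6], "empathy": [4, 4]}

for dim, gap in gap_scores(expectations, perceptions).items():
    print(f"{dim}: {gap:+.2f}")
```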

TQM hypothesis measurement – conclusions

Thus the measures, or activities which produce measures, suggested by this hypothesis are:

• Satisfaction surveys
• Designed surveys for improvement
• Benchmarking
• Customer care, involving measures of
  – Personal service
  – Materials service
• ‘Mystery shopper’
• SERVQUAL
• Process throughput and delivery times

Digital developments

The second suggestion for a hypothesis on which to base a future measurement system is the recognition of an increasingly digital future for information services. Four areas suggest themselves for consideration:


Information strategies and strategic measurement systems

There is a current demand by the UK funding councils for UK universities to have an ‘information strategy’. This demand is a further development of a trend in which there have been previous fashions for IT strategies, then IS strategies. The third phase arises from the recognition that university ‘information’ resides in places other than libraries and IT centres, and particularly in MIS data. Strategies are designed for change; therefore measures of the change achieved are going to be important in this context. Secondly, the strategy should be based on the information needs of those working within the institution; therefore measures and methods of assessing need are critical to a successful strategy.

The electronic library

The digital, virtual or electronic library is already, quite rightly, beginning to generate its own subfield of performance measurement. Irrespective of which model of the future we subscribe to, the inevitable shift of focus from the university library to the user’s desktop will have a profound effect on information services and the measurement of their performance. Firstly, the extreme view of complete ‘demediation’, with all information freely available on the Internet and the consequent death of libraries, books, and literacy, would also clearly result in the death of library performance measurement. Network activity measurement might need to replace it, but it seems to me that current efforts to produce counts of electronic document use across networks are being undertaken merely as an extension of the current forms of statistical collection. It seems inappropriate and illogical for the library element of networked information services to be seeking or demanding very complex counting of the use of certain types of electronic information just because they are replacing information formerly held in hard copy in the library, while no effort is made to count the mass of other information use happening over the network. Work done in acquiring, storing and lending traditional forms of information may have been worth recording in the past, when this involved effort; counting use in an interim stage may be interesting for various reasons; counting this activity in a mature electronic future seems pointless, when the library is contributing nothing to the supply chain and no longer ‘owns’ the service in any realistic sense. The collapse of the supply chain may also, of course, destroy the need for throughput measures. Future performance measurement will then need to be focused on the areas in which the future library service is actually active. ‘User education at distance’ measures may ultimately be all there is to measure.

Convergence

Converged services require a broader perspective on performance measurement. Whilst libraries in the UK have historically been much more active in statistical collection than university computer centres, there are elements to learn on both sides. Computer centres might teach libraries about project management and associated measures, and also about a more rigorous approach to help and enquiry work data through the use of help desk software. Some of the activities which led to the creation of converged services (internet access and gateways, the web, and campus information systems and intranets) require performance measures to be created for them. A majority of UK university library services now fall within some sort of converged context. This is not adequately reflected in national statistics, although there are good prospects for cooperation.

Teaching and learning developments

Management of collective learning systems employing multimedia programmes may become a future role for libraries; if so, appropriate performance measures will need to be developed.

Digital measurement – conclusions

This hypothesis suggests the need for:

• Project management measures
• Supply chain shortening measures
• Future abandonment of all collection and document delivery measures
• Help desk software-based measures (outcome)
• ‘User education at distance’ measures
• Collective learning system measures

Library development stage

Both the above hypotheses hint at the need for libraries in higher education to think more broadly about themselves, their role and philosophy, as well as about their systems of performance measurement. This point was made by Lancour as long ago as 1951 (Lancour, 1951), and reiterated by James Thompson more recently (Thompson, 1991). The latter’s ‘Redirection in Academic Library Management’ is based around a historical analysis of what is often the main vehicle for accounting for a university library’s activity: the annual report. Lancour suggests that there are three stages of academic library development:

• ‘Storehouse’ period
• ‘Service’ period
• ‘Educational function’ period

That measures from the first hypothesis above are not widely considered as a basis for library performance measurement reinforces Thompson’s conclusion that, over 40 years after Lancour wrote, our libraries have still not fully progressed to stage 2. The implication for performance measurement of moving to stage 3 is an acceptance that we must measure the educational impact of our activities. Previous authorities have suggested that ‘higher order’ or ‘impact’ effects are difficult to measure, interpret or act upon (Abbott, 1994). In a digital future there may be little else to fall back on. Cyert suggested that ‘the critical educational aim is ensuring that students learn to learn’; therefore perhaps we should set about demonstrating that our activities have had this effect on students by quantifying how their ‘information literacy’ has improved. This might involve measuring competence on arrival within the institution and competence on departure.
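
A minimal sketch of what such a ‘competence improvement’ measure might look like follows. It assumes, purely for illustration, a standard information-literacy test scored out of 100 and taken twice, on arrival and on departure; the student IDs and scores are invented.

```python
# Sketch of a 'competence improvement' measure: compare each student's
# information-literacy score on arrival with the score on departure and
# report the mean gain. All data below is hypothetical.

from statistics import mean

def mean_competence_gain(arrival: dict, departure: dict) -> float:
    """Mean per-student improvement, restricted to students tested twice."""
    tested_twice = arrival.keys() & departure.keys()
    return mean(departure[s] - arrival[s] for s in tested_twice)

# Hypothetical test scores out of 100, keyed by student ID.
arrival = {"s001": 42, "s002": 55, "s003": 38}
departure = {"s001": 71, "s002": 68, "s003": 60}

print(f"Mean information-literacy gain: {mean_competence_gain(arrival, departure):+.1f} points")
```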

A stronger educational role also demands closer organisational fit, and like the first hypothesis therefore suggests a stronger marketing activity and a greater importance for measures of ‘integration’. In addition, the idea of a library brand may be relevant. In reality most academic libraries have an implicit brand which their customers could readily define for them. In the digital future, when we no longer have a monopoly on the supply of information and we cannot rely on buildings or staff to reinforce our value, a recognisable brand for the information we supply, based strongly on its contribution to the educational enterprise, may be the guarantee of survival. ‘Our’ information and educational products may be differentiated from competitors’, and customer confidence can be built in our conduits or value-added services, resulting in a continuing partnership.

Library development stage measurement – conclusions

The following bases for measurement are suggested:

• Impact
• ‘Competence improvement’-based user measures
• Market segmentation
  – all financial measures
  – all collection measures
  – relationship and activity measures
• Brand definition and performance
• ‘Message’ penetration
  – Customers
  – Staff


Staff

Finally, a plea for staff issues to be reflected in performance measurement. It is broadly accepted that a key issue in operating a successful service is the performance of staff. There may be obvious reasons why there is a reluctance to share data on staff performance; but we should be able to account appropriately for what is usually our largest area of expenditure.

All three previous hypotheses suggest that staff and staff development are critical to the future of libraries. This applies in a TQM environment, where total involvement of staff is required to achieve excellence; in the digital future, where staff will only retain a role if they continue to add significant value and remain in ‘high touch’ with their customers; and in a library in the ‘educational function’ stage of development, where the library staff will be as critical to the educational process as academic staff. Our current performance measurement systems and statistical collections treat staff merely as snooker balls of different colours, where type and notional value are the main considerations. A more sophisticated approach is clearly required.

Leadership, teamwork, staff involvement in improvement, staff competence and commitment, morale and culture, and staff development are all critical to a successful library. There is currently no agreement on appropriate or common measurement systems in these areas within libraries. There are, however, existing measurement systems and standards, for instance the Investors in People (IIP) award scheme in the UK, which can provide suitable frameworks. Our experience locally of gaining IIP, of taking up Blanchard’s Situational Leadership II model, and of assessing staff views of departmental quality through a standard indicator have all been very significant factors in our organisational development.

Staff measurement – conclusions

Thus measures or systems which might assist in the area of staff include:

• Leadership model and penetration
• Leadership audit
• Departmental indicators
• Staff appraisal and development systems
• Team performance
• Cultural analysis
• IIP
• Time management, allocation and key result areas


New framework

In order to provide some positive synthesis from these hypotheses, I would offer the following.

Integration
The key to the future of information services in any scenario will be the ability to match services very closely to what their customers require. Funding models will almost certainly be increasingly linked to precise university activities, and the degree to which central services demonstrably play a role in supporting those activities will determine their share of the resources. Measurement of integration therefore becomes the primary data set required for survival. The logical conclusion is that all library measures should be presented from a subject perspective, as this is how universities are obliged to present the bulk of their data. All current approaches, by contrast, tend to be based on statistics for the overall library situation.
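
As a toy illustration of the subject perspective argued for above, the following sketch regroups a flat list of (subject, measure, value) records so that each subject’s figures can be reported together. The subjects, measure names and values are hypothetical.

```python
# Sketch of presenting library measures by subject rather than for the
# library as a whole. All records below are hypothetical illustrations.

from collections import defaultdict

# Each record: (subject, measure, value), e.g. drawn from acquisitions,
# loans and satisfaction data already held locally.
records = [
    ("Engineering", "spend_gbp", 85000), ("Engineering", "loans", 12400),
    ("Engineering", "satisfaction", 4.1),
    ("History", "spend_gbp", 31000), ("History", "loans", 9800),
    ("History", "satisfaction", 4.4),
]

by_subject: dict = defaultdict(dict)
for subject, measure, value in records:
    by_subject[subject][measure] = value

for subject, measures in sorted(by_subject.items()):
    print(subject, measures)
```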

Improvement
In order to develop towards the digital future, change is self-evidently necessary. Both customers and paymasters want improvement. This means libraries and information services must demonstrate effective management of development and improvement projects. The significant qualitative differences amongst UK university libraries are not usually apparent from published statistics, although hints may be drawn from trends. The qualitative differences may often stem from the willingness of the library to engage positively with change, and the degree to which the organisation can gear itself not just to coping with ‘business as usual’ but to achieving successful developments. These may be either practical service improvements or significant strategic or service developments. We need to develop measures which can be used to show both this responsiveness and the ability to manage and complete projects.

Customers
I do not believe that it is satisfactory to leave customer satisfaction measures out of national collections, or indeed for libraries to ignore customer-related data altogether. It could and should be one of the most powerful data sets in our armoury. In comparison to many other services and industries, our general satisfaction ratings tend to be very high. Rather than questioning the validity of this or blurring the results with discussions of comparative expectations, we should perhaps be taking some pleasure from the achievement. Integration and customer-related data together demonstrate ‘effectiveness’ in the academic setting. They can show, much more clearly than the general activity and usage data we currently collect,
that we are doing the right things and that what we do satisfies our customers.

Staff
Staff provide the key competitive difference in universities generally. Customers of universities often base their choice on teaching and research excellence, and those aspects are delivered by people. Libraries could share this approach by measuring and presenting staff performance more openly; in some areas, for instance staff development, we may often be ahead of our colleagues.

Value added
We operate an information delivery chain. That chain may be open to much greater competition in future, and we need measures to reflect the value we add. This cannot be done by simple quantitative counts of items selected, processed, indexed or catalogued. We need to develop systems to measure precisely what value these activities add for the customer.

Value for money
Despite criticisms above of financial measures, we all have stakeholders who require efficiency, value for money or simply low costs. Current published measures often obscure rather than illuminate comparisons of unit costs. It is an unusual business that cannot say precisely how much it costs to deliver any of its products, and I do not believe we will continue to get away with it. Work is obviously needed to reach agreement on how these unit cost measurements might be derived and standardised.
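
The following minimal sketch shows one way a standardised unit cost might be derived: the sum of all attributable cost headings divided by the units delivered. The cost headings, figures and the interlibrary-loan example are hypothetical assumptions, not an agreed standard.

```python
# Sketch of a unit-cost calculation: total attributable cost of a service
# divided by the units it delivered. Headings and figures are hypothetical.

def unit_cost(costs: dict, units_delivered: int) -> float:
    """Full cost per unit, summing all attributable cost headings."""
    if units_delivered <= 0:
        raise ValueError("units_delivered must be positive")
    return sum(costs.values()) / units_delivered

# E.g. cost per interlibrary loan supplied in a year.
ill_costs = {"staff": 64000.0, "supplier_fees": 18500.0, "systems": 4200.0}
print(f"Cost per ILL: £{unit_cost(ill_costs, 5600):.2f}")
```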

Learning
Finally, we need some measures which directly link our activities to learning. We need to demonstrate the contribution that we make to the enterprise, even if the enterprise itself finds it difficult to define outcome measures.

Conclusion

The conclusion is simply that the philosophy of performance should come before the science of measurement. Our existing systems and presentation of data currently define a concept of performance which is too narrow for both current and future activities. A new perspective on performance, encompassing a broader range of issues and activities, is likely to provide a better framework for measurement.


References

Abbott, C. (1994). Performance Measurement in Library and Information Services. Aslib Know How Series. London: Aslib.

Barton, J. and Blagden, J. (1998). Academic Library Effectiveness: A Comparative Approach. British Library Research and Innovation Report 120. BLRIC.

Deming, W.E. (1986). Out of the Crisis: Quality, Productivity and Competitive Position. Cambridge: Cambridge University Press.

JFC (1995). The Effective Academic Library. Bristol: HEFCE/Joint Funding Councils’ Ad-hoc Group on Performance Indicators for Libraries.

Kaplan, R.S. and Norton, D.P. (1996). The Balanced Scorecard: Translating Strategy into Action. Boston: Harvard Business School Press.

Lancour, H. (1951). Training for librarianship in North America. Library Association Record (Sept), pp. 280–284.

Oakland, J.S. (1993). Total Quality Management: The Route to Improving Performance. 2nd edn. Oxford: Butterworth-Heinemann.

SCONUL (Annual). Annual Library Statistics. London: SCONUL.

SCONUL/HCLRG (1999). UK Higher Education Library Management Statistics 1997–98. SCONUL/HCLRG.

Tenner, A.R. and DeToro, I.J. (1992). Total Quality Management: Three Steps to Continuous Improvement. Wokingham: Addison-Wesley.

Thompson, J. (1991). Redirection in Academic Library Management. London: Library Association.

Zeithaml, V.A., Parasuraman, A. and Berry, L.L. (1990). Delivering Quality Service: Balancing Customer Perceptions and Expectations. New York: Free Press.

Author

Stephen Town is Director of Information Services at the Shrivenham Campus of Cranfield University and also holds the title of Deputy University Librarian.
