
PERFORMANCE MEASUREMENT IN PUBLIC SECTOR SERVICES:

PROBLEMS AND POTENTIAL

Patrick Moriarty and Damian Kennedy


Department of Mechanical Engineering, Monash University-Caulfield Campus,
Australia.

Abstract
The aim of this paper is to examine both the benefits and difficulties of using
performance measurement for public sector services. We find that performance
measurement can deliver benefits if two conditions are met. First, it is necessary that a
coherent set of aims can be articulated, and be acknowledged as valid by the various
stakeholders. Second, neither the definition of the customers, nor the articulation of
their preferences, must present insuperable difficulties. If these conditions are met,
performance measurement should present no more inherent difficulties than in other
organisations, and should bring similar improvements if properly managed.

Introduction

Organisations have used performance measurement in some form for decades, if not centuries. In the past,
though, its potential for aligning action with organisational goals was not always recognised, or acted
upon (Eccles, 1991). Performance measurement is here to stay, as it is a necessary technique for solving
the age-old problem of improving organisational effectiveness. Rigby (2001), in his periodic surveys of
management techniques, noted that 44% of the North American organisations surveyed in 1999 used the
Balanced Scorecard. Benchmarking, needed for the development of performance standards (Blundell and
Murdock, 1997), was used by fully 76%. In Australia, performance measurement in the form of the
Balanced Scorecard (Kaplan and Norton, 1996) has been actively promoted for use in the public service
sector (Jones, 2001). Measurement of key non-financial performance factors is also widespread in the
private sector in Australia (Juchau, 2000).

Unlike the private business sector, public sector service organisations usually operate without the
discipline of market competition. Performance measurement is thus being increasingly considered for use
in these organisations as a substitute for market pressures. This change in turn is being driven by the desire
to contain costs, and to improve economy, efficiency, and effectiveness in the public sector (Wilkinson et
al., 1998; Jones, 2001). A paradoxical situation arises in that although the need for performance
measurement is great, the nature of many important public sector services makes it very difficult to apply.

The aim of this paper is to examine both the benefits and difficulties of using performance measurement
for various public sector services. This enables identification of those organisations which can benefit
greatly from performance measurement, and those where benefits will be more modest. The method
employed is a wide-ranging literature survey, supplemented by case studies. The case studies, mainly from
Australia, are drawn from the public sector, including public health, transport, and education, especially
tertiary. These three sectors account for the majority of outlays by local, state and federal governments
(Australian Bureau of Statistics, 2000). The main case study, however, is Centrelink, an organisation
which now delivers a variety of Commonwealth Government services on behalf of a number of diverse
client agencies, such as the tax office and the immigration department (Centrelink, 2001).

A common criticism of much writing on management is that it pays too little attention to earlier work.
Accordingly, the larger body of literature on service quality and benchmarking is also examined, since
much of the discussion of these topics is directly relevant to, and overlaps with, that on performance
measurement. Indeed, Evans and Lindsay (2002) point out that in the Baldrige National Quality Award
Scoring Guidelines, the term ‘performance’ has been substituted for the term ‘quality’ since the
mid-1990s. In addition, reference
is also made to some recent research in the areas of tertiary education and public policy making.

The paper identifies two key areas where these organisations are likely to differ most from for-profit
businesses: the formulation of strategic objectives, and the problem of customer definition and
satisfaction. Accordingly, the paper mainly focuses on how difficulties in these two areas impact on the
likely success of public sector agency performance measurement. Essentially, this study is concerned with
the preconditions for successful application, rather than the measurements themselves. Further, this paper,
while recognising that the formulation of performance indicators and their measurement will not in
themselves lead to improved organisational effectiveness, does not discuss performance management. If
meaningful performance measurements are not possible, their management is not relevant. Our
conclusions, though, should be relevant to other industrialised countries, because the difficulties facing
performance measurement found here are also present in other advanced economies.

Difficulties in defining public sector objectives

Because the profit motive is not the dominant driver, non-business service organisations often experience
difficulties in articulating strategies for achieving a coherent set of objectives (Forbes, 1998). Kaplan
(2001), in his study of nonprofit organisations, has emphasised that: ‘The start of any performance
measurement system has to be a clear strategy statement’. In practice, developing such strategies should
not usually present a problem for most voluntary organisations, and should be very easy for single-issue
environmental organisations. But many government agencies acquire new objectives by accretion, so that
goals such as ‘environmental sustainability’ will be added to other existing, more traditional, objectives in
order to satisfy demands from one section of the public. Changes of government can likewise produce new
objectives. The end result may be an incompatible set of objectives, such that it is not possible to satisfy
them all simultaneously. This section examines the way in which conflicts of objectives can arise at
present in government sector agencies, and argues that such conflicts are likely to increase in the future.

Case study: state highway authorities


The various state highway authorities in Australia well illustrate the problem of conflicting objectives in
public sector organisations. Like education and public health, transport, particularly in urban areas,
generates externalities, both positive and negative. Positive externalities are not problematic here because
these general benefits will most likely be publicised by the relevant authorities. It is the existence of
significant negative externalities which makes it so difficult for these organisations to define a coherent set
of objectives, and so to develop a meaningful set of performance measures.

Road traffic is a major contributor to greenhouse gas emissions and oil depletion, both in Australia and
globally. But it can also generate more local problems. Particularly in urban areas, major roads such as
freeways produce a variety of negative side effects, such as noise, air pollution, difficulty for non-
motorised modes, and separation of communities. New major arterials also require much land, leading to
loss of homes or parkland. These side effects will usually be unevenly distributed spatially. It is the
existence of these problems, both local and global, which gives rise to environmentally-focused groups
which contest the objectives that the highway authorities would set themselves (or would be set by the
government) in the absence of this pressure.

The controversy over the extension of the Eastern Freeway in Melbourne, Victoria, shows the way in
which objectives can be in conflict. The findings of a government-appointed panel to assess the options
were that upgrading and extending the public transport (electric trains and trams, and buses in this case)
would be the cheapest as well as the most environmentally benign option (Gibson, 1990). However, a
later internal study by VicRoads, the state highway authority (VicRoads, 1995), was accepted by the
state government. The decision was made to build the freeway, despite it being rated the worst option
from the environmental viewpoint. As long as environmental effects can be effectively ignored, the state
highway authorities can themselves ignore conflicting objectives. Only those groups who would like a
broader approach to transport planning will be disappointed. But, if, as looks likely, environmental
objectives become important constraints as concerns rise about climate change and oil depletion, highway
authorities themselves could experience real organisational difficulties.

Benchmarking is also relevant here. The different transport modes available have very different fatality
rates per 100 million passenger-kilometres. Traffic accidents are, of course, well-recognised as an
unwanted side effect of road travel, and the highway authorities dedicate much of their efforts to reducing
their number and severity. Performance measures are best calibrated against best practice in the field, not
against the results of the previous year for the same organisation (Juran, 1992). The state highway
authorities make traffic fatality comparisons between the various states of Australia, and also with
countries overseas. However, comparisons are seldom made with public transport fatality rates per 100
million passenger-km. Why are performance targets not set with the much lower rates for rail travel as the
relevant benchmark? It might be argued that the comparison is unfair because rail uses well-trained
drivers, has its own right-of-way, and runs on fixed rails, which makes it inherently safer. But if it is
inherently safer, and safety is important, then there is a strong case for adopting it as the benchmark. The
conclusion is that the decision as to which benchmark is appropriate is inevitably a political one.
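To make the benchmarking arithmetic concrete, a minimal sketch is given below. It computes fatality
rates per 100 million passenger-km for several modes and expresses each as a multiple of the rail
benchmark. The fatality and travel figures are hypothetical placeholders, not actual Australian statistics.

```python
# Illustrative sketch only: the fatality and passenger-km figures below are
# hypothetical placeholders, not actual Australian statistics.

def fatality_rate(fatalities, passenger_km):
    """Fatalities per 100 million passenger-km."""
    return fatalities / passenger_km * 1e8

modes = {
    # mode: (annual fatalities, annual passenger-km) -- invented values
    "car":  (400, 50e9),
    "bus":  (10, 5e9),
    "rail": (5, 10e9),
}

rates = {mode: fatality_rate(*data) for mode, data in modes.items()}
benchmark = rates["rail"]  # best practice in the field, as argued above

for mode, rate in sorted(rates.items(), key=lambda kv: kv[1]):
    print(f"{mode:>5}: {rate:.2f} per 100m pax-km "
          f"({rate / benchmark:.1f} x rail benchmark)")
```

Whichever mode is taken as the benchmark, the calculation itself is trivial; the contested step is the
choice of benchmark, which is the political decision referred to above.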

Future problems for objectives setting


Defining objectives for government sector agencies promises to become even more difficult in the future,
for two main reasons. First, the welfare consensus of the post-war era has broken down. In the 1950s and
1960s there was widespread agreement on, and commitment to, the welfare state (Young, 1999). This
consensus made it much easier to formulate policy objectives. Today, populations in western countries,
including Australia, are much more heterogeneous than in the past, with a much greater range of lifestyles
and cultures, which all generate claims for special consideration. Legislation on the environment and on
equal opportunity has also given various groups more legal standing in seeking to influence government
policy.

The second obstacle facing the attainment of a unified set of objectives arises because of the mounting
severity of environmental and resource problems. All government sectors where resource/environmental
problems are relevant, including those concerned with forests, water, energy supply, agriculture, as well as
transport and urban planning, are likely to encounter this difficulty. The Intergovernmental Panel on
Climate Change (IPCC) projects that global temperatures will rise by up to 5.8 degrees Celsius over the
21st century (Wigley and Raper, 2001). Global oil production could well peak in the next decade, as non-
OPEC oil reserves are exhausted, and OPEC oil faces production constraints in meeting the increased
demand (Bakhtairi, 2002). These and other problems could fundamentally change the way that transport
planning, for instance, is conceptualised.

Difficulties in defining public sector customers and their preferences

Assessing customer satisfaction is a vital part of performance measurement, just as it is for Total Quality
Management. Neely (1998) makes this clear: ‘For the purposes of business performance the eyes of the
customer are the only ones that matter.’ Existing performance measurement frameworks (such as the
Balanced Scorecard or the self-assessment frameworks used for the various national quality awards),
invariably include a customer dimension, whether they are designed for business, government, or private
non-profit organisations. Thus the original Balanced Scorecard used four perspectives: financial,
customer, internal, and learning and growth. From the customer perspective, the central question posed
was: ‘How do we create value for our customers?’ (Kaplan and Norton, 1996). It follows that any
difficulties in unambiguously defining customers (or their wants) will translate into difficulties in devising
performance indicators.

Students at educational institutions, patients in hospitals, and clients of government welfare agencies, are
now often regarded as customers. But several problems can arise with this transfer of the notion of
customer from business to non-business organisations. In the private sector, the customer usually not only
pays for the service but also receives it. But in the non-business sector, other stakeholders can often
equally plausibly be regarded as customers, as Kaplan and Norton (2001) point out. Similarly, Morgan
and Murgatroyd (1994) show that the definition of who is the most important customer may be especially
complex in the public sector. Of course, it is well recognised that problems often occur in devising
appropriate performance indicators for customer satisfaction (or indeed, for any of the other dimensions in
a performance measurement framework) in business organisations (Castellano and Roehm, 2001), but
these difficulties can be overcome (Neely, 1998). But the difficulties faced by some public sector agencies
can be more intractable, particularly in education.

Case study: tertiary education
Difficulties with the customer concept do not arise merely because there are different customer groups,
such as donors and clients in a voluntary welfare organisation. In these organisations, presumably, the
aims of the donors are not in conflict with those of the clients; people donate money to such charities
expressly to help the needy. But with the tax-supported activities of the public service sector, such a
harmony of interests between those paying and those benefitting cannot be assumed. Tertiary education
provides a good illustration. In Australia, not only are primary and secondary education provided free to
all, but tertiary education also receives a heavy tax-payer subsidy, as in other countries. Also, nearly all
universities are state-run. Are the customers then only the students themselves, or should the general
public in their role as tax payers also be included? What about their potential future employers? It is also
possible to consider the students’ parents and the teaching staff as customers (Birnbaum, 2000). It is
unlikely that all these groups want the same things from tertiary education as the students themselves. The
teaching staff themselves are also divided. One faction still values the traditional aims of the university
(such as ‘the pursuit of knowledge’) while another faction, together with the administration, favour a more
market-oriented view of education.

Even if students alone are considered as the customers, there are paradoxes, as tertiary students may well
want different things from their education at different times. For example, they want both a prestigious
degree and, particularly around examination time, easy assessment, which are contradictory desires.
Difficult questions about the meaning of customer satisfaction also arise with student assessment of
faculty teaching. This is normally measured during or immediately after the subject has been taught. But
as Einstein (Swainson, 2000) reminds us, education is what remains if all we have learned has been
forgotten. Students may not like the course being assessed at the time, or may find it difficult, but years
later may come to see it as their most important or useful subject. Thus a timely measurement of customer
satisfaction, considered important for feedback to teaching staff, may give a misleading indication.

Indeed, a defensible view of tertiary teaching is one that sees an important role of teaching staff as
motivating an interest in the subject, so that the students themselves want to learn more about it in later
life. There are poor courses and poor lecturers, and some student input is desirable, but students alone
cannot be the final judges of course quality. According to Mace (2001): ‘The nature of the educational
transaction means that there is always some submission of the learner to the knowledge and expertise of
the teacher’. In summary, measuring student satisfaction is an inherently ambiguous exercise. It is not
possible to devise an unambiguous set of performance measures, weighted by customer assessment of
relative importance, as for instance, FedEx has done for customer service quality.

Even if students were clear about what they wanted, the needs of other customers would still have to be
considered. The claims of taxpayers have most weight when they provide all the funding, as with
government school education. Future employers are another stakeholder group whose claims cannot be
ignored, but even they may not be clear on what they want from prospective graduate employees. Courses
that are very relevant to their immediate needs may also be too narrow, and graduates taking such courses
may have difficulty in adapting to the rapid changes that are a feature of today’s organisations. All the
stakeholder groups, in other words, may find their interests best served by relying largely on the expertise
and professionalism of the university and its teaching staff.

Another fundamental problem occurs when private sector notions are used in education. All organisations,
whether public or private, need to concern themselves with economy, efficiency and effectiveness.
Efficiency measures how well organisations use their resources (inputs) to produce the services they
deliver (outputs) (Jones, 2001). The distinction between inputs and outputs is basic for performance
measurement. But when notions of efficiency in education are borrowed from the business sector, the
distinction is not always clear. Efforts to reduce costs per student implicitly treat students as an output. On
the other hand, when teaching effectiveness is being evaluated, the students and their attributes are usually
treated as inputs (Mace, 2001).
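A small worked example, using invented figures, shows how the efficiency reading shifts with where
students are placed in the input-output relation: treating students as an output yields a cost-per-student
measure, while treating them as an input yields an outcome-per-student measure.

```python
# Hypothetical figures illustrating the input/output ambiguity noted above.
annual_cost = 50_000_000        # total teaching expenditure ($), invented
enrolled_students = 10_000      # student load, invented
graduates_in_work = 6_500       # graduates in work or further study, invented

# Students treated as an OUTPUT: efficiency read as cost per student produced.
cost_per_student = annual_cost / enrolled_students

# Students treated as an INPUT: efficiency read as outcomes per student admitted.
outcome_rate = graduates_in_work / enrolled_students

print(f"Students as output: ${cost_per_student:,.0f} per student")
print(f"Students as input:  {outcome_rate:.0%} positive outcomes per student")
```

The two readings reward quite different behaviour (cutting cost versus improving outcomes), which is
precisely the ambiguity that arises when business notions of efficiency are borrowed for education.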

Case study: health


Overseas, much effort has been put into devising quality of service and performance measurement
indicators in the health sector (see, for example, Handler et al. (2001) and Sower et al. (2001)). In
ordinary health care, as in tertiary education, the notion of the customer is likewise problematic. As Mayer
and Cates (1999) put it: ‘Are individuals seeking care, patients, or are they customers?’ Their simple
answer is that if they are ‘horizontal’ they must be regarded as patients; if ‘vertical’, they must be
considered as customers. Unconscious accident victims are clearly in no position to make a choice as a
customer, but even in less extreme circumstances, a sick person must rely heavily on the expertise and
professionalism of the medical staff.

In public health the primary customers might again be the patients, but if they have an infectious disease,
their preferences may need to be balanced against the requirement to protect the general public. In this
case, the potential for conflict between the interests of the primary customer and the public good is
illustrated most starkly.
but at the same time may help promote the rise of drug-resistant pathogens. Again, as for assessment of
tertiary education, a time dimension is involved. Excessive use of antibiotics may produce immediate
benefits (and customer satisfaction), but the long-term negative side effects may not be apparent for years.
How are the immediate benefits to primary customers to be balanced against the longer-term interests of a
far larger group? Dilemmas of this type are common in public policy.

Centrelink: a public sector success story

Since 1999, Centrelink has used a version of the Balanced Scorecard in an attempt to improve its
performance (Centrelink, 2001). Centrelink has a staff of over 24,000, and 6.3 million customers
throughout Australia. In 2000/01 it made payments of $A51.7 billion on behalf of its 20 public sector
client agencies. Associated with each of the organisation’s goals is a set of ‘Key Performance Indicators’
(KPIs) and Centrelink reports monthly on its performance against selected KPIs to its Board of
Management.

Centrelink has been able to avoid many of the difficulties besetting public sector agencies in defining
customers, even though its client organisations include those in the sectors discussed in detail here:
transport, education and health. Centrelink only administers policy on behalf of its diverse client agencies,
and this limitation is presumably recognised by its customers. For this reason, Centrelink’s KPIs for
customer satisfaction with service, such as ‘appointment wait time’ and ‘call wait time’ (Centrelink, 2001),
should present no more problems than do the similar indicators in a private sector business. In line with
the increasing heterogeneity of the Australian population discussed above, the organisation does face the
problem of increasing complexity of customer inquiries and needs, which is recognised in the relevant
KPIs for customer satisfaction. Thus for ‘tier 1’ complaints the measure is the proportion finalised within
two working days, but for ‘tier 2’ and ‘tier 3’, it is the proportion finalised within five and ten working
days, respectively.
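These tiered complaint indicators amount to a proportion-finalised-within-deadline calculation. The
sketch below shows one plausible way such a KPI could be computed; the tier thresholds follow the text,
but the complaint records and field layout are invented for illustration and do not reflect Centrelink’s
actual systems.

```python
# Sketch of a tiered 'proportion finalised within N working days' KPI.
# Thresholds follow the text; the sample complaint records are invented.
import numpy as np

TIER_TARGETS = {1: 2, 2: 5, 3: 10}  # working days allowed per complaint tier

# (tier, date lodged, date finalised) -- hypothetical records
complaints = [
    (1, "2001-06-04", "2001-06-05"),
    (1, "2001-06-04", "2001-06-08"),
    (2, "2001-06-05", "2001-06-11"),
    (3, "2001-06-06", "2001-06-19"),
]

for tier, target in TIER_TARGETS.items():
    records = [c for c in complaints if c[0] == tier]
    if not records:
        continue
    # np.busday_count counts working days between the two dates.
    within = sum(
        np.busday_count(lodged, finalised) <= target
        for _, lodged, finalised in records
    )
    print(f"Tier {tier}: {within / len(records):.0%} finalised "
          f"within {target} working days")
```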

In contrast to the state highway authorities, Centrelink has been able to successfully implement its version
of the Balanced Scorecard because its objectives have been more limited. It has one government-directed
outcome: the effective delivery of government services to eligible customers, and one output, the efficient
delivery of these same services (Centrelink, 2001). Centrelink’s task is merely to deal with the customers
of the various client agencies. It is thus a customer service organisation, and can leave policy making
and objectives to its various client agencies.

This approach may appear to be merely passing the responsibility for formulating objectives back to these
various government agencies. It may be that conflict in objectives is unavoidable both inside and between
these client agencies, and that this is a healthy sign in democracies. But the crucial point to remember is
that much of their work is amenable to performance measurement, as Centrelink has shown.

Discussion

Many of the difficulties which beset both the setting of primary objectives and the determination of
customer satisfaction in public sector services arise because of the existence of externalities. Externalities,
that is, the existence of uncompensated benefits and costs, are pervasive in many public sector services,
including the health, education and transport sectors discussed in this paper. In public health, infectious
diseases can obviously produce detrimental externalities. Good education at all levels produces positive
external effects on the wider community (Mace, 2001), which is why education is compulsory up to the
mid-teens in Australia as in other developed countries, and supported by taxes.

It is the existence of externalities in these sectors which lies at the heart of the difficulties in defining
customers and in formulating coherent objectives. Externalities generate a variety of stakeholders. In the
quality literature this is recognised in the shift from ‘little q’ to ‘big Q’ quality, which is closely analogous
to TQM (Cameron and Thompson, 2000). Associated with this shift is a widening of the definition of
customer in quality management, which now includes all those affected by the product or service, whether
internal or external to the organisation. But in many public sectors, it is misleading to think in terms of
customers however defined. An alternative view would stress the importance of the notions of community
and citizenship, of duties as well as rights (Wilkinson et al., 1998).

Other researchers have recognised, in the words of Jennings and Staggers (1999) that: ‘Decision making
becomes increasingly more difficult as the number of objectives and stakeholders increase’. Writing from
a health care viewpoint, they advocate the use of matrices to clarify the tradeoffs that are necessary as the
various objectives are considered in relation to each other. But performance measurement matrices require
knowledge of the relative weight to place on each performance target (Carlin, 1999). While this should be
possible for private business organisations, given their more limited objectives, it will often be impossible
in the public sector. Since it is unlikely that the different stakeholders can reach consensus on the relative
importance of the different objectives, such tradeoffs can only be resolved by the political process.
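A brief numerical sketch shows why the weights matter. Below, the same hypothetical performance
matrix is scored under two different stakeholder weightings and the preferred option flips; the options,
criteria, scores and weights are all invented purely to illustrate the point.

```python
# Hypothetical performance matrix: options scored 0-10 on each objective.
# All numbers and weightings are invented for illustration only.
scores = {
    #                   cost  travel_time  environment
    "extend freeway":   (4,       9,           2),
    "upgrade rail":     (7,       5,           8),
}

weightings = {
    "motorist-oriented stakeholders":    (0.3, 0.6, 0.1),
    "environment-oriented stakeholders": (0.3, 0.2, 0.5),
}

for stakeholder, weights in weightings.items():
    totals = {
        option: sum(w * s for w, s in zip(weights, option_scores))
        for option, option_scores in scores.items()
    }
    best = max(totals, key=totals.get)
    print(f"{stakeholder}: prefers '{best}' (weighted scores: {totals})")
```

Because the ranking depends entirely on the weights, and stakeholders are unlikely to agree on those
weights, the matrix clarifies the tradeoff but cannot by itself resolve it.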

The analysis of the state highway authorities clearly shows that, as they are presently structured, conflicting
objectives are unavoidable in these organisations. Their problems are very similar to those of public sector
organisations which have both a regulatory and a promotional function. The solution involves separation of
these functions. In the case of the highway authorities, the state and urban transport planning function
needs to be completely separated from that of road design and construction, and traffic management. At
present, whether intended or not, the various state highway authorities function as de facto transport
planners for their states. With a more simplified set of objectives, the state highway organisations should
be able to implement performance measurement even if major shifts occur in government transport
policy. The more these organisations are left with merely engineering functions, the more likely it is that
performance measurement can be successfully implemented.

In essence we are arguing that public sector organisations can only successfully adopt performance
measurement to the extent that they resemble private businesses. The latter have a fairly limited set of aims.
They must continue to make a profit for their shareholders, subject only to constraints imposed on their
operation by government legislation and by ethical considerations. Tradeoffs will need to be made, and the
future will still bring surprises, but developing non-financial measures is merely difficult, not impossible.
With education, particularly tertiary, it is possible that only the support functions can be organised to
allow meaningful measurement. Of course, tertiary education could be regarded as primarily a type of
investment, bearing individual and social rates of return (Hammersley, 1995). Performance measurement
would then be appropriate, but viewing tertiary education this way amounts to an implicit and
unacknowledged change in the traditional aims of university education.

Conclusions

The difficulties in applying performance measurement vary from organisation to organisation, in general
being easier for business and nonprofit organisations than for public sector agencies. This paper has
examined both the benefits and difficulties of using performance measurement for public sector services.
Two conditions must be met for successful application. First, it is necessary that a coherent set of aims can
be articulated for the organisation, and be acknowledged as valid by the various stakeholders. If
externalities are pervasive in the organisation’s sector, as is true in health, education and transport,
difficulties with defining a coherent set of objectives are likely to be experienced. These difficulties seem
set to worsen as societies become more heterogeneous, and as environmental/ resource problems gain
more public attention.

The second condition is that neither the definition of the customers, nor the articulation of their
preferences, must present insuperable difficulties. However, in many government sector agencies, the
claims of other stakeholders may conflict with the claims of those usually regarded as the primary
stakeholders. Even these may be ambivalent about their wants. For these reasons, the notion of customer is
difficult to carry over from the private sector, where the person receiving the service and the person
paying for it are usually the same.

If both these conditions are met, performance measurement in the public sector should present no more
inherent difficulties than in other organisations, and should bring similar improvements if properly
managed. Centrelink illustrates that performance measurement can be successfully used in the public
sector. Its success is probably a result of its ability to put aside the difficulties in defining a coherent set
of objectives which its client agencies must inevitably face. This means, in effect, that Centrelink is able
to function in a similar manner to private sector organisations. Like them, it can leave the deeper questions
as to what constitutes a good society to others.

References

Australian Bureau of Statistics (2000). “Government Financial Estimates 2000-01”, Commonwealth of
Australia, Canberra, p.53.

Bakhtairi, A.M.S. (2002). “2002 to see birth of New World Energy Order”, Oil and Gas Journal January 7,
pp.18-19.

Birnbaum, R. (2000). “Management Fads in Higher Education”, Jossey-Bass, San Francisco, p.105.

Blundell, B. and Murdock, A. (1997). “Managing in the Public Sector”, Butterworth-Heinemann, Oxford,
U.K., pp.229-249.

Cameron, K.S. and Thompson, M. (2000). “The problems and promises of total quality management:
implications for organizational performance”. In: (R. E. Quinn, R. M. O’Neill and L. St. Clair (Eds))
“Pressing Problems in Modern Organizations”, pp.215-242, Amacom, New York.

Carlin, T. (1999). “Simplifying corporate performance measurement”, Australian CPA December, pp.48-
50.

Castellano, J.F. and Roehm, H.A. (2001). “The problems with managing by objectives and results”,
Quality Progress March, pp.39-46.

Centrelink (2001). “Annual Report 2000-01”, Centrelink, Sydney.

Eccles, R.G. (1991). “The performance measurement manifesto,” Harvard Business Review Jan.-Feb.,
pp.131-137.

Evans, J.R. and Lindsay, W.M. (2002). “The Management and Control of Quality” 5/e, South-Western
College Publishing, Cincinnati, Ohio, pp.115-128.

Forbes, D. P. (1998). “Measuring the unmeasurable: empirical studies of nonprofit organizational
effectiveness from 1977 to 1997,” Nonprofit and Voluntary Sector Quarterly Vol. 27, No. 2, pp.183-202.

Gibson, H. (1990). “Eastern Arterial Road and Ringwood Bypass Panel of Review”, Report to the
Victorian Government.

Hammersley, M. (1995). “The Politics of Social Research”, Sage Publications, London, pp.145-162.

Handler, A., Issel, M. and Turnock, B. (2001). “A conceptual framework to measure performance of the
public health system,” American Journal of Public Health Vol. 91, No. 8, pp.1235-1239.

Jennings, B.M. and Staggers, N. (1999). “A provocative look at performance measurement,” Nursing
Administration Quarterly Vol. 24, No. 1, pp.17-30.

Jones, G. (2001). “Performance management”. In: (C. Aulich, J. Halligan and S. Nutley (Eds.)) Australian
Handbook of Public-Sector Management, pp.124-137, Allen and Unwin, Sydney.

Juchau, R. (2000). “Non-financial performance measures: an Australian survey,” Charter Vol. 71, No. 2,
pp.48-50.

Juran J. M. (1992). “Juran on Quality by Design”, The Free Press, New York, pp.35-36.

Kaplan, R.S. (2001). “Strategic performance and management in nonprofit organizations” Nonprofit
Management and Leadership, Vol. 11, No. 3, pp.353-370.

Kaplan, R.S. and Norton, D.P. (1996). “Using the balanced scorecard as a strategic management system,”
Harvard Business Review Jan.-Feb., pp.75-85.

Kaplan, R.S. and Norton, D.P. (2001). “Balance without profit,” Financial Management January, pp.23-
26.

Mace, J. (2001). “Economic perspectives on values, culture and education: markets in education—a
cautionary note”. In: (J. Cairns, D. Lawton, and R. Gardner, (Eds.)), Values, Culture and Education,
Kogan Page, London, pp.67-84.

Mayer, T. and Cates, R.J. (1999). “Service excellence in health care” Journal of the American Medical
Association Vol. 282, No. 13, pp.1281-1283.

Morgan, C. and Murgatroyd, S. (1994). “Total Quality Management in the Public Sector: an International
Perspective”, Open University Press, Buckingham, U.K., pp.181-189.

Neely, A. (1998). “Measuring Business Performance”, The Economist Books, London, pp.1-195.

Rigby, D. (2001). “Management tools and techniques: a survey,” California Management Review Vol. 43,
No. 2, pp.139-160.

Sower, V., Duffy, J., Kilbourne, W., Kohers, G., and Jones, P. (2001), “The dimensions of service quality
for hospitals: development and use of the KQCAH scale”, Health Care Management Review Vol.26,
No. 2, pp.47-59.

Swainson, E. (Ed.) (2000). “Encarta Book of Quotations”, Pan Macmillan, Sydney, p.305.

VicRoads (1995) “Review of the Eastern Freeway: Springvale Road to Ringwood”, Internal Report.

Wigley, T. and Raper, S. (2001). “Interpretation of high projections for global-mean warming”, Science
Vol.293, pp. 451-454.

Wilkinson, A., Redman, T., Snape, E. and Marchington, M. (1998). “Managing with Total Quality
Management: Theory and Practice”, Macmillan Press, London, pp.104-7.

Young, J. (1999) “The Exclusive Society: Social Exclusion, Crime and Difference in Late Modernity”,
Sage Publications, London, pp.2-16.

Performance Measurement in Public Sector Services: Problems
and Potential
Patrick Moriarty and Damian Kennedy

Department of Mechanical Engineering, Monash University-Caulfield Campus,
900 Dandenong Rd, Caulfield East, Victoria 3145, Australia.

Biographies

Dr. Patrick Moriarty teaches and researches in the Department of Mechanical
Engineering, Monash University-Caulfield Campus. His research interests include
energy and transport issues, land use policy and more recently, management theory,
and the impact of the new Information Technology on organisations and their
practices. He has taught Engineering Management courses for the last 20 years.

e-mail: patrick.moriarty@eng.monash.edu.au
phone: +61 3 9903 2584

Dr. Damian Kennedy is Deputy Chairman of the Department of Mechanical
Engineering, Monash University, and Head of the Caulfield Campus section,
specialising in the area of Industrial Engineering and Engineering Management. His
research interests include the utilisation and productivity of resources, and the
strategic design of productive systems. His main teaching interests lie in the area of
Engineering Management.

e-mail: damian.kennedy@eng.monash.edu.au
phone: +61 3 9903 2175

Keywords: Centrelink; performance measurement; public sector services; tertiary
education; externalities; public health; highway authorities.
