
Public sector performance measurement: delivering greater accountability

Mike Bolton

Introduction
The world is changing, especially the world of business. First, technological advances have ``shrunk'' the globe, and a company in China can now compete effectively in Oklahoma in many markets. Second, customer expectations have changed; usually they have risen, as consumers have become better educated, better informed and more aware of ``what is going on''. They may also have more time to be discerning, and they certainly have access to more information with which to make the judgement. These two catalysts for change have had considerable effects on many businesses. Some businesses that dominated their sectors 20 years ago no longer exist, overtaken by technological development. Others are no longer major players in their industry, forced out by consumer power. Not surprisingly, we are increasingly seeing these two catalysts having similar effects on public sector services. Technology is certainly sector-independent: many of the business processes that have been improved, even transformed, by technology also exist in the public sector. It would be surprising if lessons learned (hopefully both the good and the bad) were not transferred into public sector organisations, and the same is true of the charitable/not-for-profit sector. Of course, slightly different rules might apply, with slightly different regulatory regimes and slightly different standards. The world expects the public and not-for-profit sectors to be ``more accountable'' with ``public money'', although who puts money into commercial organisations if not the public? This ``slightly different'' environment means that some of the approaches, methodologies, tools and techniques applied within the private sector have to be modified or translated to make them fully relevant for the public sector. Without such adjustments, any project is likely to fail. Although essential, this need for adjustment can be unhelpful: it means that consulting and advising organisations often have separate private sector and public sector teams, or that specific organisations specialise in only one sector. This can mean that all the possible transfer of expertise does not take place and the public sector ``lags'' behind the private in its adoption of new approaches. Of course, the public sector will also claim that it lags behind due to a shortage of available resources, and the ``truth'' will probably involve a combination of both these and other factors.

The author: Mike Bolton is Senior Consultant, Explored Futures, Addingham, UK.

Keywords: Performance measurement, Public sector, Accountability

Abstract: Expectations of public sector organisations have changed. The public increasingly expects them to have private sector performance focus but public sector accountability. Discusses the particular issues of establishing performance measures (and a performance-related culture) in the public sector while demonstrating (as well as delivering) ``proper'' and efficient use of public funds.

Electronic access: The Emerald Research Register for this journal is available at http://www.emeraldinsight.com/researchregister. The current issue and full text archive of this journal is available at http://www.emeraldinsight.com/0043-8022.htm

Work Study, Volume 52, Number 1, 2003, pp. 20-24. © MCB UP Limited. ISSN 0043-8022. DOI 10.1108/00438020310458697


Public sector mission


All public sector agencies exist to fulfil a particular mission: a mission inherently determined by society at large, though articulated and managed by a combination of elected officials and their executive support teams. Their authorisation to conduct ``their business'' (and to use public funds to do so) comes directly from the ballot box, or is delegated by a body whose standing is enshrined constitutionally. ``Competition'' is a factor for some agencies which provide services in competition with private sector organisations, but for many public sector agencies the concept of competition does not exist; indeed, for many agencies, the private sector is specifically barred from offering competitive services. Competition is therefore rarely a driving force for change or for performance improvement. The ``critical success factor'' for a public sector organisation is therefore the degree to which it fulfils its mission. In fulfilling the mission, it is, however, expected to address a number of other success factors: being efficient (and accountable for public funds), and offering satisfaction to the customers/clients covered by the specific mission. Some of these success factors are formally monitored by other government agencies (inspectors, auditors, etc.) and some are ``monitored'' by the public who receive the services (and who may exercise ``control'' via the ballot box). Some of these factors look superficially like the success factors for private sector organisations, although in practice there are sufficient environmental (and cultural) differences to make significant distinctions. ``Customer satisfaction'', for example, is part of the agenda for both sectors but is addressed quite differently in each, since the supplier-customer relationship itself is fundamentally different. Increasingly, in recent times, public sector organisations have been encouraged, even ``forced'', into adopting private sector (or pseudo private sector) performance improvement methodologies to demonstrate their accountability.

Thus, in the UK, local government agencies have been subject to the regime of ``compulsory competitive tendering'' and, more recently, to the drive for ``best value''. US agencies have been subject to the demands of the Government Performance and Results Act (GPRA) of 1993. The GPRA seems to suggest that budgets will be determined based on performance as evaluated by the Office of Management and Budget (OMB) (Daniels, 2002). This, however, is one of those ideas that is ``fine in theory''. Most government agencies have budgets that are determined by the nature of the mission, the activities required to carry out that mission, and, sometimes most important of all, the place of those activities on the current political agenda. If education is deemed important to voters, then education budgets tend to rise. Similarly, if ``law and order'' emerges on the political radar, budgets for police authorities rise. There will, of course, be lots of talk about performance, and even specific funded initiatives to address under-performance or performance improvement, but essentially the budget is performance-independent. A difficult question for both politicians and for society generally (remember, it is ``society'', in the form of taxpayers, that funds these organisations) is: ``If we need these organisations, because we want them to deliver their agreed missions, how do we ensure effective performance? How do we know we get value for money?'' Part of the remit of the GPRA is to determine ``the effectiveness and efficiency of the agency's authorized work''. This assumes that the agency is ``doing the right thing''; its mission is determined and, at least in broad terms, unquestionable. The task, therefore, is to find out whether it is doing it well, and doing it cost-effectively. There must, therefore, be an assessment process, a measurement and monitoring regime, to identify this.

Comparisons with private sector organisations


Of course, we must not forget where we started. Public sector organisations are subject to many of the same changes as private sector ones. They also have some of the same opportunities. Many public sector agencies are large: lots of people, lots of equipment.

The people have to be recruited, inducted, trained, appraised, developed and monitored; the equipment has to be specified, purchased, installed and maintained. These are activities that can be carried out in the public sector in ways that are similar to the private sector. There are, however, cultural differences, and words such as ``principle'' come to the fore. Take an example. Of course, in public sector organisations, purchasing should, indeed must, be effective. The organisation should get good value for the goods it buys. This means that inevitably there will be a (complex?) set of rules that determine who buys what and how. There will probably be financial limits set for when purchasing must be competitive, when tendering should be undertaken, when collaborative purchasing schemes must be used, and so on. These rules will almost certainly be slavishly followed, even where they patently add cost rather than value, because they demonstrate accountability! As an example of this, in one public sector organisation key workers were given mobile phones. There was a legitimate reason, since these workers are often away from their desks but may need to be contacted. The solution was to give each of them a mobile phone at the organisation's expense, but to introduce a bureaucratic regime that required them to log, at the end of each month, any private calls they had made on the phone, and to have the costs of those calls deducted from salary. This satisfied the operational need and the need for accountability. It was, however, an expensive solution, especially as most of the contact was one-way: rarely (although sometimes) did the workers have to initiate a call. One of the workers suggested a cheaper alternative. If the organisation gave them each a pay-as-you-go mobile phone (no monthly rental but higher call charges) and allowed them to use one call voucher a month, the monthly cost of each phone to the organisation would fall by approximately 50 per cent and all the bureaucracy costs would disappear. However, the small number of private calls would go unrecorded and unpaid for. This suggestion was not taken up because it did not meet the need for accountability; it was felt that if a member of the public found out that these workers could make private calls on a public phone, there would be an outcry, even though the amount of calls was regulated by the value of the call voucher.
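To make the economics of that anecdote concrete, here is a minimal sketch of the comparison; the monthly rental, per-user administration cost and voucher price are assumed figures, chosen only to show how a saving of roughly 50 per cent might arise.

```python
# Hypothetical cost comparison for the mobile phone anecdote above.
# All figures are assumptions for illustration, not data from the article.

def monthly_cost_contract(rental=30.0, admin_per_user=15.0):
    """Contract phone: monthly rental plus the cost of the logging and
    salary-deduction regime, expressed as an admin cost per user."""
    return rental + admin_per_user

def monthly_cost_payg(voucher=20.0):
    """Pay-as-you-go phone: one call voucher per month, no logging regime."""
    return voucher

contract = monthly_cost_contract()
payg = monthly_cost_payg()
saving = (contract - payg) / contract
print(f"Contract scheme:  {contract:.2f} per phone per month")
print(f"Pay-as-you-go:    {payg:.2f} per phone per month")
print(f"Estimated saving: {saving:.0%}")  # about 56% with these assumed figures
```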

Any benchmarking of the costs of such mobile phones (and the service that used them) against a private sector organisation (which would almost certainly make the pragmatic decision to ``lose'' the cost of private calls for the greater benefit) would show a less efficient organisation. Yet it is the ever-increasing demand for accountability of public expenditure that directly gives rise to such ``inefficiency''. This concept of cross-sector benchmarking has been developed because, even allowing for the cultural differences expressed above, there are services that are generic across sectors. Increasingly, public sector organisations will use a framework such as the Baldrige award (in the USA) or the European Quality Framework (in Europe) to measure themselves against both their sector peers and organisations in other sectors.

A performance measurement regime


We can thus start to arrive at a set of (more carefully constructed) questions that might guide a performance measurement regime for a public sector organisation. As before, we can take the mission as a ``given'' for most public sector agencies (although there are often smaller sub-agencies, set up as part of a specific political initiative, whose very existence and mission should be reviewed). This means that the organisation should be addressing the questions of:
. How well does it fulfil its mission? How does it know this?
. How efficiently does it support that mission?
. How does it compare to other organisations? To the best?
. How does it report its achievements to its stakeholders?
. How does it get feedback from those stakeholders?
Of these, the hardest to answer is the first: how well does it perform its mission? This is because:

. of the lack of ``competition'', there is little to measure against;
. mission is long-term;
. mission may be political;
. it is often easy to identify excuses for unfulfilment.

With a mission that has no direct private sector comparison, two of the few opportunities for measurement and comparison lie with:
. Comparison of actual against target performance. If the agency is effective, its mission will have been translated into a series of goals and targets for a given time period. The first measure of mission effectiveness is to assess the nature of these goals: are the goals realistic, challenging and clearly aligned with the mission, and are the targets appropriate, achievable and measurable? If so, then comparison of actual performance with this target performance does offer a reasonable view of effectiveness.
. Time-series measurement. Data can be compared over time, so performance can be ``seen'' to be improving or declining.
However, the fourth bullet-point above offers agencies a useful ``get out of jail free'' card. For example, if crime figures are rising against the best endeavours of the local police force, the police authority may claim that this is because of ``the general breakdown of discipline in society''. Indeed, it may even be achieving increasing rates of arrests and convictions while the crime figures soar. Thus it may claim to be doing a ``good job'' in a worsening situation. Some politicians will support this view; others will oppose it. Where lies the truth? Sometimes it is possible to get a clearer view of performance by:
. careful selection of comparators/benchmarking partners;
. triangulating a series of complementary measures.
For the former, for example, a high school may select for benchmarking purposes a group of high schools in broadly equivalent social catchment areas (identified by some existing measure), rather than the national, or even regional, ``average''. A police authority may select, for comparison, cities of the same size, the same ethnic mix, the same industrial background, and so on. The latter, triangulation, is more difficult in terms of finding the right measures to triangulate but, for example, ``hard'' performance measures may be triangulated against ``softer'' customer or employee satisfaction measures, to find out whether there is a discrepancy (and in which direction) between ``reality'' and ``perception''. If crime figures are going up, but the public thinks the police force is doing a good job, is the organisation effective? Many would argue ``yes''!
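As a rough illustration of how these opportunities might be combined, the sketch below compares actual performance with a target, reads the direction of a time series, and triangulates a ``hard'' measure against a ``softer'' perception score. It is only a sketch: the figures, thresholds and function names are invented for the example and are not taken from the article.

```python
# Illustrative sketch: target comparison, time-series trend and a hard/soft
# triangulation check. Every figure and threshold below is hypothetical.

from statistics import mean

def variance_against_target(actual, target):
    """Fractional variance; positive means actual exceeds the target."""
    return (actual - target) / target

def trend(series, higher_is_better=True):
    """Crude trend: compare the mean of the later half with the earlier half."""
    half = len(series) // 2
    earlier, later = mean(series[:half]), mean(series[half:])
    if later == earlier:
        return "flat"
    return "improving" if (later > earlier) == higher_is_better else "declining"

def triangulate(hard_trend, soft_score, threshold=0.6):
    """Flag any discrepancy between a hard measure and a perception survey."""
    perceived_good = soft_score >= threshold
    if hard_trend == "declining" and perceived_good:
        return "hard measure worsening but perception positive: investigate"
    if hard_trend == "improving" and not perceived_good:
        return "hard measure improving but perception negative: investigate"
    return "hard and soft measures broadly agree"

# Hypothetical figures: detection rate against target, recorded crime over
# five years (lower is better), and a public satisfaction score on a 0-1 scale.
print(f"{variance_against_target(actual=0.27, target=0.25):+.1%} against target")
crime_trend = trend([1020, 1050, 1100, 1180, 1250], higher_is_better=False)
print(crime_trend)                               # "declining"
print(triangulate(crime_trend, soft_score=0.72))
```

The point is not the arithmetic, which is trivial, but that the three views are read together rather than in isolation.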

When we move on to efficiency measures, life gets a lot easier. A print/reprographics department in a large government agency can be directly compared with one in the private sector. The number of prints/copies, the number of staff employed and other simple measures will give an efficiency measure that is robust and reliable. This is true for many of these generic support services.

For stakeholder relationships and communications, the best way to find out how well the organisation is doing is to ask the stakeholders. This might be revolutionary for some public sector organisations, but here we do have a case where perception is reality: what stakeholders think and feel is the direct measure of success. Too many agencies give stakeholders what the agency thinks is good for them; they waste money on ``glossy'' brochures and reports that, at first sight, seem to meet the demands of accountability but, in reality, fail at many levels. Technology changes people's perceptions of how communication should take place. Unfortunately, technology is not ubiquitous: some stakeholders/customers will want Web/e-mail communication; others will not have the ability to receive it. Of course, it may be possible to assume access if it is made widely available through public facilities in libraries, government offices, etc. Then the Web becomes a useful tool. It offers the opportunity to balance messages in different media: providing a lot of backup material on a Web site for those who feel they need it, while offering, say, print messages in a brief and succinct format. (See the Fairfax County Performance Measurement Manual, ``Measures up'', at http://www.co.fairfax.va.us/gov/omb/basic_manual_2002.pdf)
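Returning to the print/reprographics example above, such generic-service comparisons reduce to simple unit ratios. The sketch below uses invented volumes, staffing and costs purely to show the shape of the calculation; it is not data from any real benchmarking exercise.

```python
# Hypothetical unit-ratio comparison for a generic support service
# (reprographics). All volumes, staffing levels and costs are invented.

def unit_measures(copies_per_year, staff, total_cost):
    """Two simple efficiency ratios: output per member of staff and cost per copy."""
    return {
        "copies_per_fte": copies_per_year / staff,
        "cost_per_copy": total_cost / copies_per_year,
    }

in_house = unit_measures(copies_per_year=4_000_000, staff=8, total_cost=260_000)
benchmark = unit_measures(copies_per_year=4_500_000, staff=7, total_cost=240_000)

for label, figures in (("in-house department", in_house), ("private sector benchmark", benchmark)):
    print(f"{label}: {figures['copies_per_fte']:,.0f} copies per member of staff, "
          f"{figures['cost_per_copy']:.3f} per copy")
```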

Performance measurement criteria


From the above discussion, and from a general understanding of what makes a good measurement regime, we can distil the essence of an effective performance measurement regime for a public sector organisation, to attempt to answer the questions we identified earlier.

Measures should:
. be significant: they should measure the key success factors;
. offer views from different perspectives;
. reflect the concerns of all key stakeholders (see Halachmi, 2002);
. be used and considered together, not in isolation;
. be balanced between quantitative (``hard'') and qualitative (``soft'');
. be discriminating: changes in the measure should be significant;
. be unobtrusive: collection of measurement data should not disrupt primary tasks.
Astute and informed readers will see that there are clear opportunities to use a ``balanced'' measurement methodology such as that conceptualised in the Balanced Scorecard. This includes four dimensions or categories of measurement designed to obtain a balanced view of organisational performance. It suggests that measures should be applied to (Amaratunga et al., 2001):
. finances;
. customers;
. internal business processes;
. measures of innovation and learning.
The theory is that this balanced view helps assess both current and potential organisational well-being.
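As an illustration only (the article does not prescribe any particular toolset), such a ``basket of measures'' grouped into the four Balanced Scorecard perspectives might be represented along these lines; every measure, target and figure in the sketch is invented.

```python
# Illustrative "basket of measures" grouped into the four Balanced Scorecard
# perspectives. Every measure, target and figure here is invented.

from dataclasses import dataclass

@dataclass
class Measure:
    name: str
    target: float
    actual: float
    higher_is_better: bool = True  # cost-type measures set this to False

    def on_target(self) -> bool:
        return self.actual >= self.target if self.higher_is_better else self.actual <= self.target

SCORECARD = {
    "finances": [
        Measure("cost per service transaction", target=4.50, actual=4.80, higher_is_better=False),
    ],
    "customers": [
        Measure("citizen satisfaction score (0-10)", target=7.5, actual=7.9),
    ],
    "internal business processes": [
        Measure("applications processed within 10 days (%)", target=90, actual=86),
    ],
    "innovation and learning": [
        Measure("staff with current development plans (%)", target=100, actual=72),
    ],
}

for perspective, measures in SCORECARD.items():
    for m in measures:
        status = "on target" if m.on_target() else "below target"
        print(f"{perspective:>28}: {m.name}: {status}")
```

Laid out this way, it is easy to see whether attention is skewed towards one perspective at the expense of the others, which is the point of a balanced view.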

In drawing up a ``basket of measures'' (whether using the Balanced Scorecard or some other framework), a collaborative process is essential, so that the resulting measures are ``owned'' by those subjected to them. This collaboration must then permeate the subsequent monitoring and evaluation process, so that those involved get regular feedback on performance and progress. If communication and feedback are regular and appropriate, the measurement process is much more likely to be constructive: seen as performance-enhancing rather than as a compliance and punishment regime. All performance measurement must be treated with both respect and caution, and it is essential that the development of these measures is not seen as being only for ``accountability'', for simply justifying what has been done. The measures should tell us something, hopefully something important, about what has been done, but they do not tell us everything. They do not necessarily detail the causes of good or bad performance; these may require more ``detective'' work. Poor performance (as indicated in a performance measure) does not necessarily indicate poor effort, poor skill, poor co-ordination, etc.; it simply raises the issue for investigation. The more comprehensive the overall set of measures, the more likely it is that underlying causes can be readily identified. And that, perhaps, will require a change in attitude towards such measures.

Conclusion
If we treat performance measurement seriously, and make the appropriate translations of private sector practice to the public sector, we can deliver both:
. better services, because managers get improved information with which to perform their management control functions and take decisions;
. greater accountability, through better reporting in ways which encourage stakeholders to take a greater interest in, and have a better understanding of, their service agencies and their performance.
To achieve this, we need to look at a balanced set of appropriate, significant measures, choosing them carefully so that we can look up, down, across, along and through the organisation!

References
Amaratunga, D., Baldry, D. and Sarshar, M. (2001), ``Process improvement through performance measurement: the Balanced Scorecard methodology'', Work Study, Vol. 50 No. 5, August.
Daniels, M. (2002), ``Memorandum for heads of executive departments and agencies: planning for the president's fiscal year 2004 budget request'', available at: www.whitehouse.gov/omb/memoranda/m02-06.pdf
Halachmi, A. (2002), ``Performance measurement: a look at some possible dysfunctions'', Work Study, Vol. 51 No. 5, August.
