DAVID SWINDELL
Clemson University

JANET M. KELLY
University of Tennessee, Knoxville

Performance Measures as Objective Indicators of Effectiveness: The Causal Fallacy
The best performance measures are valid, reliable, understandable, timely, resistant to perverse behavior, comprehensive, nonredundant, cost-sensitive, program-specific, and focused on aspects of performance that are controllable (Ammons, 1996; Hatry, 1980; Wholey & Hatry, 1992). Generic or "cookbook" performance measures rarely meet the managerial needs of the organization (Decker & Manion, 1987) and never reflect the changing environment or expectations for the organization (Glaser, 1994). But even as performance measurement becomes institutionalized, local governments have not typically moved beyond generic formulations (Glaser, 1994).

The distinction between generic and program-specific measures is not lost on local administrators. A recent survey of city and state auditors found respondents satisfied with the input and output data collected as part of their performance management system, but not so satisfied with their measures of program effectiveness or efficiency (Garsombke & Schrad, 1999). One might conclude that performance measures provide useful information for managers but do not usually capture the effectiveness of the programs to which they may be applied, especially if the measures remain static as the program changes to meet changing demands (Affholter, 1994). Furthermore, the limited use of these measures for decision making can then be explained not so much by the resistance of local managers to being held accountable for results as by a manager's understanding that what is being measured may not be a suitable basis for decision making.
Most public programs and services cannot easily quantify successes. One can measure "tons of garbage picked up," but one cannot measure "crimes prevented by neighborhood policing." Proxy measures are usually developed to try to capture certain program outputs, and those proxy measures are assumed to be correlated with actual program outcomes. Public employees are thus induced to maximize the proxy being measured, and managers are held accountable for program results based on how they performed on those proxies. The first problem with this approach is that the correlation between the proxy and the outcome may be weak. The second problem is that correlation is not causation (Drebin, 1980).

Furthermore, information about outcomes does not reveal how those outcomes were achieved.
Unfortunately, many public employees, elected officials, and media people believe that regularly collected outcome information [means that] the government program and its staff were the primary causes of the outcomes. . . . Outcomes information provides only a score . . . whether one is winning or losing and to what extent . . . but it does not indicate why. (Hatry et al., 1994, p. 17)
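The proxy problem can be made concrete with a small simulation. The sketch below is ours, not the authors', and every variable in it is hypothetical: a latent "program effort" drives a measurable proxy strongly but drives the true outcome only weakly, so an agency maximizing the proxy can look successful while barely moving the outcome.

import numpy as np

rng = np.random.default_rng(0)
n = 200  # hypothetical number of observation periods

# Latent program effort drives the proxy strongly but the outcome weakly.
effort = rng.normal(size=n)
proxy = effort + 0.3 * rng.normal(size=n)    # e.g., arrests per officer
outcome = 0.2 * effort + rng.normal(size=n)  # e.g., crime actually prevented

# The proxy tracks effort almost perfectly...
print(round(np.corrcoef(effort, proxy)[0, 1], 2))   # roughly 0.96
# ...but its correlation with the outcome is weak.
print(round(np.corrcoef(proxy, outcome)[0, 1], 2))  # roughly 0.2

And even where the proxy-outcome correlation is strong, nothing in the correlation itself says the program caused the outcome, which is the point of the passage above.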
Comparative Performance Measurement: The ICMA Project
It is not surprising, given these limitations, that local governments approach benchmarking, or comparing their performance data with other localities, with considerable trepidation. Unquestionably, benchmarking can lead local managers to ask important questions, like "How does Podunk keep its solid waste disposal costs so low?" The answer may lead to a sharing of information and management practices that works to the benefit of the citizens. Of course, the converse is also true. Media, interest groups, mayors, and councilpersons may also ask why their city is paying so much more for solid waste disposal than Podunk in such a way that a management failure is implied. Ammons (1999) suggests that the proper context for benchmarking is the recognition that one's city will not be the best at every aspect of service delivery and that officials should approach benchmarking with the idea of learning from those who can perform the service better. This takes optimism and trust, especially when the results are offered for public inspection.
In 1994, the International City/County Management Association (ICMA) created a forum for the first comprehensive benchmarking project. A consortium of city and county managers gathered to identify best practices in local government police services, fire services and emergency medical services (EMS), neighborhood services, and support services. Each jurisdiction appointed a representative to a technical advisory committee to work out the details of defining indicators and collecting the data. Data definition proved a daunting task. The members took 2 years to look beyond what performance measures were readily available to those measures that would best represent service quality (Kopczynski & Lombardo, 1999). The results were mixed. Data were simply not available from some member localities in the early stages and not comparable in others. For example, there was no distinction between direct and indirect costs, and no standardization of fringe benefits, depreciation rates, or cost-of-living adjustments across participating local governments (Coe, 1999).

The first report, published in 1996, represented an imperfect but remarkable achievement. In addition to the quantitative ranking of participating jurisdictions, accompanying each ranking was some explanation of why the indicator might vary across institutions. High performers were identified, and information about the practices in the high-performing jurisdictions was offered. Finally, the project offered workshops to help government officials, elected and appointed, explain performance measurement to citizens. There were also workshops for citizens on using the benchmarks
[...] Arizona). The cities and county were the initial sites selected from the ICMA benchmarking project participant jurisdictions that had done good quality citizen satisfaction surveys and were willing to share the raw data from those surveys with us. The criteria for including a local government in the sample were that their citizen satisfaction survey had been done via a random sampling process, either mail or telephone, using generally accepted standards of survey research. Self-selected surveys were not included, nor were measures of citizen satisfaction derived through focus groups or other qualitative techniques. Additionally, each site either published or made available the frequencies in each category of response for each question.
The seven service dimensions include police services, fire services and EMS, road maintenance, refuse services, street lighting, parks, and libraries. A two-step process was used to select the indicators for these service dimensions. First, interviews with officials in Prince William County, Tempe, Kansas City, Portland, and Austin helped identify the different data items that local officials thought most indicative of service quality. For instance, police administrators explained why clearance rates would be more indicative of organizational effectiveness than arrest rates. Fire chiefs explained why fire suppression should be measured in residential rather than commercial dwellings (confining a fire to one room of a residential dwelling is a different level of fire suppression than confining a fire to one room in a commercial warehouse).
Patterns of reporting data in the ICMA database have proved to be another limitation. When it was obvious that very few observations were reported for a data item within a service category, that item was not included. This winnowing sometimes led to relatively few data items within a service area. Park services are an example: data aggregation across participating cities made input indicators (net and gross annual operating and maintenance expenditure per capita, and full-time equivalents [FTEs] per 1,000 population) the only viable choices for this initial analysis. Thus, the number of cases available for initial analysis limits this preliminary exploration of the relationships among citizen satisfaction and performance indicators. More items will be included for future analysis as additional jurisdictions are added to the database. A complete list of the data items in the service categories is located in Appendix A. A sketch of the winnowing rule follows.
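The winnowing rule just described is simple to state: keep a data item only if enough jurisdictions report it. A minimal illustration (ours; the column names, data, and threshold are invented for the example):

import pandas as pd

MIN_OBS = 6  # hypothetical reporting threshold

# Rows = jurisdictions, columns = candidate data items (None = not reported).
df = pd.DataFrame({
    "park_opex_per_capita": [12.0, 9.5, None, 14.2, 11.1, 8.7, 10.3],
    "park_acres_per_1000":  [None, 6.2, None, None, 5.1, None, None],
})

# Keep only items reported by at least MIN_OBS jurisdictions.
kept = df.loc[:, df.count() >= MIN_OBS]
print(list(kept.columns))  # ['park_opex_per_capita']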
Method
The most significant challenge confronting research along the lines pursued in this article is data availability. Many jurisdictions employ either performance measurement and benchmarking techniques or citizen satisfaction surveys to measure service quality. It is rarer that a jurisdiction will employ both tools to measure aspects of quality for the same service. For the subset of jurisdictions that do so, there is a commensurability problem in that different jurisdictions tend to develop their own measures tailored to what they perceive as the unique characteristics of their delivery system and/or the particular circumstances in which they are delivering services. Similarly, different jurisdictions might want to ask citizens their perceptions of the quality of the EMS, but they may ask the survey questions in fundamentally different ways.
To overcome the incommensurability problem in the performance measurement part of the analysis, we consciously chose to look only at cities that were participating [...] water treatment, or other utilities. Thus, the analysis here is limited initially by the availability of jurisdictions that employ both performance measures and citizen satisfaction surveys, as well as by the services covered by those cities in their use of these tools. At the time of this writing, 13 jurisdictions have provided both data sets for enough of the ICMA-covered services to facilitate an exploratory analysis of the relationship between objective and subjective service-quality measures.
With these 13 cases, it was possible to calculate the measures of correlation between the different performance indicators and the respective satisfaction scores from the surveys. Obviously, citizen surveys rarely contain multiple questions concerning a given public service. However, the goal of our analysis at this phase was to help identify the relationships between a single satisfaction question and the various performance indicators available from the ICMA indicators project. Both strong and weak correlations were expected, depending on the service type. Both findings are pertinent to administrators, especially if one views satisfaction measures as an outcome of the service delivery process.
After going through the ICMA participant cities and finding those that had conducted recent citizen surveys (and securing their data files), we had 28 performance indicators with an acceptable number of observations and 14 percent-to-maximum (PTM) satisfaction scores across the 13 cases. Although additional cities are being integrated into the database, these 13 cases are used for this initial analysis. Furthermore, the various performance measures used to evaluate each service area fall into three categories specified by the ICMA: they measure service inputs, delivery efficiency, and service outcomes. The relationship of each category of measure with citizen satisfaction is also analyzed in this article, with the expectation that citizen satisfaction will likely be more closely associated with the performance measures of outcome than with the input or efficiency measures. Unfortunately, there were too few cities reporting solid-waste collection performance measure data. Therefore, that service area is not analyzed in this article.
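PTM is read here as a percent-to-maximum rescaling of each satisfaction item onto a common 0-100 scale, which allows items asked on different response scales to be compared. The sketch below is our illustration, not the authors' code; the function name and the frequency distribution are made up. It shows one way such a score could be computed from the published response frequencies that each site made available.

def ptm_score(frequencies):
    """Percent-to-maximum (PTM) score from published response frequencies.

    frequencies[i] = number of respondents choosing category i, ordered
    from least satisfied (i = 0) to most satisfied (i = k - 1).
    Returns a score on a 0-100 scale, comparable across survey scales.
    """
    k = len(frequencies)
    total = sum(frequencies)
    mean = sum(i * f for i, f in enumerate(frequencies)) / total
    return 100.0 * mean / (k - 1)

# Hypothetical 4-point item (very dissatisfied ... very satisfied):
print(round(ptm_score([10, 25, 80, 35]), 1))  # 64.4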
Findings
To begin identifying the relationships between the performance indicators and the citizen satisfaction PTM scores, measures were paired by service area to calculate simple bivariate Pearson correlations (r). The results are presented in each table and reveal several tantalizing patterns. Some of the correlation results are not reported due to a lack of cases following the pairwise deletion of missing data. Only results with at least six observations are presented. Due to the small number of observations for each calculation and the lack of randomization in case selection, we refer to the p value of .10 as a reference only. It is not used in the traditional statistical sense to estimate the likelihood of a set of observations occurring by chance. Each objective service measure was compared with the overall citizen satisfaction rating of city quality and overall satisfaction with service value from the survey data. In addition, the service areas were compared with the specific measures of service quality related to that service area. The correlations provide a measure of the strength of the association and direction of the relationships between the performance measures and their respective citizen satisfaction score.
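As a sketch of the computation just described (our code, with invented data for 13 hypothetical cities), the following pairs each performance indicator with a satisfaction score, applies pairwise deletion of missing values, and suppresses any result based on fewer than six observations:

import numpy as np
from scipy import stats

MIN_N = 6  # results based on fewer paired observations are not reported

def pairwise_correlations(perf, satisfaction):
    """Bivariate Pearson r between each performance indicator and one
    satisfaction score, with pairwise deletion of missing data."""
    results = {}
    for name, values in perf.items():
        mask = ~np.isnan(values) & ~np.isnan(satisfaction)
        if mask.sum() < MIN_N:
            continue  # too few cases; suppress the result
        r, p = stats.pearsonr(values[mask], satisfaction[mask])
        results[name] = (round(r, 2), round(p, 2), int(mask.sum()))
    return results

# Invented data for 13 hypothetical cities (np.nan = not reported).
perf = {"road_cost_per_mile": np.array(
    [3.1, 2.4, np.nan, 5.0, 4.2, 3.8, np.nan, 2.9, 4.7, np.nan, 3.3, 4.1, 2.6])}
ptm = np.array([62, 71, 55, 48, 50, 58, np.nan, 66, 45, 60, 64, 52, 70],
               dtype=float)
print(pairwise_correlations(perf, ptm))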
[Table 1. Bivariate Correlations: Police Services. Citizen survey measures: City Quality, Service Value, Police Quality, Day Safety, Night Safety, Enforcement. Table body not recoverable from this copy.]
Table 2. Bivariate Correlations: Road Maintenance

Citizen survey measures: City Quality, Service Value, Street Repair.
Road Costs (I): City Quality r = -.12 (p < .38, n = 10); Service Value r = -.62 (p < .09, n = 6); Street Repair r = .38 (p < .20, n = 7)
Road Cost/Mile (E): City Quality r = -.55 (p < .06, n = 9); Service Value r = -.83 (p < .02, n = 6); Street Repair r = -.40 (p < .21, n = 6)
Cost/Good Road (E): r = -.48 (p < .14, n = 7); r = .69 (p < .06, n = 6) [only two of the three survey measures reported]
Good Roads (O): r = .40 (p < .16, n = 8); r = -.36 (p < .21, n = 7) [only two of the three survey measures reported]
Note. (I) = input measure, (E) = efficiency measure, (O) = outcome measure.
Table 3. Bivariate Correlations: Street Lighting

Citizen survey measures: City Quality, Service Value, Light Quality.
Light Costs (I): r = -.42 (p < .11, n = 10); r = -.69 (p < .06, n = 6) [only two of the three survey measures reported]
Light Cost Rate (E): r = -.36 (p < .16, n = 10); r = -.19 (p < .36, n = 6) [only two of the three survey measures reported]
Light Complaints (E): r = .70 (p < .06, n = 6) [only one of the three survey measures reported]
Defects (O): r = .63 (p < .05, n = 8); r = -.04 (p < .47, n = 6) [only two of the three survey measures reported]
Replacement (O): r = -.50 (p < .07, n = 10); r = .53 (p < .11, n = 7) [only two of the three survey measures reported]
Note. (I) = input measure, (E) = efficiency measure, (O) = outcome measure.
[Table 4. Bivariate Correlations: Parks and Recreation. Citizen survey measures: City Quality, Service Value, Park Quality, Recreation. Table body not recoverable from this copy.]
Table 5. Bivariate Correlations: Libraries

Citizen survey measures: City Quality, Service Value, Library Quality.
Library Costs (I): City Quality r = .16 (p < .37, n = 7); Service Value r = .80 (p < .03, n = 6); Library Quality r = .48 (p < .17, n = 6)
Costs/Borrower (E): City Quality r = .20 (p < .34, n = 7); Service Value r = .54 (p < .13, n = 6); Library Quality r = .38 (p < .23, n = 6)
Circulation Rate (O): City Quality r = .81 (p < .01, n = 7); Service Value r = .58 (p < .11, n = 6); Library Quality r = .80 (p < .03, n = 6)
Borrowers (O): City Quality r = .10 (p < .42, n = 7); Service Value r = .23 (p < .33, n = 6); Library Quality r = .15 (p < .39, n = 6)
Note. (I) = input measure, (E) = efficiency measure, (O) = outcome measure.
[Table 6. Bivariate Correlations: Fire Services and Emergency Medical Services (EMS). Citizen survey measures: City Quality, Service Value, Fire Service Quality, EMS Quality. Table body not recoverable from this copy.]
APPENDIX A
International City/County Management Association (ICMA) Benchmarking Project Data Items Used in Analysis Across Seven Service Dimensions, 1997

POLICE
Inputs
* Expenditures per capita (Police Costs)
* Number of full-time equivalent (FTE) staff (sworn and civilian) per 1,000 population (Police Staff)
Efficiency
* Number of violent crimes cleared per FTE sworn staff member (Violent Crimes Cleared)
* Number of property crimes cleared per FTE sworn staff member (Property Crimes Cleared)
* Number of arrests per FTE sworn staff member (Arrests)
* Number of moving violation citations issued per 1,000 population (Tickets)
Outcomes

ROAD MAINTENANCE
Inputs
Efficiency
Outcomes

STREET LIGHTING
Inputs
Efficiency
Outcomes

PARKS
Inputs
* Net and gross annual operating and maintenance expenditures per capita
* Full-time equivalents (FTEs) per 1,000 population

LIBRARIES
Inputs
Efficiency
* Expenditures per registered borrower (Costs/Borrower)
* Annual circulation per FTE (Circulation)
Outcomes
* Annual circulation per capita (Circulation Rate)
* Total library activity per capita (Activity)
* Percentage of population in service area who are registered borrowers (Borrowers)

FIRE SERVICES AND EMS
Inputs
EMS Outcomes
* Average time from call entry to arrival for calls requiring a basic life support response (Basic Response)
* Average time from dispatch to arrival for calls requiring an advanced life support response (Advanced Response)
* Percentage of full cardiac arrest patients delivered to a medical facility with a pulse (Alive)

Source. ICMA (1998).
APPENDIX B
Citizen Satisfaction Items Used in Analysis
References
Affholter, D. (1994). Outcome monitoring. In J. Wholey, H. Hatry, & K. Newcomer (Eds.), Handbook of practical program evaluation (pp. 96-118). San Francisco: Jossey-Bass.
Ammons, D. (1996). Municipal benchmarks: Assessing local performance and establishing community standards. Thousand Oaks, CA: Sage.
Ammons, D. (1999). A proper mentality for benchmarking. Public Administration Review, 59(2), 105-109.
Brown, K., & Coulter, P. B. (1983). Subjective and objective measures of police service delivery. Public Administration Review, 43(1), 50-58.
Coe, C. (1999). Local government benchmarking: Lessons from two major multigovernment efforts. Public Administration Review, 59(2), 110-123.
Cohen, S., & Eimicke, W. (1998, May 9-13). The use of citizen surveys in measuring agency performance: The case of the New York City Department of Parks and Recreation. Paper presented at the American Society for Public Administration annual meeting, Seattle, WA.
Decker, L. L., & Manion, P. (1987). Performance measures in Phoenix: Trends and a new direction. National Civic Review, 76(2), 119-129.
Drebin, A. (1980, December). Criteria for performance measurement in state and local government. Governmental Finance, 9, 3-7.
Fisher, R. (1994, September). An overview of performance measurement. Public Management (ICMA), 76, 2-8.
Garsombke, H., & Schrad, J. (1999, February). Performance measurement systems: Results from a city and state survey. Government Finance Review, 15, 9-12.
Glaser, M. (1994). Tailoring performance measures to fit the organization: From generic to germane. Public Productivity and Management Review, 17(3), 303-319.
Grizzle, G. (1987). Linking performance to funding decisions: What is the budgeter's role? Public Productivity Review, 10(3), 33-44.
Hatry, H., Gerhart, C., & Marshall, M. (1994, September). Eleven ways to make performance measurement more useful to public managers. Public Management (ICMA), 76, 15-18.
Hatry, H. P. (1980). Performance measurement principles and techniques: An overview for local government. Public Productivity Review, 4(4), 312-339.
International City/County Management Association (ICMA). (1974). Measuring the effectiveness of basic municipal services. Washington, DC: Author.
International City/County Management Association (ICMA). (1996). Comparative performance measurement: FY 1995 data report. Washington, DC: Author.
International City/County Management Association (ICMA). (1997). Comparative performance measurement: FY 1996 data report. Washington, DC: Author.
International City/County Management Association (ICMA). (1998). Comparative performance measurement: FY 1997 data report. Washington, DC: Author.
Kopczynski, M., & Lombardo, M. (1999). Comparative performance measurement: Insights and lessons learned from a consortium effort. Public Administration Review, 59(2), 124-134.
Lineberry, R. L. (1977). Equality and urban policy: The distribution of municipal services. Beverly Hills, CA: Sage.
Lovrich, N. P., Jr., & Taylor, G. T. (1976). Neighborhood evaluation of local government services: A citizen survey approach. Urban Affairs Quarterly, 12(2), 197-221.
Lyons, W. E., Lowery, D., & DeHoog, R. H. (1992). The politics of dissatisfaction: Citizens, services, and urban institutions. Armonk, NY: M. E. Sharpe.
MacManus, S. (1984). Intergovernmental dimensions of urban fiscal stress. Publius, 14(1), 1-82.
Melkers, J., & Thomas, J. C. (1998). What do administrators think citizens think? Administrator predictions as an adjunct to citizen surveys. Public Administration Review, 58(4), 327-334.
Milbrath, L., & Goel, M. (1977). Political participation: How and why do people get involved in politics? (2nd ed.). Chicago: Rand McNally.
Miller, T. I., & Miller, M. A. (1991a). Citizen surveys: How to do them, how to use them, what they mean. Washington, DC: International City/County Management Association.
Miller, T. I., & Miller, M. A. (1991b). Standards of excellence: U.S. residents' evaluations of local government services. Public Administration Review, 51(6), 503-514.
Miller, T. I., & Miller, M. A. (1992). Assessing excellence poorly: The bottom line in local government. Journal of Policy Analysis and Management, 11(4), 612-623.
Nagel, J. (1987). Participation. Englewood Cliffs, NJ: Prentice-Hall.
Ostrom, V., Tiebout, C., & Warren, R. (1961, December). The organization of government in metropolitan areas: A theoretical inquiry. American Political Science Review, 55, 831-842.
Parks, R. B. (1984). Linking objective and subjective measures of performance. Public Administration Review, 44(2), 118-127.
Percy, S. L. (1986). In defense of citizen evaluations as performance measures. Urban Affairs Quarterly, 22(1), 66-83.
Rosentraub, M. S., & Thompson, L. (1981). The use of surveys of satisfaction for evaluations. Policy Studies Journal, 9(7), 990-999.
Sharp, E. (1990). Urban politics and administration: From service delivery to economic development. New York: Longman.
Stipak, B. (1977). Attitudes and belief systems concerning urban services. Public Opinion Quarterly, 41, 41-55.
Stipak, B. (1979). Citizen satisfaction with urban services: Potential misuse as a performance indicator. Public Administration Review, 39(1), 46-52.
Streib, G., & Poister, T. (1998). Performance measurement in municipal governments. In The 1998 municipal year book (pp. 9-15). Washington, DC: International City/County Management Association.
Swindell, D. (1999). City of Dayton, Ohio: Volume 1: Parks, recreation, and cultural affairs survey, 1998. Dayton, OH: Center for Urban and Public Affairs.
Thompson, L. (1997). Citizen attitudes about service delivery modes. Journal of Urban Affairs, 19(3), 291-302.
Tiebout, C. (1956). A pure theory of local government expenditures. Journal of Political Economy, 64, 416-424.
Verba, S., & Nie, N. (1972). Participation in America. New York: Harper and Row.
Verba, S., Schlozman, K., Brady, H., & Nie, N. (1993). Citizen activity: Who participates? What do they say? American Political Science Review, 87(2), 303-318.
Walters, J. (1998). Measuring up: Governing's guide to performance measurement for geniuses [and other public managers]. Washington, DC: Governing Books.
Wholey, J. S., & Hatry, H. P. (1992). The case for performance monitoring. Public Administration Review, 52(6), 604-610.
David Swindell is an assistant professor and director of the Master of Public Administration program at Clemson University. His research focuses on citizen satisfaction surveying and the use of neighborhood organizations as a mechanism for delivering urban public services. His research has been published in the Journal of Urban Affairs, Public Administration Review, Nonprofit and Voluntary Sector Quarterly, and Economic Development Quarterly. Contact: dswinde@clemson.edu