Linking Citizen Satisfaction Data to Performance Measures: A Preliminary Evaluation

Author(s): David Swindell and Janet M. Kelly


Source: Public Performance & Management Review, Vol. 24, No. 1 (Sep., 2000), pp. 30-52
Published by: M.E. Sharpe, Inc.
Stable URL: http://www.jstor.org/stable/3381075


LINKING CITIZEN SATISFACTION DATA TO PERFORMANCE MEASURES
A Preliminary Evaluation

DAVID SWINDELL
Clemson University
JANET M. KELLY
University of Tennessee, Knoxville

Advocates of citizen surveys assert that the idea of surveying citizens is not contrary to representative democracy or a first step on the slippery slope toward direct democracy, but an exercise in "deliberative democracy" where elected representatives bring citizens into the decision-making process (Miller & Miller, 1991a, p. 7). This kind of direct input into the political process can enhance community participation in decision making, if not community control over service decisions. Contrary to the expectations of those who believe citizens dislike and distrust government, citizens give their local governments generally high marks for service delivery (Miller & Miller, 1992). It is not so much, it seems, that citizens do not trust government, but that many administrators do not trust citizens to render fair judgments about government. A recent study revealed that administrators in Atlanta municipal government believed that citizens would rate city services much lower than they actually did (Melkers & Thomas, 1998).

Those opposed to the use of citizen surveys in public policy, many of whom are grounded in the public administration literature of bureaucratic professionalism, argue that government professionals are uniquely qualified to assess service quality and quantity. Sometimes characterized as "technocrats" (Lovrich & Taylor, 1976), this camp relies on objective internal measures of service quality like performance measures or benchmarks as opposed to citizen evaluations, which are unquestionably subjective.

And so the two sides of the service evaluation debate are engaged. What is missing from the debate is a quantitative evaluation of how these two tools of service quality evaluation intersect. Proponents of performance monitoring assume that public service professionals know what constitutes good performance and can measure it accurately. Proponents of citizen satisfaction surveying assume that the public can evaluate service quality even if they have had no direct contact with the service provider. Advocates for both approaches have little information about how, or whether, better administrative performance results in a higher level of citizen satisfaction.
This research will explore that question across seven categories of local government service delivery (police service, fire and emergency medical services, road maintenance, refuse services, streetlighting, parks and recreation services, and libraries) in 13 local governments. Other efforts to analyze citizen ratings of performance quality with objective benchmarks derived by public service professionals have been case studies in one locality. Expanding the analysis to multiple sites with standardized performance and satisfaction data seems a proper and logical next step, but there is no consensus on methodological issues for multisite samples. Although our sample size of 13 is not sufficiently large to yield decisive results, the foundation for a methodology for comparison of citizen satisfaction surveys and internal performance benchmarks is an important first step, even if we must treat the results with caution.

Such an analysis might eventually shed light on a debated theory in public administration. In 1956, Charles Tiebout set out a "thought experiment" conceptualizing metropolitan areas with multiple municipal jurisdictions as a type of marketplace (Ostrom, Tiebout, & Warren, 1961; Tiebout, 1956). Consumers (i.e., citizens) could approach their residential location decision by choosing the jurisdiction that offered the best mix of services and tax bundles in such a way as to maximize the preferences of that consumer. This idea runs counter to traditional arguments of metropolitan reform that seek to minimize metropolitan fragmentation. Furthermore, the idea that consumer-citizens could "vote with their feet" in this marketplace was seen as unrealistic due to information asymmetries. In other words, citizens do not have the information about service quality and tax bundles to engage in the calculus familiar to the private marketplace. Melkers and Thomas (1998) argue that administrators who cannot predict public preferences are ill equipped to make decisions about what citizens want or need (p. 328). In this article, we explore the possibility that citizens may be better able to evaluate their public services than commonly thought.

Citizen Satisfaction Surveys


There are many ways citizens can participate in local decision making. The conventional forms of political participation (e.g., voting, campaigning, joining a neighborhood or interest group) are still essential to local governance. In recent decades, opportunities for more direct input have been expanded by mandatory public hearings for certain local issues from zoning to solid waste disposal, to library holdings, to bond issues. These efforts attempt to include the interested or affected public. Citizen surveys are fundamentally different, measuring general public opinion and attitudes and offering public officials potentially useful policy input information. For example, in Auburn, Alabama, citizen satisfaction results influenced funding patterns for city government (Watson, Juster, & Johnson, 1991). A survey of randomly chosen citizens is a means of overcoming the well-documented socioeconomic bias associated with other forms of citizen participation (Milbrath & Goel, 1977; Nagel, 1987; Verba & Nie, 1972; Verba, Schlozman, Brady, & Nie, 1993). But such random surveys undoubtedly capture respondents that may or may not be informed by substantial knowledge or experience with the activities of local government. And therein lies the major criticism of this tool.
Citizen surveys do not measure understanding of government service provision arrangements or even experience with certain services. The assumption behind citizen surveying is that citizens can make informed judgments about the service even if they do not personally receive the service. Miller and Miller (1991a) point out that a survey does

give voice to all types of citizens, the poorer as well as the better educated residents, those whose health may keep them from attending meetings and those in better health, shy people and outgoing people, newcomers and old timers and those who have a dispassionate point of view as well as those who are emotionally involved. (p. 8)

Citizen surveys, then, are rather like voting. The well informed cast their vote alongside the not-so-well informed, and the public imputes validity to the outcome.
In addition to the concern that citizens with no experience are unable to evaluate service quality, there is an accepted body of work that also suggests that citizens' attitudes toward their local government or toward local government services are affected by a set of factors that are not service related. These factors fall into three broad categories. First, citizens' race and income may influence their evaluation of service quality and quantity (see Brown & Coulter, 1983; Stipak, 1977). Specifically, African Americans and Hispanics may rate urban services consistently lower than Anglo-American residents do. Second, neighborhood characteristics may lead citizens to evaluate services differently (see Lineberry, 1977). Finally, characteristics of the local government itself may affect citizen satisfaction (see Lyons, Lowery, & DeHoog, 1992). Controlling for these factors is difficult in that they are interrelated: neighborhoods may reflect racial aggregation, which may be closely linked with income, which may be affected by the characteristics of the jurisdiction itself (i.e., a decaying inner city). A comprehensive model of all three types of satisfaction factors suggests that the more a citizen is invested in his or her community and feels efficacious toward local government, the greater the satisfaction with services, regardless of individual and neighborhood characteristics (Lyons et al., 1992).

Perception Versus Reality:
Citizen Attributes and Attitudes
What is the relationship between what government is doing and what citizens believe government is doing? That answer is unclear, but the types of errors a citizen might make in evaluating local services have been quite well specified. First, there may be errors of attribution. A citizen may believe that a governmental jurisdiction is delivering a service that it is not delivering, or may believe a jurisdiction is not delivering a service when it is. Attribution error can occur in more fragmented metropolitan regions where services may be provided by a number of general-purpose governments, special districts, or private contractors. Thus, attribution error can occur in reference to both provision and production aspects of service delivery (for a detailed discussion of attribution error, see Lyons et al., 1992). A recent survey in the Detroit metropolitan area found that citizens frequently attribute service delivery to their city or township, even when the service is provided by another jurisdiction or a private contractor (Thompson, 1997). The same was found for parks and recreation provision in Dayton, Ohio (Swindell, 1999).
The second type of error a citizen might make, assessment error, is our primary interest. Here, the citizen evaluates the quality of services in a way that contradicts some objective measure of service quality, like a performance measure. Is the likelihood of assessment error in citizen evaluation serious enough to call into question the use of surveying techniques? Some say yes, especially when the questions about service quality are specific and tied to personal experience. "Responses to vague satisfaction or evaluation questions probably reflect at best some unknown mixture of different aspects of service provision" (Stipak, 1979, p. 51). A study of police service in an Alabama city led one author to conclude that there is no link between actual quantity and quality of service provided and citizen perceptions of service quantity and quality (Brown & Coulter, 1983). "The general tendency . . . [is] to attribute differences between citizen perceptions and agency record measures to erroneous perceptions on the part of citizens" (Percy, 1986, p. 67). Yet, caution is urged before all disparity in measures is blamed on faulty citizen perceptions.

Other attempts at linking subjective and objective measures of service quality yield more optimistic conclusions. Parks (1984) conceded that citizens' perceptions and own experiences do affect their evaluation of municipal services, but that a change in some objective measure (like service quantity) can affect citizens' subjective perceptions of service effectiveness. A comparison of city employees' evaluations of street conditions with citizen evaluations indicated that citizens could make accurate evaluations, especially when the multiple, specific dimensions of the services are presented to citizens (Rosentraub & Thompson, 1981). Percy (1986) demonstrated congruence between citizen perception of police response time and actual response time as measured by the agency (one of the more reliable measures of police service quality). A survey by the New York City Parks and Recreation Department revealed that citizens perceived improvements in park cleanliness and safety, but only park users were surveyed (Cohen & Eimicke, 1998).

An Overview of Performance Measurement


The notion of measuring program effectiveness is hardly new, but the reinventing government movement of the 1980s and 1990s and its emphasis on program outcomes made performance measurement a common practice in local government administration. There is little evidence, however, that the performance measures have much impact on administrative practices. Recent estimates indicated that less than half of cities with a population of more than 25,000 use performance data, and only 23% of the cities using performance data had centralized, citywide performance monitoring systems (Streib & Poister, 1998). Most municipal departments monitor their performance over time, but less than 20% compare their performance with other municipalities (Streib & Poister, 1998). Despite the proliferation of how-to manuals offered by professional associations, relatively few cities move beyond the compilation of workload data to real measures of program effectiveness. Examination of actual performance documents reinforces the conclusion that cities are likely to measure the aspects of their programs that are amenable to measurement, not to undertake the more difficult and demanding task of measuring program effectiveness (Grizzle, 1987; MacManus, 1984). In fact, Fisher (1994) estimated that 75% of organizations that collect performance data do not use the results for decision making. Why would local governments take the time and spend the money to develop and implement performance measures if they do not intend to use them? The answer may lie in the "me, too" nature of current trends in local administration, especially when those trends carry the aura of increased efficiency and professionalization (Ammons, 1996; Hatry, Gerhart, & Marshall, 1994; Walters, 1998).

Performance Measures as Objective Indicators
of Effectiveness: The Causal Fallacy
The best performance measures are valid, reliable, understandable, timely, resistant to perverse behavior, comprehensive, nonredundant, cost-sensitive, program-specific, and focused on aspects of performance that are controllable (Ammons, 1996; Hatry, 1980; Wholey & Hatry, 1992). Generic or "cookbook" performance measures rarely meet the managerial needs of the organization (Decker & Manion, 1987) and never reflect the changing environment or expectations for the organization (Glaser, 1994). But even as performance measurement becomes institutionalized, local governments have not typically moved beyond generic formulations (Glaser, 1994).

The distinction between generic and program-specific measures is not lost on local administrators. A recent survey of city and state auditors found respondents satisfied with the input and output data collected as a part of their performance management system, but not so satisfied with their measures of program effectiveness or efficiency (Garsombke & Schrad, 1999). One might conclude that performance measures provide useful information for managers but do not usually capture the effectiveness of the programs to which they may be applied, especially if the measures are static as the program changes to meet changing demands (Affholter, 1994). Furthermore, the limited use of these measures for decision making can then be explained not so much by resistance of local managers to being held accountable for results as by a manager's understanding that what is being measured may not be a suitable basis for decision making.
For most public programs and services, success cannot easily be quantified. One can measure "tons of garbage picked up," but one cannot measure "crimes prevented by neighborhood policing." Proxy measures are usually developed to try to capture certain program outputs, and those proxy measures are assumed to be correlated with actual program outcomes. Public employees are thus induced to maximize the proxy being measured, and managers are held accountable for program results based on how they performed on those proxies. The first problem with this approach is that the correlation between the proxy and the outcome may be weak. The second problem is that correlation is not causation (Drebin, 1980).

Furthermore, information about outcomes does not reveal how those outcomes were achieved.


Unfortunately, many public employees, elected officials, and media people believe that regularly collected outcome information [means that] the government program and its staff were the primary causes of the outcomes. . . . Outcomes information provides only a score . . . whether one is winning or losing and to what extent . . . but it does not indicate why. (Hatry et al., 1994, p. 17)

Confusion over real output and proxy measures, over correlation and causation, and over the role of rational information as a basis for decision making can have profound managerial and political consequences.

Comparative Performance
Measurement: The ICMA Project
It is not surprising, given these limitations, that local governments approach benchmarking, or comparing their performance data with other localities, with considerable trepidation. Unquestionably, benchmarking can lead local managers to ask important questions, like "How does Podunk keep its solid waste disposal costs so low?" The answer may lead to a sharing of information and management practices that works to the benefit of the citizens. Of course, the converse is also true. Media, interest groups, mayors, and councilpersons may also ask why their city is paying so much more for solid waste disposal than Podunk in such a way that a management failure is implied. Ammons (1999) suggests that the proper context for benchmarking is the recognition that one's city will not be the best at every aspect of service delivery and that officials should approach benchmarking with the idea of learning from those who can perform the service better. This takes optimism and trust, especially when the results are offered for public inspection.
In 1994, the International City/County Management Association (ICMA) created a forum for the first comprehensive benchmarking project. A consortium of city and county managers gathered to identify best practices in local government police services, fire services and emergency medical services (EMS), neighborhood services, and support services. Each jurisdiction appointed a representative to a technical advisory committee to work out the details of defining indicators and collecting the data. Data definition proved a daunting task. The members took 2 years to look beyond what performance measures were readily available to those measures that would best represent service quality (Kopczynski & Lombardo, 1999). The results were mixed. Data were simply not available from some member localities in the early stages and not comparable in others. For example, there was no distinction between direct and indirect costs, and no standardization of fringe benefits, depreciation rates, or cost-of-living adjustments across participating local governments (Coe, 1999).

The first report, published in 1996, represented an imperfect but remarkable achievement. In addition to the quantitative ranking of participating jurisdictions, accompanying each ranking was some explanation of why the indicator might vary across institutions. High performers were identified, and information about the practices in the high-performing jurisdictions was offered. Finally, the project offered workshops to help government officials, elected and appointed, explain performance measurement to citizens. There were also workshops for citizens on using the benchmarks to communicate with local officials about community needs. The outreach aspect of the project is an acknowledgment that benchmarking is not for the faint of heart. When outcome measures are used as the basis for political attacks and unflattering media coverage, the reasonable government will conclude that the information that might be gained is not worth the cost in damage to the reputation of the local officials. Each report acknowledges limitations in the data (technical, explanatory, and incomplete) and cautions against the use of results to criticize local managers (see ICMA, 1996, 1997, 1998).

Linking Citizen Satisfaction
and Performance Measurement
Many governmental jurisdictions throughout North America use citizen surveys and many others use various types of performance measurement instruments. However, there are dozens of cities and counties that use both internal performance indicators and citizen satisfaction surveys to measure service quality. The idea of measuring service effectiveness along both subjective and objective dimensions can be traced to the early collaboration between the Urban Institute and the ICMA. This effort yielded the first handbook for measuring performance in municipal service delivery, containing a chapter on performance indicators and one on citizen surveying (ICMA, 1974). A number of cities and counties currently participating in the ICMA benchmarking project are also doing citizen satisfaction surveys. Their survey instruments typically gauge overall citizen satisfaction with city/county services and satisfaction with specific and visible services like police, fire, EMS, parks, refuse, streetlighting, and other quality-of-life measures.

These localities typically review their performance and satisfaction data separately, but some integrate the data for presentation purposes. For example, Dayton, Ohio, compares its survey results to national benchmarks to determine how its citizens' evaluations stack up against national averages. Portland, Oregon, publishes service efforts and accomplishments (SEAs, a type of performance measure) by department, and includes citizen evaluations of the services alongside the performance data. Prince William County, Virginia, has an ongoing commitment to citizen satisfaction surveying and publishes its performance data by department on the Internet for citizens to review. The jurisdictions that employ both approaches to measuring service quality offer an opportunity to gauge the correlations between objective performance measures and subjective citizen survey evaluations.

Service Dimensions and Data
Sources from the Localities
This article examines the correlation between the results of citizen satisfaction surveys conducted in 1997 or 1998 and internal performance indicators as measured by ICMA in its 1997 report, along seven service dimensions in 12 cities and 1 county (Austin, Texas; Bellevue, Washington; Calgary, Alberta, Canada; Dayton, Ohio; Fort Worth, Texas; Gresham, Oregon; Kansas City, Missouri; Norfolk, Virginia; Odessa, Texas; Phoenix, Arizona; Portland, Oregon; Prince William County, Virginia; and Tempe, Arizona). The cities and county were the initial sites selected from the ICMA benchmarking project participant jurisdictions that had done good quality citizen satisfaction surveys and were willing to share the raw data from those surveys with us. The criterion for including a local government in the sample was that its citizen satisfaction survey had been done via a random sampling process, either mail or telephone, using generally accepted standards of survey research. Self-selected surveys were not included, nor were measures of citizen satisfaction derived through focus groups or other qualitative techniques. Additionally, each site either published or made available the frequencies in each category of response for each question.
The seven service dimensions include police services, fire services and EMS, road maintenance, refuse services, streetlighting, parks, and libraries. A two-step process was used to select the indicators for these service dimensions. First, interviews with officials in Prince William County, Tempe, Kansas City, Portland, and Austin helped identify the different data items that local officials thought most indicative of service quality. For instance, police administrators explained why clearance rates would be more indicative of organizational effectiveness than arrest rates. Fire chiefs explained why fire suppression should be measured in residential rather than commercial dwellings (confining a fire to one room of a residential dwelling is a different level of fire suppression than confining a fire to one room in a commercial warehouse).

Patterns of reporting data in the ICMA database have proved to be another limitation. When it was obvious that very few observations were reported for a data item within a service category, that item was not included. This winnowing sometimes led to relatively few data items within a service area. Park services are an example of this, where data aggregation across participating cities made input indicators (net and gross annual operating and maintenance expenditure per capita, and full-time equivalents [FTEs] per 1,000 population) the only viable choices for this initial analysis. Thus, the number of cases available for initial analysis limits this preliminary exploration of the relationships between citizen satisfaction and performance indicators. More items will be included for future analysis as additional jurisdictions are added to the database. A complete list of the data items in the service categories is located in Appendix A.

Method
The most significant challenge confronting research along the lines pursued in this article is data availability. Many jurisdictions employ either performance measurement and benchmarking techniques or citizen satisfaction surveys to measure service quality. It is more rare that a jurisdiction will employ both tools to measure aspects of quality for the same service. For the subset of jurisdictions that do so, there is a commensurability problem in that different jurisdictions tend to develop their own measures tailored to what they perceive as the unique characteristics of their delivery system and/or the particular circumstances in which they are delivering services. Similarly, different jurisdictions might want to ask citizens their perceptions of the quality of the EMS, but they may ask the survey questions in fundamentally different ways.
To overcome the incommensurability problem in the performance measurement part of the analysis, we consciously chose to look only at cities that were participating in the ICMA Center for Performance Measurement project. These participants jointly developed a series of standardized performance indicators for which data were to be collected from all participating jurisdictions. This was to facilitate ICMA's ability to compare one jurisdiction against another for purposes of identifying those with strong performance on certain measures and thereby identify best practices that could be shared across all local governments. For our purposes, it has the added benefit of using a standardized instrument to measure similar services across different jurisdictions. Performance measures may now be compared on the same metric.
Citizen satisfaction surveys, however, do not have a consistent format across cities. There is no established standardized citizen survey format as with the performance indicators. Each locality usually constructs its own survey and, if it repeats the survey occasionally, will not make changes in the instrument, so as to facilitate trend analysis of citizen satisfaction over time. This creates a difficulty in terms of calculating a road maintenance satisfaction score, for instance, when each site asks and/or scores the question differently. A meta-analytic approach is required for this stage of the research to obtain comparability across jurisdictions on the same measures.

Fortunately, Miller and Miller (1991a) developed a technique to accomplish this using the Percent to Max (PTM) scale that they created through an extensive analysis of 3,823 questions across 261 surveys (see Miller & Miller, 1991a, for an extensive discussion of the full methodology used in the meta-analysis). The PTM scale is a 100-point scale of percentages. A PTM score of 15, for instance, would indicate that the average satisfaction with a given service is very low. High scores on the PTM scale suggest higher average satisfaction for the given service.
To convert one question from one city's survey into a PTM score, the percentage of responses must first be combined into a mean score for that question. The unadjusted PTM score is the question mean minus one, divided by the question scale maximum minus one. Once the PTM is calculated, it must be adjusted to take into account variations in how the question was worded, the number of possible answers in the answer set, the nature of the answer set (relative vs. absolute), skewness of the answers (i.e., more positive answers than negative), and the anchor wording. By adding and subtracting points to the unadjusted PTM score based on the nature of how the question was actually asked, it is possible to calculate the adjusted PTM score. Applying these PTM adjustments to each of our sites allowed us to generate comparable scores for each jurisdiction, though the wording in each of the individual surveys was slightly different. There were 14 questions across the 13 cases that addressed the service areas taken from the ICMA report (see Appendix B).
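To make the conversion concrete, the following minimal sketch computes an unadjusted PTM score from a question's response frequencies and expresses it on the 100-point scale. The frequency counts and the final adjustment value are hypothetical illustrations, not figures from our data; the actual adjustments depend on Miller and Miller's (1991a) rules for question wording and answer sets, which are not reproduced here.

    # Minimal sketch of the unadjusted PTM calculation described above.
    # All numbers below are hypothetical illustrations.

    def unadjusted_ptm(frequencies):
        # frequencies[i] = number of respondents choosing answer i+1
        # on a 1..k scale; returns 100 * (mean - 1) / (k - 1).
        k = len(frequencies)
        n = sum(frequencies)
        mean = sum((i + 1) * f for i, f in enumerate(frequencies)) / n
        return 100 * (mean - 1) / (k - 1)

    # Example: a 5-point satisfaction item with 200 respondents.
    freqs = [10, 20, 50, 80, 40]      # counts for answers 1 through 5
    ptm = unadjusted_ptm(freqs)       # mean = 3.6, so PTM = 65.0
    adjusted = ptm + 3                # wording/answer-set adjustment
                                      # (illustrative value only)
    print(round(ptm, 1), round(adjusted, 1))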
At this point, the commensurability problem was addressed. But there remained a different problem in that each jurisdiction often asked about different services in its surveys. Some municipalities not responsible for libraries or fire services did not ask citizens to rate their satisfaction because responsibility for those services was beyond the control of the jurisdiction. The same problem arose for the performance indicators. Not only did jurisdictions not measure performance for services not under their control, but the ICMA did not ask them to measure all the services that were under their control. Although the ICMA performance measurement project continues to expand, there are significant service areas that are not covered by the template as yet. For example, the ICMA project has yet to include K-12 education, sewer/sanitation services, wastewater treatment, or other utilities. Thus, the analysis here is limited initially by the availability of jurisdictions that employ both performance measures and citizen satisfaction surveys as well as by the services covered by those cities in their use of these tools. At the time of this writing, 13 jurisdictions have provided both data sets for enough of the ICMA-covered services to facilitate an exploratory analysis of the relationship between objective and subjective service-quality measures.
With these 13 cases, it was possible to calculate the measures of correlation between the different performance indicators and the respective satisfaction scores from the surveys. Obviously, citizen surveys rarely contain multiple questions concerning a given public service. However, the goal of our analysis at this phase was to help identify the relationships between a single satisfaction question and the various performance indicators available from the ICMA indicators project. Both strong and weak correlations were expected, depending on the service type. Both findings are pertinent to administrators, especially if one views satisfaction measures as an outcome of the service delivery process.

After going through the ICMA participant cities and finding those that had conducted recent citizen surveys (and securing their data files), we had 28 performance indicators with an acceptable number of observations and 14 PTM scores across the 13 cases. Although additional cities are being integrated into the database, these 13 cases are used for this initial analysis. Furthermore, the various performance measures used to evaluate each service area fall into three categories specified by the ICMA. They measure service inputs, delivery efficiency, and service outcomes. The relationship of each category of measure with citizen satisfaction is also analyzed in this article, with the expectation being that citizen satisfaction will likely be more closely associated with the performance measures of outcome than with the input or efficiency measures. Unfortunately, there were too few cities reporting solid-waste collection performance measure data. Therefore, that service area is not analyzed in this article.

Findings
To begin identifying the relationships between the performance indicators and the citizen satisfaction PTM scores, measures were paired by service area to calculate simple bivariate Pearson correlations (r). The results are presented in each table and reveal several tantalizing patterns. Some of the correlation results are not reported due to a lack of cases following the pairwise deletion of missing data. Only results with at least six observations are presented. Due to the small number of observations for each calculation and the lack of randomization in case selection, we refer to the p value of .10 as a reference only. It is not used in the traditional statistical sense to estimate the likelihood of a set of observations occurring by chance. Each objective service measure was compared with the overall citizen satisfaction rating of city quality and overall satisfaction with service value from the survey data. In addition, the service areas were compared with the specific measures of service quality related to that service area. The correlations provide a measure of the strength of the association and direction of the relationships between the performance measures and their respective citizen satisfaction scores.
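As a sketch of the calculation just described, the code below pairs one performance indicator with one PTM score across the 13 jurisdictions, drops any jurisdiction missing either value (pairwise deletion), and reports r only when at least six observations remain, mirroring our reporting rule. The data values shown are hypothetical placeholders, not figures from the tables.

    # Sketch: bivariate Pearson r with pairwise deletion of missing data.
    import math

    def pearson_pairwise(xs, ys, min_n=6):
        # Keep only jurisdictions reporting both values (pairwise deletion).
        pairs = [(x, y) for x, y in zip(xs, ys)
                 if x is not None and y is not None]
        n = len(pairs)
        if n < min_n:                 # results reported only when n >= 6
            return None, n
        mx = sum(x for x, _ in pairs) / n
        my = sum(y for _, y in pairs) / n
        cov = sum((x - mx) * (y - my) for x, y in pairs)
        sx = math.sqrt(sum((x - mx) ** 2 for x, _ in pairs))
        sy = math.sqrt(sum((y - my) ** 2 for _, y in pairs))
        return cov / (sx * sy), n

    # 13 jurisdictions; None marks unreported data (hypothetical values).
    violent_crimes = [8.2, None, 5.1, 9.7, 4.0, 6.3, None,
                      7.5, 3.8, 5.9, 6.8, 4.4, 7.1]
    city_quality   = [62, 70, 75, None, 80, 68, 71,
                      64, 82, 74, 66, 78, 65]
    r, n = pearson_pairwise(violent_crimes, city_quality)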


Table 1. Bivariate Correlations: Police Services

                              Citizen Survey Measures
Performance          City     Service  Police   Day      Night
Measure              Quality  Value    Quality  Safety   Safety   Enforcement

Police Costs (I)
  r                  -.34     -.17
  p<                  .21      .36
  n                    8        7
Police Staff (I)
  r                  -.05     -.42      .09     -.22      .02
  p<                  .45      .20      .41      .32      .48
  n                    9        6        9        7        6
Tickets (E)
  r                   .43     -.40      .18      .10     -.10      .22
  p<                  .11      .19      .31      .41      .41      .33
  n                   10        7       10        8        8        6
Violent Crimes (O)
  r                  -.40     -.78     -.32     -.43     -.45     -.13
  p<                  .10      .02      .16      .12      .11      .39
  n                   12        7       12        9        9        7
Property Crimes (O)
  r                  -.23     -.61     -.41     -.27     -.60     -.15
  p<                  .24      .07      .09      .24      .04      .38
  n                   12        7       12        9        9        7
Police Response (O)
  r                  -.47     -.65     -.67     -.43     -.92
  p<                  .09      .06      .02      .15      .00
  n                   10        7       10        8        7
Clear Rate (O)
  r                   .25      .10     -.05      .02      .13
  p<                  .22      .42      .44      .48      .37
  n                   12        7       12        9        9
Note. (I) = Input measure, (E) = Efficiency measure, and (O) = Outcome measure. Blank cells indicate correlations not reported due to fewer than six observations.

The first set of correlations examined police service performance indicators versus several citizen satisfaction measures (see Table 1). City quality exhibited the expected relationships with two of the police services outcome measures. The number of violent crimes per 1,000 population (Violent Crimes) was negatively associated with city quality. More violence appears to lead to less satisfaction, as might be expected. Similarly, as average response time (Police Response) increases, satisfaction with the city overall declines. The remaining two outcome measures do not exhibit notable correlations with city quality, nor do the input and efficiency measures. There was a similar pattern between the performance outcome measures and the PTM scores for service value. Higher levels of violent crime and property crime, as well as longer average service response times, were all negatively related with evaluations of service value, as might be expected.

In addition to the general service quality and value PTM scores, the police service measures were also compared to four PTM scores more directly related to police services.


Table 2. Bivariate Correlations: Road Maintenance Services

                          Citizen Survey Measures
Performance Measure    City Quality   Service Value   Street Repair

Road Costs (I)
  r                       -.12            -.62             .38
  p<                       .38             .09             .20
  n                        10               6               7
Road Cost/Mile (E)
  r                       -.55            -.83            -.40
  p<                       .06             .02             .21
  n                         9               6               6
Cost/Good Road (E)
  r                       -.48                             .69
  p<                       .14                             .06
  n                         7                               6
Good Roads (O)
  r                        .40                            -.36
  p<                       .16                             .21
  n                         8                               7
Note. (I) = Input measure, (E) = Efficiency measure, and (O) = Outcome measure.

A common citizen satisfaction question asks respondents to rate the overall quality of their police. Once again, only two of the performance outcome measures exhibited relatively strong relationships with this measure of Police Quality: property crimes and average response times. Higher scores on these two indicators are related to lower citizen quality ratings. Another common question in surveys is to ask respondents to rate their sense of safety when walking in their neighborhood during the day or at night. None of the correlations reached the .10 cutoff with regard to citizen safety during the daytime (Day Safety), though violent crime rate and average response times came close and were in the expected direction. Citizen safety during the night (Night Safety) did exhibit stronger correlations with property crime rate and average response times in the expected negative direction. Furthermore, violent crime rate was close to achieving the cutoff. None of the performance indicators appear related to the PTM score for quality of traffic law enforcement (Enforcement), including the moving violations citation indicator (Tickets). Interestingly, the clearance rate of violent crimes by police (Clear Rate) does not appear related to any of the PTM scores, even though professional police administrators say this is one of the best measures of service outcomes.

Another focus of the ICMA effort examines jurisdictional performance on road maintenance services. Four performance indicators were available from cities providing citizen satisfaction data. The correlations are presented in Table 2. With regard to city quality, the only performance indicator with a strong relationship is the efficiency measure of expenditures per paved lane mile maintained (Road Cost/Mile). Higher expenditures are associated with lower satisfaction with city quality. Similarly, service value PTM ratings are lower in areas with higher road expenditures per capita (Road Costs) and expenditures per lane mile maintained (Road Cost/Mile).

Table 3. Bivariate Correlations: Streetlighting Services

                          Citizen Survey Measures
Performance Measure    City Quality   Service Value   Light Quality

Light Costs (I)
  r                       -.42            -.69
  p<                       .11             .06
  n                        10               6
Light Cost Rate (E)
  r                       -.36            -.19
  p<                       .16             .36
  n                        10               6
Light Complaints (E)
  r                        .70
  p<                       .06
  n                         6
Defects (O)
  r                        .63            -.04
  p<                       .05             .47
  n                         8               6
Replacement (O)
  r                       -.50             .53
  p<                       .07             .11
  n                        10               7
Note. (I) = Input measure, (E) = Efficiency measure, and (O) = Outcome measure.

These results might be expected for those jurisdictions with poorer roads that spend more on road maintenance.

Such findings suggest that citizens may be quite savvy at rating the efficiency of maintenance services, given their preferences. However, the relationship of the performance indicators with the more specific PTM score for quality of street maintenance (Street Repair) does not support such a conclusion. Neither of the above relationships has a strong association with street maintenance satisfaction. Furthermore, expenditures per paved lane mile assessed as being in satisfactory condition (Cost/Good Road) exhibits a strong positive relationship with street maintenance satisfaction. Although this seems like a good result, it undercuts the hypothesis that citizens are good judges of efficiency. High cost for a quality road is not efficient. More damaging is the negative (albeit weak) association between street maintenance satisfaction and percentage of lane miles assessed as being in satisfactory condition (Good Roads). Street and road quality and maintenance questions score notoriously low in citizen satisfaction surveys. These results may cast doubt on citizens' ability to evaluate street quality. Alternatively, professional street engineers may have quality standards well below those of citizens in general, possibly due to the high costs of road maintenance in the context of limited resources.
A third service area covered by the ICMA is streetlighting. Table 3 presents the results of the correlation analysis of streetlighting performance indicators versus the PTM scores from citizen surveys. In terms of the associations with overall city quality, three of the five indicators exhibited a relatively strong association, though two of these were in an unexpected direction. As the number of complaints per 1,000 population (Light Complaints) increases, so do overall city quality ratings. Also, more streetlighting defects (Defects) are associated with higher city quality ratings. On the other hand, as the average number of days to replace a defective streetlight (Replacement) increases, city quality scores decline, as expected. These mixed results suggest at least two possibilities. First, it may be that critics are right and that citizens are not capable of accurately evaluating services. Second, and more defensible in this case, streetlighting may not be a significant element in a citizen's rating of overall city quality. The same may hold true for overall service value, which had only one substantive correlation: higher streetlighting expenditures were associated with lower service value. Unfortunately, it is not possible to evaluate the importance of streetlighting quality to city quality or service quality, as there was an insufficient number of cities that asked about streetlighting quality to run the correlations.
Previous research has established the important role that park and recreational services play in citizens' evaluation of their community and even in the choice of residential location (e.g., see Miller & Miller, 1991a). Not surprisingly, citizen satisfaction surveys often ask questions about satisfaction with these services. Park and recreation departments in local governments have developed a wide array of performance indicators to use in their planning and evaluation activities. The ICMA has dedicated one of its foci to such services. Due to limitations on what cities have reported regarding park and recreation services, unfortunately, only performance input measures were available for this analysis. The results of the correlational analysis of the performance indicators with the PTM survey scores are presented in Table 4.
The results in Table 4 illustrate several interesting associations that generally support the findings in other research on the importance of parks in terms of overall quality of life. Both net and gross expenditures per capita (Net Costs and Gross Costs) exhibited positive associations with overall city quality. This may be driven by capital expenditures, because the operational costs of supporting more full-time equivalent employees will be less, and the number of these FTEs per 1,000 population (Park Staff) has a substantive negative association with city quality. Thus, it may be the case that citizens like more parks with fewer workers (a finding strengthened by the negative association with overall service value in relation to FTEs). Without additional efficiency and outcome measures, however, a more confident conclusion about park investment effects on city quality cannot be drawn from these data.

PTM scores of citizen satisfaction with parks and recreation services also exhibited associations in the expected directions. Gross expenditures (Gross Costs) were associated strongly with both park quality and recreation quality PTM scores. The only counterintuitive result was the negative (though not substantive) association between park satisfaction and FTEs.
The fifth service area included from the ICMA project was library services. The number of available performance measures includes one input indicator, one efficiency indicator, and two outcome indicators. Library quality questions are not as common in citizen satisfaction reports conducted by cities because, oftentimes, library services are provided by county or special purpose jurisdictions.


Table 4. Bivariate Correlations: Parks and Recreation Services

                          Citizen Survey Measures
Performance Measure    City Quality   Service Value   Park Quality   Recreation

Net Costs (I)
  r                        .42            -.22             .38           .27
  p<                       .10             .32             .14           .28
  n                        11               7              10             7
Gross Costs (I)
  r                        .59             .15             .60           .58
  p<                       .03             .37             .03           .09
  n                        11               7              10             7
Park Staff (I)
  r                       -.55            -.71            -.34
  p<                       .08             .06             .23
  n                         8               6               7
Note. (I) = Input measure, (E) = Efficiency measure, and (O) = Outcome measure.

Table 5. Bivariate Correlations: Library Services

                          Citizen Survey Measures
Performance Measure    City Quality   Service Value   Library Quality

Library Costs (I)
  r                        .16             .80             .48
  p<                       .37             .03             .17
  n                         7               6               6
Costs/Borrower (E)
  r                        .20             .54             .38
  p<                       .34             .13             .23
  n                         7               6               6
Circulation Rate (O)
  r                        .81             .58             .80
  p<                       .01             .11             .03
  n                         7               6               6
Borrowers (O)
  r                        .10             .23             .15
  p<                       .42             .33             .39
  n                         7               6               6
Note. (I) = Input measure, (E) = Efficiency measure, and (O) = Outcome measure.

Not surprisingly, then, the number of observations in this section is smaller than in the previous sections. The correlation results are presented in Table 5.
Only one of the four performance indicators exhibited an association with city quality. The performance outcome measure annual circulation per capita (Circulation Rate) is positively and strongly related to overall city quality. Citizens in communities with high circulation rates like their community more so than do citizens in communities with lower rates. City quality was not associated with any of the other library measures.


Table 6. Bivariate Correlations: Fire Services and Emergency Medical Services (EMS)

                          Citizen Survey Measures
Performance Measure    City Quality   Service Value   Fire Service Quality   EMS Quality

House Fires (O)
  r                       -.50                                 .10
  p<                       .13                                 .42
  n                         7                                   7
All Fires (O)
  r                       -.22                                 .54                .08
  p<                       .27                                 .05                .43
  n                        10                                  10                  7
Fire Response (O)
  r                        .13                                -.25
  p<                       .38                                 .29
  n                         8                                   7
Basic Response (O)
  r                       -.37                                                   -.29
  p<                       .18                                                    .24
  n                         8                                                      8
Advanced Response (O)
  r                        .34                                                   -.37
  p<                       .23                                                    .21
  n                         7                                                      7
Note. (O) = Outcome measure.

Overall service value was another PTM score that had a positive and strong relationship, this time with library expenditures (Library Costs), an input measure. The association between service value and circulation (an outcome measure) approaches the cutoff and is in the expected direction. Similarly, there is a strong positive correlation between library quality from the citizen surveys and circulation per capita (Circulation Rate). The percentage of the population in the service area that are registered borrowers (Borrowers) did not exhibit the expected relationship. Though positive, there were insufficient cases to determine if the relationship was substantive.
The last service area covered by the ICMA project and our collection of citizen surveys is fire services and EMS. The battery of performance measures available from the ICMA is extensive, but the availability of data from cities using a large number of the indicators presents a challenge. Of the 13 performance measures identified as best by fire service and EMS professionals, only 5 are commonly employed by most cities in the ICMA project. Fire services and EMS are frequently among the highest rated services provided by governments across many citizen surveys. However, not all jurisdictions maintain their own fire services or EMS, and therefore not all cities ask citizens about them. This poses a problem, as evidenced below. For those cities that did provide data on their fire services and EMS, the correlations with service area performance indicators are presented in Table 6.
Only one of the bivariate correlations exhibited a substantively strong relationship. Jurisdictions in which citizens tend to rate fire service quality higher tend to be jurisdictions that have more fire incidents per 100,000 population. This is a difficult result to interpret, particularly if one wants to infer causation. Do citizens rate fire services highly because there are more opportunities to see them in action? If so, then there should be a negative rating on the fire services' ability to address community risk (too many fires). In addition, if cities want to understand this situation, then more specific questions about the various elements of fire service need to be asked. The overall scoring of quality may simply be mixing too many elements together.

The results on the other correlations do not help disentangle this situation. Citizens do like their fire services and EMS. But these evaluations are not based on personal use of the services. Other factors may be driving these evaluations, such as media portrayal of fire personnel. In Dayton, Ohio, EMS users rated such services much lower in quality than nonusers. However, users of fire services were even more intense in their approval of fire service quality than nonusers. Clearly, there is a need for more evidence to better understand the relationships between subjective quality ratings and the more objective performance indicators.

Summary and Conclusions


This article represents the initial stage of a larger research project that explores the linkages between various individual objective public service performance indicators and the subjective citizen satisfaction question to which each is related. Future research will move beyond this basic analysis to look at how groups of indicators relate to one another and to model the possible means by which indicators might be expected to influence satisfaction among citizens. Such an approach would be useful in empirically testing a more formal model of input, efficiency, and output performance measures as explanatory factors of citizen service satisfaction.
This initial review of the available data also suggests that citizens are better able to evaluate services than some critics might suggest. Across the tables, we presented 113 bivariate pairings of subjective citizen service evaluations with objective performance indicators. Of these, 88 had sufficient data to allow an initial correlation calculation. Whereas only 27 of these 88 met the .10 cutoff point, only 4 of the 27 (15%) were in a direction opposite that expected. Ignoring the cutoff point and focusing only on the direction of the 88 correlations, 75 (85%) had relationships in the predicted direction. So it may be that citizens are capable of distinguishing good from bad services in terms of inputs, efficiency, and outputs. But additional data from many more cities using performance measures and citizen surveys are required before this debate can be put to rest.
The findings from this line of research have important implications for administrators and elected officials. Despite the problems already identified with both performance measurement and citizen surveying, they remain the primary tools of service quality evaluation at the local level. The fact that it is very difficult to measure program effectiveness or service quality in a totally objective fashion should never discourage attempts to develop and refine better indicators. The fact that citizens may not understand the difference between the role of their local government as service provider and service producer does not make their evaluation of service quality any less meaningful.


A better understanding of the nexus between the objective measure of service quality and the subjective measure of service quality reinforces the point that the goal of both efforts is to improve service quality.

There may be another advantage to a two-pronged effort at service quality evaluation. The point of a citizen satisfaction survey is not to engage specialists in a technical debate but to engage citizens in the process of governance in a meaningful way. As such, it is an outreach tool because it includes people who may not have had any relevant experience with government or even feel connected to government. Watson et al. (1991) noted that the flow of information from the surveying process was not unidirectional. Citizens use the survey to communicate to city council, and city council uses the instrument to educate citizens and to communicate to citizens that it values their perceptions about service quality (Watson et al., 1991, p. 238; see also Sharp, 1990).
Wildavsky (1979) cautions,

The temptation of the analyst is to treat citizens as objects. By depriving people of autonomy in thought (their consciousness is false, their experience invalid), it is possible to deny them citizenship in action. The moral role of the analyst, therefore, demands that cogitation enhance the values of interaction and [not become a substitute] for it. (p. 277)

This research effort, then, is intended to enhance our understanding of the interaction between citizens' evaluation of local government service quality and the understanding that local service providers have of their own effectiveness. The "cogitation" involves asking hard questions about the strengths and weaknesses of citizen surveys and performance measurement practices, but with the conviction that citizens have an important and legitimate role in informing service decisions.

APPENDIX A
International City/County Management Association
(ICMA) Benchmarking Project Data Items Used in
Analysis Across Seven Service Dimensions, 1997

POLICE

Inputs

* Expenditures per capita (Police Costs)
* Number of full-time equivalent (FTE) staff (sworn and civilian) per 1,000 population (Police Staff)

Efficiency

* Number of violent crimes cleared per FTE sworn staff member (Violent Crimes Cleared)
* Number of property crimes cleared per FTE sworn staff member (Property Crimes Cleared)
* Number of arrests per FTE sworn staff member (Arrests)
* Number of moving violation citations issued per 1,000 population (Tickets)


Outcomes

* Number of violent crimes per 1,000 population (Violent Crimes)
* Number of property crimes per 1,000 population (Property Crimes)
* Average response time to top priority calls from receipt of call by government to arrival (Police Response)
* Percentage of violent crimes cleared per FTE sworn staff member (Clear Rate)

ROAD MAINTENANCE

Inputs

* Total operating and maintenance expenditures per capita (Road Costs)

Efficiency

* Total capital and operating expenditures per paved lane mile maintained for which the jurisdiction is responsible (Road Cost/Mile)
* Operating and maintenance expenditures per paved lane mile assessed as being in satisfactory condition (Cost/Good Road)

Outcomes

* Percentage of lane miles assessed as being in satisfactory condition (Good Roads)

STREETLIGHTING

Inputs

* Total annual operating and maintenance expenditures for streetlights per capita (Light Costs)

Efficiency

* Total operating and maintenance expenditures per streetlight maintained (Light Cost Rate)

Outcomes

* Number of complaints about streetlights per 1,000 population (Light Complaints)
* Number of streetlighting defects per 1,000 streetlights (Defects)
* Average number of days to replace a defective streetlight (Replacement)

PARKS AND RECREATION

Inputs

* Net annual operating and maintenance expenditure per capita (Net Costs)
* Gross annual operating and maintenance expenditure per capita (Gross Costs)
* FTEs per 1,000 population (Park Staff)


LIBRARIES

Inputs

* Total library expenditures per capita (Library Costs)

Efficiency

* Expenditures per registered borrower (Costs/Borrower)
* Annual circulation per FTE (Circulation)

Outcomes

* Annual circulation per capita (Circulation Rate)
* Total library activity per capita (Activity)
* Percentage of population in service area who are registered borrowers (Borrowers)

FIRE SERVICES AND EMERGENCY MEDICAL SERVICES (EMS)

Inputs

* Total operatingexpendituresper capita(Fire Costs)


* Total sworn personnel(Fire Staff)
* Total sworn and civilian FTEs per 1,000 population(All Fire Staff)

Fire Outcomes (CommunityRisk)

* Total residentialdwelling structurefire incidents per 1,000 total residentialdwelling


structures(House Fires)
* Total structurefile incidentsper 100,000 population(BuildingFires)
* Total fire incidentsper 100,000 population(All Fires)
* Civilian injuriesanddeathsin structurefires, all types of structures,per 100,000 popula-
tion (Deaths)

Fire Outcomes (Suppression)

* Average time from dispatch to arrival for fire suppression calls (Fire Response)
* Percentage of all structure fire incidents where flame spread was confined to the room of origin (Confined)
* Number of fire personnel injuries per 1,000 structure fire incidents (Injuries)

EMS Outcomes

* Average time from call entry to arrival for calls requiring a basic life support response (Basic Response)
* Average time from dispatch to arrival for calls requiring an advanced life support response (Advanced Response)
* Percentage of full cardiac arrest patients delivered to a medical facility with a pulse (Alive)
Source: ICMA (1998).


APPENDIX B
Citizen Satisfaction Items Used in Analysis

* Overall quality of city (City Quality)
* Overall value of city services (Service Value)
* Satisfaction with library services (Library Quality)
* Satisfaction with emergency medical services (EMS Quality)
* Satisfaction with fire services (Fire Service Quality)
* Satisfaction with police services (Police Quality)
* Satisfaction with garbage collection services (Trash Quality)
* Satisfaction with parks (Park Quality)
* Satisfaction with recreation facilities (Recreation Quality)
* Satisfaction with street maintenance (Street Repairs)
* Satisfaction with street lighting (Light Quality)
* Sense of safety during the day (Day Safety)
* Sense of safety at night (Night Safety)
* Satisfaction with traffic enforcement (Enforcement)
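
The satisfaction items above are the subjective counterparts of the objective measures in Appendix A. As a purely hypothetical illustration of how the two kinds of data can be linked at the jurisdiction level, the sketch below correlates an invented city-level satisfaction score with its corresponding performance measure; the authors' actual data and estimation procedure are described in the body of the article, not here.

```python
# Hypothetical sketch: correlate a satisfaction item from Appendix B with
# its corresponding performance measure from Appendix A across jurisdictions.
# All six city values are invented for demonstration.
from statistics import correlation  # Python 3.10+

police_quality = [3.4, 3.1, 3.8, 2.9, 3.6, 3.2]   # mean citizen rating, higher = better
police_response = [6.2, 7.5, 4.9, 8.1, 5.4, 6.8]  # avg. minutes to top-priority calls

# A negative coefficient would suggest that citizens in slower-response
# jurisdictions report lower satisfaction with police services.
r = correlation(police_quality, police_response)
print(f"Pearson r (Police Quality vs. Police Response): {r:.2f}")
```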

References
Affholter, D. (1994). Outcome monitoring. In J. Wholey, H. Hatry, & K. Newcomer (Eds.), Handbook of practical program evaluation (pp. 96-118). San Francisco: Jossey-Bass.
Ammons, D. (1996). Municipal benchmarks: Assessing local performance and establishing community standards. Thousand Oaks, CA: Sage.
Ammons, D. (1999). A proper mentality for benchmarking. Public Administration Review, 59(2), 105-109.
Brown, K., & Coulter, P. B. (1983). Subjective and objective measures of police service delivery. Public Administration Review, 43(1), 50-58.
Coe, C. (1999). Local government benchmarking: Lessons from two major multigovernment efforts. Public Administration Review, 59(2), 110-123.
Cohen, S., & Eimicke, W. (1998, May 9-13). The use of citizen surveys in measuring agency performance: The case of the New York City Department of Parks and Recreation. Paper presented at the American Society for Public Administration annual meeting, Seattle, WA.
Decker, L. L., & Manion, P. (1987). Performance measures in Phoenix: Trends and a new direction. National Civic Review, 76(2), 119-129.
Drebin, A. (1980, December). Criteria for performance measurement in state and local government. Governmental Finance, 9, 3-7.
Fisher, R. (1994, September). An overview of performance measurement. Public Management (ICMA), 76, 2-8.
Garsombke, H., & Schrad, J. (1999, February). Performance measurement systems: Results from a city and state survey. Government Finance Review, 15, 9-12.
Glaser, M. (1994). Tailoring performance measures to fit the organization: From generic to germane. Public Productivity and Management Review, 17(3), 303-319.
Grizzle, G. (1987). Linking performance to funding decisions: What is the budgeter's role? Public Productivity Review, 10(3), 33-44.
Hatry, H., Gerhart, C., & Marshall, M. (1994, September). Eleven ways to make performance measurement more useful to public managers. Public Management (ICMA), 76, 15-18.
Hatry, H. P. (1980). Performance measurement principles and techniques: An overview for local government. Public Productivity Review, 4(4), 312-339.
International City/County Management Association (ICMA). (1974). Measuring the effectiveness of basic municipal services. Washington, DC: Author.


International City/County Management Association (ICMA). (1996). Comparative performance measurement: FY 1995 data report. Washington, DC: Author.
International City/County Management Association (ICMA). (1997). Comparative performance measurement: FY 1996 data report. Washington, DC: Author.
International City/County Management Association (ICMA). (1998). Comparative performance measurement: FY 1997 data report. Washington, DC: Author.
Kopczynski, M., & Lombardo, M. (1999). Comparative performance measurement: Insights and lessons learned from a consortium effort. Public Administration Review, 59(2), 124-134.
Lineberry, R. L. (1977). Equality and urban policy: The distribution of municipal services. Beverly Hills, CA: Sage.
Lovrich, N. P., Jr., & Taylor, G. T. (1976). Neighborhood evaluation of local government services: A citizen survey approach. Urban Affairs Quarterly, 12(2), 197-221.
Lyons, W. E., Lowery, D., & DeHoog, R. H. (1992). The politics of dissatisfaction: Citizens, services, and urban institutions. Armonk, NY: M. E. Sharpe.
MacManus, S. (1984). Intergovernmental dimensions of urban fiscal stress. Publius, 14(1), 1-82.
Melkers, J., & Thomas, J. C. (1998). What do administrators think citizens think? Administrator predictions as an adjunct to citizen surveys. Public Administration Review, 58(4), 327-334.
Milbrath, L., & Goel, M. (1977). Political participation: How and why do people get involved in politics? (2nd ed.). Chicago: Rand McNally.
Miller, T. I., & Miller, M. A. (1991a). Citizen surveys: How to do them, how to use them, what they mean. Washington, DC: International City/County Management Association.
Miller, T. I., & Miller, M. A. (1991b). Standards of excellence: U.S. residents' evaluations of local government services. Public Administration Review, 51(6), 503-514.
Miller, T. I., & Miller, M. A. (1992). Assessing excellence poorly: The bottom line in local government. Journal of Policy Analysis and Management, 11(4), 612-623.
Nagel, J. (1987). Participation. Englewood Cliffs, NJ: Prentice-Hall.
Ostrom, V., Tiebout, C., & Warren, R. (1961, December). The organization of government in metropolitan areas: A theoretical inquiry. American Political Science Review, 55, 831-842.
Parks, R. B. (1984). Linking objective and subjective measures of performance. Public Administration Review, 44(2), 118-127.
Percy, S. L. (1986). In defense of citizen evaluations as performance measures. Urban Affairs Quarterly, 22(1), 66-83.
Rosentraub, M. S., & Thompson, L. (1981). The use of surveys of satisfaction for evaluations. Policy Studies Journal, 9(7), 990-999.
Sharp, E. (1990). Urban politics and administration: From service delivery to economic development. New York: Longman.
Stipak, B. (1977). Attitudes and belief systems concerning urban services. Public Opinion Quarterly, 41, 41-55.
Stipak, B. (1979). Citizen satisfaction with urban services: Potential misuse as a performance indicator. Public Administration Review, 39(1), 46-52.
Streib, G., & Poister, T. (1998). Performance measurement in municipal governments. In The 1998 municipal yearbook (pp. 9-15). Washington, DC: International City/County Management Association.
Swindell, D. (1999). City of Dayton, Ohio: Volume 1: Parks, recreation, and cultural affairs survey, 1998. Dayton, OH: Center for Urban and Public Affairs.
Thompson, L. (1997). Citizen attitudes about service delivery modes. Journal of Urban Affairs, 19(3), 291-302.
Tiebout, C. (1956). A pure theory of local expenditures. Journal of Political Economy, 64, 416-424.
Verba, S., & Nie, N. (1972). Participation in America. New York: Harper and Row.
Verba, S., Schlozman, K., Brady, H., & Nie, N. (1993). Citizen activity: Who participates? What do they say? American Political Science Review, 87(2), 303-318.
Walters, J. (1998). Measuring up: Governing's guide to performance measurement for geniuses [and other public managers]. Washington, DC: Governing Books.


Watson, D. J., Juster, R. J., & Johnson, G. W. (1991). Institutionalized use of citizen surveys in the budgetary and policy-making processes: A small city case study. Public Administration Review, 51(3), 232-239.
Wholey, J., & Hatry, H. (1992). The case for performance monitoring. Public Administration Review, 52(6), 604-610.
Wildavsky, A. (1979). Speaking truth to power: The art and craft of policy analysis. Boston: Little, Brown.

David Swindell is an assistant professor and director of the Master of Public Administration program at Clemson University. His research focuses on citizen satisfaction surveying and the use of neighborhood organizations as a mechanism for delivering urban public services. His work has been published in the Journal of Urban Affairs, Public Administration Review, Nonprofit and Voluntary Sector Quarterly, and Economic Development Quarterly. Contact: dswinde@clemson.edu

Janet M. Kelly is an assistant professor of political science at the University of Tennessee, Knoxville, where she teaches public finance and intergovernmental relations. She has published on the topics of state and federal mandates, devolution, and municipal service performance assessment in a variety of academic and professional journals. Contact: jmkelly@utk.edu

