
RESEARCH

Measuring Service Quality: SERVQUAL vs. SERVPERF Scales

Sanjay K Jain and Garima Gupta

Executive Summary

Quality has come to be recognized as a strategic tool for attaining operational efficiency
and improved business performance. This is true for both the goods and services sectors.
However, the problem with management of service quality in service firms is that quality
is not easily identifiable and measurable due to inherent characteristics of services which
make them different from goods. Various definitions of the term ‘service quality’ have
been proposed in the past and, based on different definitions, different scales for
measuring service quality have been put forward. SERVQUAL and SERVPERF constitute
two major service quality measurement scales. The consensus, however, continues to
elude till date as to which one is superior.
An ideal service quality scale is one that is not only psychometrically sound but is also
diagnostically robust enough to provide insights to the managers for corrective actions
in the event of quality shortfalls. Empirical studies evaluating validity, reliability, and
methodological soundness of service quality scales clearly point to the superiority of the
SERVPERF scale. The diagnostic ability of the scales, however, has not been explicitly
explicated and empirically verified in the past.
The present study aims at filling this void in service quality literature. It assesses the
diagnostic power of the two service quality scales. Validity and methodological
soundness of these scales have also been probed in the Indian context — an aspect which
has so far remained neglected due to preoccupation of the past studies with service
industries in the developed world.
Using data collected through a survey of consumers of fast food restaurants in Delhi,
the study finds the SERVPERF scale to be providing a more convergent and discriminant-
valid explanation of service quality construct. However, the scale is found deficient in
its diagnostic power. It is the SERVQUAL scale which outperforms the SERVPERF scale
by virtue of possessing higher diagnostic power to pinpoint areas for managerial
interventions in the event of service quality shortfalls.
The major managerial implications of the study are:
• Because of its psychometric soundness and greater instrument parsimoniousness,
one should employ the SERVPERF scale for assessing overall service quality of a
firm. The SERVPERF scale should also be the preferred research instrument when
one is interested in undertaking service quality comparisons across service
industries.
• On the other hand, when the research objective is to identify areas relating to
service quality shortfalls for possible intervention by the managers, the SERVQUAL
scale needs to be preferred because of its superior diagnostic power.
However, one serious problem with the SERVQUAL scale is that it entails a gigantic data collection task. Employing a lengthy questionnaire, one is required to collect data about consumers' expectations as well as perceptions of a firm's performance on each of the 22 service quality scale attributes. Addition of importance weights can further add to the diagnostic power of the SERVQUAL scale, but the choice needs to be weighed against the additional task of data collection. Collecting data on importance scores relating to each of the 22 service attributes is indeed a major deterrent. However, alternative, less tedious approaches, discussed towards the end of the paper, can be employed by the researchers to obviate the data collection task.

KEY WORDS

Service Quality
Measurement of Service Quality
Service Quality Scale
Scale Validity and Reliability
Diagnostic Ability of Scale

Quality has come to be recognized as a strategic tool for attaining operational efficiency and improved business performance (Anderson and Zeithaml, 1984; Babakus and Boller, 1992; Garvin, 1983; Phillips, Chang and Buzzell, 1983). This is true for the services sector too. Several authors have discussed the unique importance of quality to service firms (e.g., Normann, 1984; Shaw, 1978) and have demonstrated its positive relationship with profits, increased market share, return on investment, customer satisfaction, and future purchase intentions (Anderson, Fornell and Lehmann, 1994; Boulding et al., 1993; Buzzell and Gale, 1987; Rust and Oliver, 1994). One obvious conclusion of these studies is that firms with superior quality products outperform those marketing inferior quality products.

Notwithstanding the recognized importance of service quality, there have been methodological issues and application problems with regard to its operationalization. Quality in the context of service industries has been conceptualized differently and, based on different conceptualizations, alternative scales have been proposed for service quality measurement (see, for instance, Brady, Cronin and Brand, 2002; Cronin and Taylor, 1992, 1994; Dabholkar, Shepherd and Thorpe, 2000; Parasuraman, Zeithaml and Berry, 1985, 1988). Despite considerable work undertaken in the area, there is no consensus yet as to which one of the measurement scales is robust enough for measuring and comparing service quality. One major problem with past studies has been their preoccupation with assessing psychometric and methodological soundness of service scales, and that too in the context of service industries in the developed countries. Virtually no empirical efforts have been made to evaluate the diagnostic ability of the scales in providing managerial insights for corrective actions in the event of quality shortfalls. Furthermore, little work has been done to examine the applicability of these scales to the service industries in developing countries.

This paper, therefore, is an attempt to fill this existing void in the services quality literature. Based on a survey of consumers of fast food restaurants in Delhi, this paper assesses the diagnostic usefulness as well as the psychometric and methodological soundness of the two widely advocated service quality scales, viz., SERVQUAL and SERVPERF.

SERVICE QUALITY: CONCEPTUALIZATION AND OPERATIONALIZATION

Quality has been defined differently by different authors. Some prominent definitions include 'conformance to requirements' (Crosby, 1984), 'fitness for use' (Juran, 1988) or 'one that satisfies the customer' (Eiglier and Langeard, 1987). As per the Japanese production philosophy, quality implies 'zero defects' in the firm's offerings.

Though initial efforts in defining and measuring service quality emanated largely from the goods sector, a solid foundation for research work in the area was laid down in the mid-eighties by Parasuraman, Zeithaml and Berry (1985). They were amongst the earliest researchers to emphatically point out that the concept of quality prevalent in the goods sector is not extendable to the services sector. Being inherently and essentially intangible, heterogeneous, perishable, and entailing simultaneity and inseparability of production and consumption, services require a distinct framework for quality explication and measurement. As against the goods sector where tangible cues exist to enable consumers to evaluate product quality, quality in the service context is explicated in terms of parameters that largely come under the domain of 'experience' and 'credence' properties and are as such difficult to measure and evaluate (Parasuraman, Zeithaml and Berry, 1985; Zeithaml and Bitner, 2001).

One major contribution of Parasuraman, Zeithaml and Berry (1988) was to provide a terse definition of service quality. They defined service quality as 'a global judgment, or attitude, relating to the superiority of the service', and explicated it as involving evaluations of the outcome (i.e., what the customer actually receives from service) and the process of the service act (i.e., the manner in which service is delivered). In line with the propositions put forward by Gronroos (1982) and Smith and Houston (1982), Parasuraman, Zeithaml and Berry (1985, 1988) posited and operationalized service quality as a difference between consumer expectations of 'what they want' and their perceptions of 'what they get.' Based on this conceptualization and operationalization, they proposed a service quality measurement scale called 'SERVQUAL.' The SERVQUAL scale constitutes an important landmark in the service quality literature and has been extensively applied in different service settings.

Over time, a few variants of the scale have also been proposed. The 'SERVPERF' scale is one such scale that has been put forward by Cronin and Taylor (1992) in the early nineties. Numerous studies have been undertaken to assess the superiority of the two scales, but consensus continues to elude as to which one is a better scale. The following two sections provide an overview of the operationalization and methodological issues concerning these two scales.

SERVQUAL Scale

The foundation for the SERVQUAL scale is the gap model proposed by Parasuraman, Zeithaml and Berry (1985, 1988). With roots in the disconfirmation paradigm,1 the gap model maintains that satisfaction is related to the size and direction of disconfirmation of a person's experience vis-à-vis his/her initial expectations (Churchill and Surprenant, 1982; Parasuraman, Zeithaml and Berry, 1985; Smith and Houston, 1982). As a gap or difference between customer 'expectations' and 'perceptions,' service quality is viewed as lying along a continuum ranging from 'ideal quality' to 'totally unacceptable quality,' with some points along the continuum representing satisfactory quality. Parasuraman, Zeithaml and Berry (1988) held that when perceived or experienced service is less than expected service, it implies less than satisfactory service quality; but when perceived service exceeds expected service, the obvious inference is that service quality is more than satisfactory. They further posited that while a negative discrepancy between perceptions and expectations — a 'performance-gap' as they call it — causes dissatisfaction, a positive discrepancy leads to consumer delight.

Based on their empirical work, they identified a set of 22 variables/items tapping five different dimensions of the service quality construct.2 Since they operationalized service quality as being a gap between customers' expectations and perceptions of performance on these variables, their service quality measurement scale is comprised of a total of 44 items (22 for expectations and 22 for perceptions). Customers' responses to their expectations and perceptions are obtained on a 7-point Likert scale and are compared to arrive at (P-E) gap scores. The higher (more positive) the perception minus expectation score, the higher is perceived to be the level of service quality. In equation form, their operationalization of service quality can be expressed as follows:

SQ_i = \sum_{j=1}^{k} (P_{ij} - E_{ij})        (1)

where:
SQ_i = perceived service quality of individual 'i'
k = number of service attributes/items
P_{ij} = perception of individual 'i' with respect to performance of a service firm on attribute 'j'
E_{ij} = service quality expectation for attribute 'j' that is the relevant norm for individual 'i'

The importance of Parasuraman, Zeithaml and Berry's (1988) scale is evident by its application in a number of empirical studies across varied service settings (Brown and Swartz, 1989; Carman, 1990; Kassim and Bojei, 2002; Lewis, 1987, 1991; Pitt, Oosthuizen and Morris, 1992; Witkowski and Wolfinbarger, 2002; Young, Cunningham and Lee, 1994). Despite its extensive application, the SERVQUAL scale has been criticized on various conceptual and operational grounds. Some major objections against the scale relate to the use of (P-E) gap scores, the length of the questionnaire, the predictive power of the instrument, and the validity of the five-dimension structure (e.g., Babakus and Boller, 1992; Cronin and Taylor, 1992; Dabholkar, Shepherd and Thorpe, 2000; Teas, 1993, 1994). Since this paper does not purport to examine the dimensionality issue, we shall confine ourselves to a discussion of only the first three problem areas.

Several issues have been raised with regard to the use of (P-E) gap scores, i.e., the disconfirmation model. Most studies have found a poor fit between service quality as measured through Parasuraman, Zeithaml and Berry's (1988) scale and the overall service quality measured directly through a single-item scale (e.g., Babakus and Boller, 1992; Babakus and Mangold, 1989; Carman, 1990; Finn and Lamb, 1991; Spreng and Singh, 1993). Though the use of gap scores is intuitively appealing and conceptually sensible, the ability of these scores to provide additional information beyond that already contained in the perception component of the service quality scale is under doubt (Babakus and Boller, 1992; Iacobucci, Grayson and Ostrom, 1994). Pointing to conceptual, theoretical, and measurement problems associated with the disconfirmation model, Teas (1993, 1994) observed that a (P-E) gap of magnitude '-1' can be produced in six ways: P=1, E=2; P=2, E=3; P=3, E=4; P=4, E=5; P=5, E=6; and P=6, E=7, and these tied gaps cannot be construed as implying equal perceived service quality shortfalls.
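To make the operationalization concrete, the following is a minimal sketch (in Python, not part of the original study) of how the unweighted SERVQUAL score in equation (1) can be computed from raw ratings. The respondent count, the 7-point scale, and the data are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
n_respondents, k = 4, 22                     # k = number of service attributes/items

# Hypothetical 7-point Likert ratings for expectations (E) and perceptions (P)
E = rng.integers(1, 8, size=(n_respondents, k)).astype(float)
P = rng.integers(1, 8, size=(n_respondents, k)).astype(float)

# Equation (1): SQ_i is the sum over items of (P_ij - E_ij); more negative
# totals indicate larger perceived quality shortfalls for respondent i
sq_servqual = (P - E).sum(axis=1)
print(sq_servqual)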

In a similar vein, the empirical study by Peter, Churchill and Brown (1993) found difference scores to be beset with psychometric problems and, therefore, cautioned against the use of (P-E) scores.

The validity of the (P-E) measurement framework has also come under attack due to problems with the conceptualization and measurement of the expectation component of the SERVQUAL scale. While perception (P) is definable and measurable in a straightforward manner as the consumer's belief about the service as experienced, expectation (E) is subject to multiple interpretations and as such has been operationalized differently by different authors/researchers (e.g., Babakus and Inhofe, 1991; Brown and Swartz, 1989; Dabholkar et al., 2000; Gronroos, 1990; Teas, 1993, 1994). Initially, Parasuraman, Zeithaml and Berry (1985, 1988) defined expectation close on the lines of Miller (1977) as 'desires or wants of consumers,' i.e., what they feel a service provider should offer rather than would offer. This conceptualization was based on the reasoning that the term 'expectation' has been used differently in the service quality literature than in the customer satisfaction literature, where it is defined as a prediction of future events, i.e., what customers feel a service provider would offer. Parasuraman, Berry and Zeithaml (1990) labelled this 'should be' expectation as 'normative expectation,' and posited it as being similar to 'ideal expectation' (Zeithaml and Parasuraman, 1991). Later, realizing the problem with this interpretation, they themselves proposed a revised expectation (E*) measure, i.e., what the customer would expect from 'excellent' service (Parasuraman, Zeithaml and Berry, 1994).

It is because of the vagueness of the expectation concept that some researchers like Babakus and Boller (1992), Bolton and Drew (1991a), Brown, Churchill and Peter (1993), and Carman (1990) stressed the need for developing a methodologically more precise scale. The SERVPERF scale — developed by Cronin and Taylor (1992) — is one of the important variants of the SERVQUAL scale. For, being based on the perception component alone, it has been conceptually and methodologically posited as a better scale than the SERVQUAL scale which has its origin in the disconfirmation paradigm.

SERVPERF Scale

Cronin and Taylor (1992) were amongst the researchers who levelled maximum attack on the SERVQUAL scale. They questioned the conceptual basis of the SERVQUAL scale and found it confusing with service satisfaction. They, therefore, opined that the expectation (E) component of SERVQUAL be discarded and instead the performance (P) component alone be used. They proposed what is referred to as the 'SERVPERF' scale. Besides theoretical arguments, Cronin and Taylor (1992) provided empirical evidence across four industries (namely banks, pest control, dry cleaning, and fast food) to corroborate the superiority of their 'performance-only' instrument over the disconfirmation-based SERVQUAL scale.

Being a variant of the SERVQUAL scale and containing the perceived performance component alone, the 'performance only' scale is comprised of only 22 items. A higher perceived performance implies higher service quality. In equation form, it can be expressed as:

SQ_i = \sum_{j=1}^{k} P_{ij}        (2)

where:
SQ_i = perceived service quality of individual 'i'
k = number of attributes/items
P_{ij} = perception of individual 'i' with respect to performance of a service firm on attribute 'j'

Methodologically, the SERVPERF scale represents a marked improvement over the SERVQUAL scale. Not only is the scale more efficient in reducing the number of items to be measured by 50 per cent, it has also been empirically found superior to the SERVQUAL scale for being able to explain greater variance in the overall service quality measured through the use of a single-item scale. This explains the considerable support that has emerged over time in favour of the SERVPERF scale (Babakus and Boller, 1992; Bolton and Drew, 1991b; Boulding et al., 1993; Churchill and Surprenant, 1982; Gotlieb, Grewal and Brown, 1994; Hartline and Ferrell, 1996; Mazis, Antola and Klippel, 1975; Woodruff, Cadotte and Jenkins, 1983). Though still lagging behind the SERVQUAL scale in application, researchers have increasingly started making use of the performance-only measure of service quality (Andaleeb and Basu, 1994; Babakus and Boller, 1992; Boulding et al., 1993; Brady et al., 2002; Cronin et al., 2000; Cronin and Taylor, 1992, 1994). Also, when applied in conjunction with the SERVQUAL scale, the SERVPERF measure has outperformed the SERVQUAL scale (Babakus and Boller, 1992; Brady, Cronin and Brand, 2002; Cronin and Taylor, 1992; Dabholkar et al., 2000).
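By way of contrast with equation (1), the following minimal sketch (hypothetical data, not from the study) computes the performance-only score in equation (2). The per-item mean shown alongside is an equivalent summary that yields the same ordering of respondents.

import numpy as np

rng = np.random.default_rng(1)
P = rng.integers(1, 8, size=(4, 22)).astype(float)   # hypothetical perception ratings

sq_servperf = P.sum(axis=1)          # equation (2): SQ_i = sum over items of P_ij
sq_servperf_mean = P.mean(axis=1)    # per-item mean; same respondent ranking as the sum
print(sq_servperf, sq_servperf_mean)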

Seeing its superiority, even Zeithaml (one of the founders of the SERVQUAL scale) in a recent study observed that "…Our results are incompatible with both the one-dimensional view of expectations and the gap formation for service quality. Instead, we find that perceived quality is directly influenced only by perceptions (of performance)" (Boulding et al., 1993). This admittance cogently lends testimony to the superiority of the SERVPERF scale.

Service Quality Measurement: Unweighted and Weighted Paradigms

The significance of the various quality attributes used in the service quality scales can differ considerably across different types of services and service customers. Security, for instance, might be a prime determinant of quality for bank customers but may not mean much to customers of a beauty parlour. Since service quality attributes are not expected to be equally important across service industries, it has been suggested to include importance weights in the service quality measurement scales (Cronin and Taylor, 1992; Parasuraman, Zeithaml and Berry, 1985, 1988; Parasuraman, Berry and Zeithaml, 1991; Zeithaml, Parasuraman and Berry, 1990). While the unweighted measures of the SERVQUAL and the SERVPERF scales have been described above vide equations (1) and (2), the weighted versions of the SERVQUAL and the SERVPERF scales as proposed by Cronin and Taylor (1992) are as follows:

SQ_i = \sum_{j=1}^{k} I_{ij} (P_{ij} - E_{ij})        (3)

SQ_i = \sum_{j=1}^{k} I_{ij} P_{ij}        (4)

where I_{ij} is the weighting factor, i.e., the importance of attribute 'j' to an individual 'i.'

Though, on theoretical grounds, the addition of weights makes sense (Bolton and Drew, 1991a), not much improvement in the measurement potency of either scale has been reported after the inclusion of importance weights. Between the weighted versions of the two scales, the weighted SERVPERF scale has been theoretically posited to be superior to the weighted SERVQUAL scale (Bolton and Drew, 1991a).
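A minimal sketch (illustrative data and scale choices, not the study's) of the weighted scores in equations (3) and (4) follows; it simply applies the importance weights element-wise before summing.

import numpy as np

rng = np.random.default_rng(2)
n, k = 4, 22
P = rng.integers(1, 6, size=(n, k)).astype(float)   # perceptions (5-point, illustrative)
E = rng.integers(1, 6, size=(n, k)).astype(float)   # expectations
I = rng.integers(1, 5, size=(n, k)).astype(float)   # importance weights (4-point, illustrative)

sq_weighted_servqual = (I * (P - E)).sum(axis=1)    # equation (3)
sq_weighted_servperf = (I * P).sum(axis=1)          # equation (4)
print(sq_weighted_servqual, sq_weighted_servperf)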

As pointed out earlier, one major problem with the past studies has been their preoccupation with the assessment of the psychometric and methodological soundness of the two scales. The diagnostic ability of the scales has not been explicitly explicated and empirically investigated. The psychometric and methodological aspects of a scale are no doubt important considerations, but one cannot overlook the assessment of the diagnostic power of the scales. From the strategy formulation point of view, it is rather the diagnostic ability of the scale that can help managers in ascertaining where the quality shortfalls prevail and what possibly can be done to close down the gaps.

METHODOLOGY

The present study is an attempt to make a comparative assessment of the SERVQUAL and the SERVPERF scales in the Indian context in terms of their validity, ability to explain variance in the overall service quality, power to distinguish among service objects/firms, parsimony in data collection, and, more importantly, their diagnostic ability to provide insights for managerial interventions in case of quality shortfalls. Data for making comparisons among the unweighted and weighted versions of the two scales were collected through a survey of the consumers of fast food restaurants in Delhi. The fast food restaurants were chosen due to their growing familiarity and popularity with the respondents under study. Another reason was that fast food restaurant services fall midway on the 'pure goods - pure service' continuum (Kotler, 2003); seldom are the extremes found in most service businesses. For ensuring a greater generalizability of the service quality scales, it was considered desirable to select a service offering that is comprised of both a good (i.e., food) and a service (i.e., preparation and delivery of food) component. Eight fast food restaurants (Nirulas, Wimpy, Dominos, McDonald, Pizza Hut, Haldiram, Bikanerwala, and Rameshwar), rated as the more familiar and patronized restaurants in different parts of Delhi in the pilot survey, were selected.

Using the personal survey method, 300 students and lecturers of different colleges and departments of the University of Delhi spread all over the city were approached. The field work was done during December 2001-March 2002. After repeated follow-ups, only 200 duly filled-in questionnaires could be collected, constituting a 67 per cent response rate. The sample was deliberately restricted to students and lecturers of Delhi University and was equally divided between these two groups. The idea underlying the selection of these two categories of respondents was their easy accessibility. Quota sampling was employed for selecting respondents from these two groups. Each respondent was asked to give information about two restaurants — one 'most frequently visited' and one 'least frequently visited.' At the analysis stage, the collected data were pooled together, thus constituting a total of 400 responses.

Parasuraman, Zeithaml and Berry's (1988) 22-item SERVQUAL instrument was employed for collecting the data regarding the respondents' expectations, perceptions, and importance weights of various service attributes. Wherever required, slight modifications in the wording of scale items were made to make the questionnaire understandable to the surveyed respondents. Some of the items were negatively worded to avoid the problem of routine ticking of items by the respondents. In addition to the above mentioned 66 scale items (22 each for expectations, perceptions, and importance rating), the questionnaire included items relating to overall quality, overall satisfaction, and behavioural intentions of the consumers. These items were included to assess the validity of the multi-item service quality scales used at our end. The single-item direct measures of overall service quality, namely, 'overall quality of these restaurants is excellent,' and overall satisfaction, namely, 'overall I feel satisfied with the services provided,' were used. Cronin and Taylor (1992) have used similar measures for assessing the validity of multi-item service quality scales. Behavioural intentions were measured with the help of a 3-item scale as suggested by Zeithaml and Parasuraman (1996) and later used by Brady and Robertson (2001) and Brady, Cronin and Brand (2002).3

Excepting importance weights and behavioural items, responses to all the scale items were obtained on a 5-point Likert scale ranging from '5' for 'strongly agree' to '1' for 'strongly disagree.' A 4-point Likert scale anchored on '4' for 'very important' and '1' for 'not important' was used for measuring the importance weight of each item. Responses to behavioural intention items were obtained using a 5-point Likert scale ranging from '1' for 'very low' to '5' for 'very high.'

FINDINGS AND DISCUSSION

Validity of Alternative Measurement Scales

As suggested by Churchill (1979), the convergent and discriminant validity of the four measurement scales was assessed by computing correlation coefficients for different pairs of scales. The results are summarized in Table 1. The presence of a high correlation between alternate measures of service quality is a pointer to the convergent validity of all the four scales. The SERVPERF scale is, however, found to have a stronger correlation with other similar measures, viz., the SERVQUAL and importance weighted service quality measures.

A higher correlation found between two different measures of the same variable than that found between the measure of a variable and another variable implies the presence of discriminant validity (Churchill, 1979) in respect of all the four multi-item service quality scales. Once again, it is the SERVPERF scale which is found to possess the highest discriminant validity.

SERVPERF is, thus, found to provide a more convergent as well as discriminant valid explanation of service quality.

Table 1: Alternate Service Quality Scales and Other Measures — Correlation Coefficients

                            SERVQUAL   SERVPERF   Weighted          Weighted        Overall    Overall        Behavioural
                            (P-E)      (P)        SERVQUAL I(P-E)   SERVPERF I(P)   Service    Satisfaction   Intentions
                                                                                    Quality
SERVQUAL (P-E)              1.000
SERVPERF (P)                .735       -
Weighted SERVQUAL I(P-E)    .995       .767       -
Weighted SERVPERF I(P)      .759       .993       .772              -
Overall service quality     .416       .544       .399              .531            -
Overall satisfaction        .420       .557       .425              .554            .724       -
Behavioural intentions      .293       .440       .308              .459            .570       .528           1.000
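The validity check behind Table 1 amounts to inspecting a correlation matrix of the four multi-item scale scores and the single-item criterion. The sketch below uses simulated ratings, so the variable names, scales, and figures are assumptions rather than the study's data; only the computation is of interest.

import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
n, k = 400, 22
P = rng.integers(1, 6, size=(n, k)).astype(float)            # 5-point perception ratings
E = rng.integers(1, 6, size=(n, k)).astype(float)            # 5-point expectation ratings
I = rng.integers(1, 5, size=(n, k)).astype(float)            # 4-point importance ratings
overall_quality = rng.integers(1, 6, size=n).astype(float)   # single-item overall quality

scores = pd.DataFrame({
    "SERVQUAL (P-E)":           (P - E).sum(axis=1),
    "SERVPERF (P)":             P.sum(axis=1),
    "Weighted SERVQUAL I(P-E)": (I * (P - E)).sum(axis=1),
    "Weighted SERVPERF I(P)":   (I * P).sum(axis=1),
    "Overall service quality":  overall_quality,
})
print(scores.corr().round(3))   # counterpart of the coefficients reported in Table 1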

Explanatory Power of Alternative Measurement Scales

The ability of a scale to explain the variation in the overall service quality (measured directly through a single-item scale) was assessed by regressing respondents' perceptions of overall service quality on the corresponding multi-item service quality scale. Adjusted R² values reported in Table 2 clearly point to the superiority of the SERVPERF scale for being able to explain a greater proportion of variance (0.294) in the overall service quality than is the case with the other scales. Addition of importance weights is not able to enhance the explanatory power of the SERVPERF and the SERVQUAL scales. The results of the present study are quite in conformity with those of Cronin and Taylor (1992) who also found addition of importance weights not improving the predictive ability of either scale.

Table 2: Explanatory Power of Alternative Service Scales — Regression Results

Measurement Scale (Independent Variable)    R²      Adjusted R²
SERVQUAL (P-E)                              .173    .171
SERVPERF (P)                                .296    .294
Weighted SERVQUAL I(P-E)                    .159    .156
Weighted SERVPERF I(P)                      .282    .280

Note: Dependent variable = Overall service quality.

Discriminatory Power of Alternative Measurement Scales

One basic use of a service quality scale is to gain insight as to where a particular service firm stands vis-à-vis others in the market. The scale that can best differentiate among service firms obviously represents a better choice. Mean quality scores for each restaurant were computed and compared with the help of the ANOVA technique to delve into the discriminatory power of the alternative measurement scales. The results presented in Table 3 show significant differences (p < .000) existing among mean service quality scores for each of the alternate scales. The results are quite in line with those obtained by using single-item measures of service quality. The results thus establish the ability of all the four scales to discriminate among the objects (i.e., restaurants), and as such imply that any one of the scales can be used for making quality comparisons across service firms.

Table 3: Discriminatory Power of Alternate Scales — ANOVA Results

Restaurant                     SERVPERF (P)    Weighted SERVPERF I(P)    SERVQUAL (P-E)    Weighted SERVQUAL I(P-E)    Overall Service Quality
Nirulas                        3.63            3.67                      -0.28             -0.31                       4.04
Wimpy's                        3.41            3.44                      -0.64             -0.58                       3.46
Dominos                        3.40            3.50                      -0.45             -0.41                       3.52
McDonalds                      3.72            3.78                      -0.21             -0.20                       4.23
Pizza Hut                      3.64            3.72                      -0.24             -0.29                       4.00
Haldiram                       3.55            3.51                      -0.40             -0.50                       3.72
Bikanerwala                    3.38            3.41                      -0.57             -0.62                       3.65
Rameshwar                      3.19            3.30                      -0.58             -0.58                       3.19
Overall mean                   3.55            3.61                      -0.37             -0.38                       3.86
F-value (significance level)   6.60 (p<.000)   5.31 (p<.000)             4.25 (p<.000)     3.40 (p<.002)               6.77 (p<.000)

Parsimony in Data Collection

Often, ease of data collection is a major consideration governing the choice of measurement scales for studies in the business context. When examined from this perspective, the unweighted performance-only scale turns out to be the best choice as it requires much less informational input than required by the other scales. While the SERVQUAL and the weighted service quality scales (both SERVQUAL and SERVPERF) require data on customer perceptions as well as customer expectations and/or importance perceptions, the performance-only measure requires data on customers' perceptions alone, thus considerably obviating the data collection task. While the number of items for which data are required is only 22 for the SERVPERF scale, it is 44 and 66 for the SERVQUAL and the weighted SERVQUAL scales respectively (Table 4). Besides making the questionnaire lengthy and compounding the data editing and coding tasks, the requirement of additional data can take its toll on the response rate too. This study is a case in point: seeing a lengthy questionnaire, many respondents hesitated to fill it up and returned it on the spot.

Table 4: Number of Items Contained in Service Quality Measurement Scales

Scale                        Number of Items
SERVQUAL (P-E)               44
SERVPERF (P)                 22
Weighted SERVQUAL I(P-E)     66
Weighted SERVPERF I(P)       44

Diagnostic Ability of Scales in Providing Insights for Managerial Intervention and Strategy Formulation

A major reason underlying the use of a multi-item scale vis-à-vis its single-item counterpart is its ability to provide information about the attributes on which a given firm is deficient in providing service quality and thus needs to evolve strategies to remove such quality shortfalls with a view to enhancing customer satisfaction in future. When judged from this perspective, all the four service quality scales, being multi-item scales, appear capable of performing the task. But, unfortunately, the scales differ considerably in terms of the areas identified for improvement as well as the order in which the identified areas need to be taken up for quality improvement. This asymmetrical power of the four scales can be probed into by taking up four typical service attributes, namely, use of up-to-date equipment and technology, prompt response, accuracy of records, and convenience of operating hours, tapped in the study vide scale items 1, 11, 13, and 22 respectively. The performance of a restaurant (name disguised) on these four scale items is reported in Table 5.

Table 5: Areas Suggested for Quality Improvement by Alternate Service Quality Scales

Item   Item Description                          P      Max    P-M     I      I(P)    E      P-E     I(P-E)
(1)    (2)                                       (3)    (4)    (5)     (6)    (7)     (8)    (9)     (10)
1      Use of up-to-date equipment
       and technology                            3.97   5.00   -1.03   4.28   16.99   4.37   -0.40   -1.71
11     Prompt response                           3.08   5.00   -1.92   4.09   12.60   3.57   -0.49   -6.01
13     Accuracy of records                       3.51   5.00   -1.49   3.67   12.88   4.05   -0.54   -1.98
22     Operating hours convenient to all         3.31   5.00   -1.69   4.05   13.37   3.04   0.27    1.09

Columns: (3) performance (P) or SERVPERF score; (4) maximum attainable score; (5) gap (P-M); (6) importance score (I); (7) I(P) or weighted SERVPERF score; (8) expectation score (E); (9) gap (P-E) or SERVQUAL score; (10) I(P-E) or weighted SERVQUAL score.

Action areas in order of priority: by column (3) or (5): 11, 22, 13, 1; by column (7): 11, 13, 22, 1; by column (9): 13, 11, 1; by column (10): 11, 13, 1.

Note: Customer expectations, perceptions, and importance for each service quality item were measured on a 5-point Likert scale ranging from 5 for 'strongly agree' to 1 for 'strongly disagree.'

An analysis of Table 5 reveals the following findings. When measured with the help of the 'performance-only' (i.e., SERVPERF) scale, the scores in column 3 show that the restaurant is providing quality in respect of service items 1, 13, and 22. The mean scores in the range of 3.31 to 3.97 for these items are a pointer to this inference. The consumers appear indifferent to the provision of service quality in respect of item 11. However, when compared with the maximum attainable value of 5 on a 5-point scale, the restaurant under consideration seems deficient in respect of all the four service areas (column 5), implying managerial intervention in all these areas. In the event of time and resource constraints, however, the management needs to prioritize the quality deficient areas. This can be done in two ways: either on the basis of the magnitude of the performance scores (scores lower in magnitude pointing to higher priority for intervention) or on the basis of the magnitude of the implied gap scores between perceived performance (P) and the maximally attainable score of 5 (with higher gaps implying immediate interventions). Judged either way, the service areas in the descending order of intervention urgency are 11, 22, 13, and 1 (see columns 3 and 5). The management can pick up one or a few areas for managerial intervention depending upon the availability of time and financial resources at its disposal. If importance scores are also taken into account, as is the case with the weighted SERVPERF scale, the order of priority gets changed to 11, 13, 22, and 1.

In the case of the SERVQUAL scale, requiring comparison of customers' perceptions of service performance (P) with their expectations (E), the areas with zero or positive gaps imply either customer satisfaction or delight with the service provision and as such do not call for any managerial intervention. But, in the areas where the gaps are negative, the management needs to do something urgently for improving the quality. When viewed from this perspective, only three service areas, namely, 13, 11, and 1, having negative gaps, call for managerial intervention, and in that order as determined by the magnitude of the gap scores shown in column 9 of Table 5. Taking into account the importance scores also, as is the case with the weighted SERVQUAL scale, the order of priority areas gets changed to 11, 13, and 1 (see column 10).

We thus find that though all the four multi-item scales possess diagnostic power to suggest areas for managerial actions, the four scales differ considerably in terms of the areas suggested as well as the order in which the actions in the identified areas are called for.
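The prioritization logic discussed around Table 5 can be replayed directly from the item means reported there. The sketch below is illustrative only and limited to the four items shown in Table 5; recomputing the weighted figures from item means gives magnitudes that differ somewhat from the respondent-level values printed in the table, but it reproduces the same priority orders.

import pandas as pd

items = pd.DataFrame({
    "item": [1, 11, 13, 22],
    "P":    [3.97, 3.08, 3.51, 3.31],   # perceived performance
    "I":    [4.28, 4.09, 3.67, 4.05],   # importance
    "E":    [4.37, 3.57, 4.05, 3.04],   # expectation
})
items["P-M"]    = items["P"] - 5.0          # gap to the maximum attainable score
items["I(P)"]   = items["I"] * items["P"]
items["P-E"]    = items["P"] - items["E"]
items["I(P-E)"] = items["I"] * items["P-E"]

def priority(df, col, negative_gaps_only=False):
    # Rank items from most to least deficient; optionally keep only negative gaps,
    # since zero or positive (P-E) gaps call for no intervention
    d = df[df[col] < 0] if negative_gaps_only else df
    return d.sort_values(col)["item"].tolist()

print("SERVPERF (P-M):          ", priority(items, "P-M"))
print("Weighted SERVPERF I(P):  ", priority(items, "I(P)"))
print("SERVQUAL (P-E):          ", priority(items, "P-E", negative_gaps_only=True))
print("Weighted SERVQUAL I(P-E):", priority(items, "I(P-E)", negative_gaps_only=True))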

The moot point, therefore, is to determine which scale provides a more pragmatic and managerially useful diagnosis. From a closer perusal of the data provided in Table 5, it may be observed that the problem of different areas and different orderings suggested by the four scales arises basically due to the different reference points used, explicitly or implicitly, for computing the service quality shortfalls. While it is the maximally attainable score of 5 on a 5-point scale that presumably serves as the reference point in the case of the SERVPERF scale, it is the customer expectation for each of the service areas that acts as the yardstick under the SERVQUAL scale. Ideally speaking, the management should strive for attaining the maximally attainable performance level (a score of 5 in the case of a 5-point scale) in all those service areas where the performance level is less than 5. This is exactly what the SERVPERF scale-based analysis purports to do. However, this is tenable only in situations where there are no time and resource constraints and it can be assumed that all the areas are equally important to customers and that they want the maximally possible quality level in respect of each of the service attributes. But, in a situation where the management works under resource constraints (this usually is the case) and consumers do not want the maximum possible service quality provision equally in all areas, the management needs to identify areas which are more critical from the consumers' point of view and call for immediate attention. This is exactly what the SERVQUAL scale does by pointing to areas where the firm's performance is below the customers' expectations.

Between the two scales, therefore, the SERVQUAL scale stands to provide a more pragmatic diagnosis of the service quality provision than the SERVPERF scale.4 So long as perceived performance equals or exceeds customer expectations for a service attribute, the SERVQUAL scale does not point to managerial intervention despite the performance level in respect of that attribute falling short of the maximally attainable service quality score. Service area 22 is a case in point. As per the SERVPERF scale, this is also a fitting area for managerial intervention because the perceived performance level in respect of this attribute is far less than the maximally attainable value of 5. This, however, is not the case with the SERVQUAL scale. Since the customer perceptions of the restaurant's performance are above their expectation level, there seems to be no ostensible justification in further trying to improve the performance in this area. The customers are already getting more than their expectations; any attempt to further improve the performance in this area might drain the restaurant owner of the resources needed for improvement in other critical areas. Any such effort, moreover, is unlikely to add to the customers' delight as the customers themselves are not desirous of having more of this service attribute, as revealed by their mean expectation score which is much lower than the ideally and maximally attainable score of 5.

If importance scores are also taken into consideration, the weighted versions of both the scales provide much more useful insights than those provided by the unweighted counterparts. Be it the SERVQUAL or the SERVPERF scale, the inclusion of weights does represent an improvement over the unweighted measures. By incorporating the customer perceptions of the importance of different service attributes in the analysis, the weighted service quality scales are able to more precisely direct managerial attention to deficient areas which are more critical from the customers' viewpoint and as such need to be urgently attended to. It may, furthermore, be observed that between the weighted versions of the SERVPERF and the SERVQUAL scales, the weighted SERVQUAL scale is far superior in its diagnostic power. This scale takes into account not only the magnitude of customer defined service quality gaps but also the importance weights that customers assign to different service attributes, thus pointing to such service quality shortfalls as are crucial to a firm's success in the market and deserve immediate managerial intervention.

CONCLUSIONS, IMPLICATIONS, AND DIRECTIONS FOR FUTURE RESEARCH

A highly contentious issue examined in this paper relates to the operationalization of the service quality construct. A review of the extant literature points to SERVQUAL and SERVPERF as being the two most widely advocated and applied service quality scales. Notwithstanding a number of researches undertaken in the field, it is not yet clear as to which one of the two scales is a better measure of service quality. Since the focus of the past studies has been on an assessment of the psychometric and methodological soundness alone of the service quality scales — and that too in the context of the developed world — this study represents a pioneering effort towards evaluating the methodological soundness as well as the diagnostic power of the two scales in the context of a developing country — India.

A survey of the consumers of fast food restaurants in Delhi was carried out to gather the necessary information. The unweighted as well as the weighted versions of the SERVQUAL and the SERVPERF scales were comparatively assessed in terms of their convergent and discriminant validity, ability to explain variation in the overall service quality, ease in data collection, capacity to distinguish restaurants on the quality dimension, and diagnostic capability of providing directions for managerial interventions in the event of service quality shortfalls.

So far as the assessment of the various scales on the first three parameters is concerned, the unweighted performance-only measure (i.e., the SERVPERF scale) emerges as a better choice. It is found capable of providing a more convergent and discriminant valid explanation of the service quality construct. It also turns out to be the most parsimonious measure of service quality and is capable of explaining a greater proportion of the variance present in the overall service quality measured through a single-item scale.

The addition of importance weights, however, does not result in a higher validity and explanatory power of the unweighted SERVQUAL and SERVPERF scales. These findings are quite in conformity with those of earlier studies recommending the use of unweighted perception-only scores (e.g., Bolton and Drew, 1991b; Boulding et al., 1993; Churchill and Surprenant, 1982; Cronin, Brady and Hult, 2000; Cronin and Taylor, 1992).

When examined from the point of view of the power of the various scales to discriminate among the objects (i.e., restaurants in the present case), all the four scales stand at par in performing the job. But in terms of diagnostic ability, it is the SERVQUAL scale that emerges as a clear winner. The SERVPERF scale, notwithstanding its superiority in other respects, turns out to be a poor choice. For, being based on an implied comparison with the maximally attainable scores, it suggests intervention even in areas where the firm's performance level is already up to the customers' expectations. The incorporation of expectation scores provides richer information than that provided by the perception-only scores, thus adding to the diagnostic power of the service quality scale. Even the developers of the performance-only scale were cognizant of this fact and did not suggest that it is unnecessary to measure customer expectations in service quality research (Cronin and Taylor, 1992).

From a diagnostic perspective, therefore, the (P-E) scale constitutes a better choice. Since it entails a direct comparison of performance perceptions with customer expectations, it provides a more pragmatic diagnosis of service quality shortfalls. Especially in the event of time and resource constraints, the SERVQUAL scale is able to direct managerial attention to service areas which are critically deficient from the customers' viewpoint and require immediate attention. No doubt, the SERVQUAL scale entails greater data collection work, but this can be eased by employing direct rather than computed expectation disconfirmation measures. This can be done by asking customers to directly report the extent to which they feel a given firm has performed in comparison to their expectations in respect of each service attribute, rather than asking them to report their perception and expectation scores separately as is required under the SERVQUAL scale (for a further discussion on this aspect, see Dabholkar, Shepherd and Thorpe, 2000).

The addition of importance weights further adds to the diagnostic power of the SERVQUAL scale. Though the inclusion of weights improves the diagnostic ability of even the SERVPERF scale, the scale continues to suffer from its generic weakness of directing managerial attention to such service areas which are not at all deficient in the customers' perception.

In overall terms, we thus find that while the SERVPERF scale provides a more convergent and discriminant valid explanation of the service quality construct, possesses greater power to explain variations in the overall service quality scores, and is also a more parsimonious data collection instrument, it is the SERVQUAL scale which entails superior diagnostic power to pinpoint areas for managerial intervention. The obvious managerial implication emanating from the study findings is that when one is interested simply in assessing the overall service quality of a firm or in making quality comparisons across service industries, one can employ the SERVPERF scale because of its psychometric soundness and instrument parsimoniousness. However, when one is interested in identifying the areas of a firm's service quality shortfalls for managerial interventions, one should prefer the SERVQUAL scale because of its superior diagnostic power.

No doubt, the use of the weighted SERVQUAL scale is the most appropriate alternative from the point of view of the diagnostic ability of the various scales, yet a final decision in this respect needs to be weighed against the gigantic task of information collection.

nin and Taylor’s (1992) approach, one requires collecting with larger sample sizes need to be replicated in differ-
information on importance weights for all the 22 scale ent service industries in different countries — especially
items thus considerably increasing the length of the in the developing ones — to ascertain applicability and
survey instrument. However, alternative approaches do superiority of the alternate service quality scales.
exist that can be employed to overcome this problem. Dimensionality, though an important consideration
One possible alternative is to collect information from the point of view of both the validity and reliability
about the importance weights at the service dimension assessment, has not been investigated in this paper due
rather than the individual service level. This can be to space limitations. It is nonetheless an important issue
accomplished by first doing a pilot survey of the re- in itself and needs to be thoroughly examined before
spondents using 44 SERVQUAL scale items and then coming to a final judgment about the superiority of the
performing a factor analysis on the collected data for service quality scales. It is quite possible that the con-
identifying service dimensions. Once the service dimen- clusions of the present study might change if the dimen-
sions are identified, a final survey of all the sample sionality angle is incorporated into the analysis. Studies
respondents can be done for seeking information in in future may delve into this aspect.
respect of the 44 scale items as well as for the importance One final caveat relates to the limited power of both
weights for each of the service quality dimensions iden- the unweighted and the weighted versions of the SERV-
tified during the pilot survey stage. Addition of one QUAL and the SERVPERF scales to explain variations
more question seeking importance information will only present in the overall service quality scores assessed
slightly increase the questionnaire size. The importance through the use of a single-item scale. This casts doubts
information so gathered can then be used for prioritizing on the applicability of multi-item service quality scales
the quality deficient service areas for managerial inter- as propounded and tested in the developed countries
vention. Alternatively, one can employ the approach to the service industries in a developing country like India.
adopted by Parasuraman, Zeithaml and Berry (1988). Though regressing overall service quality scores on
Instead of directly collecting information from the re- service quality dimensions might somewhat improve the
spondents, they derived importance weights by regress- explanatory power of these scales, we do not expect any
ing overall quality perception scores on the SERVQUAL appreciable improvement in the results. The poor explan-
scores for each of the dimensions identified through the atory power of the scales in the present study might have
use of factor analysis on the data collected vide 44 scale arisen either due to methodological considerations such
items. Irrespective of the approach used, the data col- as the use of a smaller sample or a 5-point rather than
lection task will be much simpler than required as per a 7-point Likert scale employed by the developers of
the approach employed by Cronin and Taylor (1992) for service quality scales in their studies or else — as is more
gathering data in connection with the weighted SERV- likely to be the case — the problem has arisen due to
QUAL scale. the inappropriateness of items contained in the service
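As one illustration of the regression-based route to importance weights, dimension-level weights can be read off the coefficients of a regression of overall quality on dimension scores. The sketch below uses simulated data and an ordinary least squares fit via numpy; the five dimension names follow endnote 2, and everything else (sample size, coefficient values, the normalization of coefficients into weights) is an assumption rather than the procedure actually used by Parasuraman, Zeithaml and Berry (1988).

import numpy as np

rng = np.random.default_rng(5)
dimensions = ["tangibility", "reliability", "responsiveness", "assurance", "empathy"]
n = 400

# Hypothetical mean perception score per dimension and a single-item overall quality rating
X = rng.normal(3.5, 0.5, size=(n, len(dimensions)))
overall = X @ np.array([0.10, 0.35, 0.25, 0.20, 0.10]) + rng.normal(0, 0.3, n)

# Ordinary least squares: overall quality regressed on the dimension scores
X1 = np.column_stack([np.ones(n), X])
coefs, *_ = np.linalg.lstsq(X1, overall, rcond=None)

# Normalized regression coefficients serve as derived importance weights for the dimensions
weights = coefs[1:] / coefs[1:].sum()
for name, w in zip(dimensions, weights):
    print(f"{name}: {w:.2f}")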
Though the study brings to the fore interesting findings, it will not be out of place to mention here some of its limitations. A single service setting with a few restaurants under investigation and a small database of only 400 observations preclude much of the generalizability of the study findings. Studies of a similar kind with larger sample sizes need to be replicated in different service industries in different countries — especially in the developing ones — to ascertain the applicability and superiority of the alternate service quality scales.

Dimensionality, though an important consideration from the point of view of both validity and reliability assessment, has not been investigated in this paper due to space limitations. It is nonetheless an important issue in itself and needs to be thoroughly examined before coming to a final judgment about the superiority of the service quality scales. It is quite possible that the conclusions of the present study might change if the dimensionality angle is incorporated into the analysis. Studies in future may delve into this aspect.

One final caveat relates to the limited power of both the unweighted and the weighted versions of the SERVQUAL and the SERVPERF scales to explain the variations present in the overall service quality scores assessed through the use of a single-item scale. This casts doubts on the applicability of multi-item service quality scales, as propounded and tested in the developed countries, to the service industries in a developing country like India. Though regressing overall service quality scores on service quality dimensions might somewhat improve the explanatory power of these scales, we do not expect any appreciable improvement in the results. The poor explanatory power of the scales in the present study might have arisen either due to methodological considerations, such as the use of a smaller sample or a 5-point rather than a 7-point Likert scale employed by the developers of the service quality scales in their studies, or else — as is more likely to be the case — the problem has arisen due to the inappropriateness of items contained in the service quality scales under investigation in the context of the developing countries. Both these aspects need to be thoroughly examined in future researches so as to be able to arrive at a psychometrically as well as managerially more useful service quality scale for use in the service industries of the developing countries.

ENDNOTES

1. Customer satisfaction with services or the perception of service quality can be viewed as confirmation or disconfirmation of customer expectations of a service offer. The proponents of the gap model have based their researches on the disconfirmation paradigm which maintains that satisfaction is related to the size and direction of the disconfirmation experience, where disconfirmation is related to the person's initial expectations. For further discussion, see Churchill and Surprenant, 1982 and Parasuraman, Zeithaml and Berry, 1985.

2. A factor analysis of the 22 scale items led Parasuraman, Zeithaml and Berry (1988) to conclude that consumers use five dimensions for evaluating service quality. The five dimensions identified by them included tangibility, reliability, responsiveness, assurance, and empathy.

3. The scale items used in this connection were: "The probability that I will use their facilities again," "The likelihood that I would recommend the restaurants to a friend," and "If I had to eat in a fast food restaurant again, the chance that I would make the same choice."

4. Even though a high correlation (r=0.747) existed between the (P-M) and (P-E) gap scores, the former cannot be used as a substitute for the latter as, on a case by case basis, it can point to initiating actions even in such areas which do not need any managerial intervention based on (P-E) scores.

REFERENCES

Andaleeb, S S and Basu, A K (1994). "Technical Complexity and Consumer Knowledge as Moderators of Service Quality Evaluation in the Automobile Service Industry," Journal of Retailing, 70(4), 367-81.

Anderson, E W, Fornell, C and Lehmann, D R (1994). "Customer Satisfaction, Market Share and Profitability: Findings from Sweden," Journal of Marketing, 58(3), 53-66.

Anderson, C and Zeithaml, C P (1984). "Stage of the Product Life Cycle, Business Strategy, and Business Performance," Academy of Management Journal, 27(March), 5-24.

Babakus, E and Boller, G W (1992). "An Empirical Assessment of the Servqual Scale," Journal of Business Research, 24(3), 253-68.

Babakus, E and Mangold, W G (1989). "Adapting the Servqual Scale to Hospital Services: An Empirical Investigation," Health Service Research, 26(6), 767-80.

Babakus, E and Inhofe, M (1991). "The Role of Expectations and Attribute Importance in the Measurement of Service Quality," in Gilly, M C (ed.), Proceedings of the Summer Educators' Conference, Chicago, IL: American Marketing Association, 142-44.

Bolton, R N and Drew, J H (1991a). "A Multistage Model of Customer's Assessment of Service Quality and Value," Journal of Consumer Research, 17(March), 375-85.

Bolton, R N and Drew, J H (1991b). "A Longitudinal Analysis of the Impact of Service Changes on Customer Attitudes," Journal of Marketing, 55(January), 1-9.

Boulding, W, Kalra, A, Staelin, R and Zeithaml, V A (1993). "A Dynamic Process Model of Service Quality: From Expectations to Behavioral Intentions," Journal of Marketing Research, 30(February), 7-27.

Brady, M K and Robertson, C J (2001). "Searching for a Consensus on the Antecedent Role of Service Quality and Satisfaction: An Exploratory Cross-National Study," Journal of Business Research, 51(1), 53-60.

Brady, M K, Cronin, J and Brand, R R (2002). "Performance-Only Measurement of Service Quality: A Replication and Extension," Journal of Business Research, 55(1), 17-31.

Brown, T J, Churchill, G A and Peter, J P (1993). "Improving the Measurement of Service Quality," Journal of Retailing, 69(1), 127-39.

Brown, S W and Swartz, T A (1989). "A Gap Analysis of Professional Service Quality," Journal of Marketing, 53(April), 92-98.

Buzzell, R D and Gale, B T (1987). The PIMS Principles, New York: The Free Press.

Carman, J M (1990). "Consumer Perceptions of Service Quality: An Assessment of the SERVQUAL Dimensions," Journal of Retailing, 66(1), 33-35.

Churchill, G A (1979). "A Paradigm for Developing Better Measures of Marketing Constructs," Journal of Marketing Research, 16(February), 64-73.

Churchill, G A and Surprenant, C (1982). "An Investigation into the Determinants of Customer Satisfaction," Journal of Marketing Research, 19(November), 491-504.

Cronin, J and Taylor, S A (1992). "Measuring Service Quality: A Reexamination and Extension," Journal of Marketing, 56(July), 55-67.

Cronin, J and Taylor, S A (1994). "SERVPERF versus SERVQUAL: Reconciling Performance-based and Perceptions-Minus-Expectations Measurement of Service Quality," Journal of Marketing, 58(January), 125-31.

Cronin, J, Brady, M K and Hult, T M (2000). "Assessing the Effects of Quality, Value and Customer Satisfaction on Consumer Behavioral Intentions in Service Environments," Journal of Retailing, 76(2), 193-218.

Crosby, P B (1984). Paper presented to the "Bureau de Commerce," Montreal, Canada (Unpublished), November.

Dabholkar, P A, Shepherd, D C and Thorpe, D I (2000). "A Comprehensive Framework for Service Quality: An Investigation of Critical, Conceptual and Measurement Issues through a Longitudinal Study," Journal of Retailing, 76(2), 139-73.

Eiglier, P and Langeard, E (1987). Servuction, Le Marketing des Services, Paris: McGraw-Hill.

Finn, D W and Lamb, C W (1991). "An Evaluation of the SERVQUAL Scale in a Retailing Setting," in Holman, R and Solomon, M R (eds.), Advances in Consumer Research, Provo, UT: Association for Consumer Research, 480-93.

Garvin, D A (1983). "Quality on the Line," Harvard Business Review, 61(September-October), 65-73.

Gotlieb, J B, Grewal, D and Brown, S W (1994). "Consumer Satisfaction and Perceived Quality: Complementary or Divergent Constructs," Journal of Applied Psychology, 79(6), 875-85.

Gronroos, C (1982). Strategic Management and Marketing in the Service Sector, Finland: Swedish School of Economics and Business Administration.

Gronroos, C (1990). Service Management and Marketing: Managing the Moments of Truth in Service Competition, Mass: Lexington Books.

Hartline, M D and Ferrell, O C (1996). "The Management of Customer Contact Service Employees: An Empirical Investigation," Journal of Marketing, 69(October), 52-70.

Iacobucci, D, Grayson, K A and Ostrom, A L (1994). "The Calculus of Service Quality and Customer Satisfaction: Theoretical and Empirical Differentiation and Integration," in Swartz, T A, Bowen, D H and Brown, S W (eds.), Advances in Services Marketing and Management, Greenwich, CT: JAI Press, 1-67.

Juran, J M (1988). Juran on Planning for Quality, New York: The Free Press.

Kassim, N M and Bojei, J (2002). "Service Quality: Gaps in the Telemarketing Industry," Journal of Business Research, 55(11), 845-52.

Kotler, P (2003). Marketing Management, New Delhi: Prentice Hall of India.

Lewis, R C (1987). "The Measurement of Gaps in the Quality of Hotel Service," International Journal of Hospitality Management, 6(2), 83-88.

Lewis, B (1991). "Service Quality: An International Comparison of Bank Customer's Expectations and Perceptions," Journal of Marketing Management, 7(1), 47-62.

Mazis, M B, Antola, O T and Klippel, R E (1975). "A Comparison of Four Multi-Attribute Models in the Prediction of Consumer Attitudes," Journal of Consumer Research, 2(June), 38-52.

Miller, J A (1977). "Exploring Satisfaction, Modifying Modes, Eliciting Expectations, Posing Problems, and Making Meaningful Measurements," in Hunt, K (ed.), Conceptualization and Measurement of Consumer Satisfaction and Dissatisfaction, Cambridge, MA: Marketing Science Institute, 72-91.

Normann, R (1984). Service Management, New York: Wiley.

Parasuraman, A, Berry, L L and Zeithaml, V A (1990). "Guidelines for Conducting Service Quality Research," Marketing Research, 2(4), 34-44.

Parasuraman, A, Berry, L L and Zeithaml, V A (1991). "Refinement and Reassessment of the SERVQUAL Scale," Journal of Retailing, 67(4), 420-50.

Parasuraman, A, Zeithaml, V A and Berry, L L (1985). "A Conceptual Model of Service Quality and Its Implications for Future Research," Journal of Marketing, 49(Fall), 41-50.

Parasuraman, A, Zeithaml, V A and Berry, L L (1988). "SERVQUAL: A Multiple Item Scale for Measuring Consumer Perceptions of Service Quality," Journal of Retailing, 64(1), 12-40.

Parasuraman, A, Zeithaml, V A and Berry, L L (1994). "Reassessment of Expectations as a Comparison Standard in Measuring Service Quality: Implications for Further Research," Journal of Marketing, 58(January), 111-24.

Peter, J P, Churchill, G A and Brown, T J (1993). "Caution in the Use of Difference Scores in Consumer Research," Journal of Consumer Research, 19(March), 655-62.

Phillips, L W, Chang, D R and Buzzell, R D (1983). "Product Quality, Cost Position and Business Performance: A Test of Some Key Hypotheses," Journal of Marketing, 47(Spring), 26-43.

Pitt, L F, Oosthuizen, P and Morris, M H (1992). Service Quality in a High Tech Industrial Market: An Application of SERVQUAL, Chicago: American Management Association.

Rust, R T and Oliver, R L (1994). Service Quality — New Directions in Theory and Practice, New York: Sage Publications.

Shaw, J (1978). The Quality-Productivity Connection, New York: Van Nostrand.

Smith, R A and Houston, M J (1982). "Script-based Evaluations of Satisfaction with Services," in Berry, L, Shostack, G and Upah, G (eds.), Emerging Perspectives on Services Marketing, Chicago: American Marketing Association, 59-62.

Spreng, R A and Singh, A K (1993). "An Empirical Assessment of the SERVQUAL Scale and the Relationship between Service Quality and Satisfaction," in Peter, D W, Cravens, R and Dickson (eds.), Enhancing Knowledge Development in Marketing, Chicago, IL: American Marketing Association, 1-6.

Teas, K R (1993). "Expectations, Performance Evaluation, and Consumer's Perceptions of Quality," Journal of Marketing, 57(October), 18-34.

Teas, K R (1994). "Expectations as a Comparison Standard in Measuring Service Quality: An Assessment of Reassessment," Journal of Marketing, 58(January), 132-39.

Witkowski, T H and Wolfinbarger, M F (2002). "Comparative Service Quality: German and American Ratings across Service Settings," Journal of Business Research, 55(11), 875-81.

Woodruff, R B, Cadotte, E R and Jenkins, R L (1983). "Modelling Consumer Satisfaction Processes Using Experience-based Norms," Journal of Marketing Research, 20(August), 296-304.

Young, C, Cunningham, L and Lee, M (1994). "Assessing Service Quality as an Effective Management Tool: The Case of the Airline Industry," Journal of Marketing Theory and Practice, 2(Spring), 76-96.

Zeithaml, V A, Parasuraman, A and Berry, L L (1990). Delivering Service Quality: Balancing Customer Perceptions and Expectations, New York: The Free Press.

Zeithaml, V A and Parasuraman, A (1991). "The Nature and Determinants of Customer Expectation of Service," Marketing Science Institute Working Paper No. 91-113, Cambridge, MA: Marketing Science Institute.

Zeithaml, V A and Parasuraman, A (1996). "The Behavioral Consequences of Service Quality," Journal of Marketing, 60(April), 31-46.

Zeithaml, V A and Bitner, M J (2001). Services Marketing: Integrating Customer Focus Across the Firms, 2nd Edition, Boston: Tata-McGraw Hill.

Sanjay K Jain is Professor of Marketing and International Business in the Department of Commerce, Delhi School of Economics, University of Delhi, Delhi. His areas of teaching and research include marketing, services marketing, international marketing, and marketing research. He is the author of the book titled Export Marketing Strategies and Performance: A Study of Indian Textiles published in two volumes. He has published more than 70 research papers in reputed journals including Journal of Global Marketing, Malaysian Journal of Small and Medium Enterprises, Vikalpa, Business Analyst, etc. and has also presented papers at various national and international conferences.
e-mail: skjaindse@vsnl.net

Garima Gupta is a Lecturer of Commerce in Kamla Nehru College, University of Delhi, Delhi. She is currently pursuing her doctoral study in the Department of Commerce, Delhi School of Economics, University of Delhi, Delhi.
e-mail: garimagupta77@yahoo.co.in

