
Rejuvenating importance-performance analysis

Alessandro Arbore and Bruno Busacca
Bocconi University, Milan, Italy
Received September 2009; revised January 2010, March 2010, July 2010; accepted August 2010

Abstract
Purpose – Importance-performance analysis (IPA) is a simple marketing tool commonly used to identify the main strengths and weaknesses of a value proposition. The purpose of this paper is to propose a revision of traditional IPA prompted by intuitions arising from the three-factor theory of customer satisfaction. The ultimate goal is to propose a decision support method, which is as simple and intuitive as the original IPA, but more precise and reliable than the solutions proposed thus far.
Design/methodology/approach – In order to estimate indirect measures of attribute importance, the study uses the coefficients of a multiple regression with overall satisfaction ratings as the dependent variable. Additional calculations are then introduced in order to manage non-linear effects.
Findings – Using empirical data from a survey among 5,209 customers of a European bank, the authors show how the proposed method can be more accurate than other solutions, especially as disregarding non-linear effects can prompt sub-optimal marketing decisions.
Research limitations/implications – While the procedure in this study is applicable to any
service business, the paper does not claim external validity for the numerical results of the empirical
application: the authors acknowledge that only one dataset has been used. The authors’ goal is merely
to demonstrate a revised approach to IPA.
Originality/value – First, the authors assert the need for an explicit distinction between the use of
IPA for customer acquisition vs customer retention purposes. These two cases refer to distinct
moments in the customer relationship life cycle and thus require separate analyses. The authors then
propose a specific method for customer retention IPA. On this basis, they generate two priority charts:
one for the purpose of maximizing customer satisfaction and one for the purpose of minimizing
customer dissatisfaction.
Keywords Customer retention, Customer satisfaction, Importance-performance analysis,
Three-factor theory
Paper type Research paper

1. Introduction
Importance-performance analysis (IPA) has been a popular multi-attribute technique
for evaluating marketing actions, as it yields insights into which elements of a value
proposition the management should focus on.
IPA decomposes a value proposition by classifying its most important attributes in
two dimensions, that is, the importance of each attribute and judgments of its
performance (Martilla and James, 1977). Based on average rankings from a sample
of customers, these two elements are combined in order to generate managerial
recommendations (Figure 1).
[Figure 1. The traditional IPA matrix: attributes are plotted by importance (slightly to extremely important) against performance (fair to excellent), yielding four quadrants – “Concentrate here”, “Keep up the good work”, “Low priority”, and “Possible overkill”. Source: adapted from Martilla and James (1977)]

Identifying the key value drivers underlying a value proposition represents a crucial issue, which has attracted attention in many disciplines (Conklin et al., 2004). In pointing out value drivers for the customer, this type of analysis typically suggests the attributes which the firm should strengthen as well as those which are over-performing given the current customer evaluations. In this way, IPA may help the firm’s management to
identify critical processes to reengineer (Mazur, 1998). Symmetrically, the analysis can
provide insights on the operational areas where a partial disinvestment of resources
might be considered.
Despite its wide diffusion, the original model exhibits remarkable limitations.
Not surprisingly, the literature has suggested different revisions (e.g. Oliver, 1997;
Brandt, 1998; Matzler et al., 2004; Deng et al., 2008). In many cases, however, the
revised versions also lack the accuracy required for effective decision-making support.
In the sections that follow, we discuss two major issues in this context:
(1) Current revisions of the model make no explicit distinction between two possible
goals of IPA, that is, customer acquisition vs customer retention.
(2) The models sometimes assume symmetric effects between attribute performance
and overall customer metrics such as satisfaction.

First, we argue that it is necessary to distinguish more explicitly between two logical
domains of IPA. This distinction, which has been disregarded in the literature to date,
refers to the use of IPA for customer acquisition vs customer retention purposes. These
two cases clearly focus on distinct moments in the customer relationship life cycle
and thus require distinctive designs. It is important not to confuse the purchasing
decision and the post-purchase evaluation stages for a number of intuitive reasons
which will be discussed in the sections below. Accordingly, importance-performance
analysis must be designed and interpreted in two different ways based on its main
goal and relational context. We refer to these two forms of analysis as “customer
acquisition importance-performance analysis” (CA-IPA) and “customer retention
importance-performance analysis” (CR-IPA).
In this paper, we propose a method for CR-IPA with a view to achieving two goals:
(1) To ensure consistency with the intuitions of the three-factor theory, which
points to a non-linear and asymmetric relationship between attribute-level
performance and customer satisfaction with that attribute (Kano, 1984). This, in
turn, implies a non-linear and asymmetric relationship between attribute-level
performance and overall customer satisfaction with the product (Brandt, 1988;
Gale, 1994; Johnston, 1995; Oliver, 1997; Mittal et al., 1998; Anderson and Mittal,
2000; Matzler and Sauerwein, 2002; Matzler et al., 2004).
(2) To ensure that the resulting decision support method remains as simple,
intuitive, and actionable as the original IPA.

Current approaches to the asymmetric effects pointed out by Kano simply classify
attributes into discrete categories (see next section). However, the proposed method is 411
not intended for classification purposes. This study suggests a different approach:
It estimates, represents, and compares the impacts of an attribute on different levels
of performance and satisfaction without forcing any form of categorization. We
demonstrate our approach using a practical example based on a large dataset from
a European company.
This paper is structured as follows: In the next section, we review the literature
pertaining to IPA and its main limitations. We then propose a revised method for
customer retention IPA. In order to illustrate its application, we present the results of
an empirical study in the ensuing section. Finally, we discuss the main conclusions and
limitations of the study.

2. Background literature
IPA is an analytical technique that firms use both to evaluate their competitive position
and to set priorities in order to enhance customer satisfaction (Martilla and James,
1977; Hawes and Rao, 1985; Dolinsky and Caputo, 1991; Slack, 1994; Beach and Burns,
1995; Myers, 2001; Matzler et al., 2003, 2004; Deng et al., 2008). The original tool
analyzes a value proposition by classifying its most important attributes in two
dimensions: the importance of each attribute and judgments of its performance. Using
this information from a sample of customers, the analysis plots each attribute on a grid
(see Figure 1 in the previous section). This procedure is based on the assumption that
the two dimensions are orthogonal.
One attractive feature of the tool is its simplicity and the ease with which the results
can be interpreted, meaning that the resulting managerial implications are highly
intuitive. Hansen and Bush (1999), among others, describe IPA as a simple but effective
instrument.
Nonetheless, the literature reports many limitations. Oliver (1997), for example,
points out some confusion as to the most suitable means of calculating and distributing
performance and importance scores along the axes of the matrix. In the same vein,
Brandt (1998) confirms that utilizing different methods to distribute performance
scores generates inconsistent implications. Bacon (2003) joins the debate by providing
a complete review of variations and extensions of the original technique.
One of the most relevant shortcomings of the original model is the implicit
assumption that the relationship between attribute performance and overall customer
satisfaction is linear and symmetric (Matzler et al., 2004). However, both evidence and
theory suggest that this is not necessarily the case. In particular, the theoretical frame
of reference for this criticism is the three-factor theory of customer satisfaction.
Preliminary studies from the 1970s and 1980s were successful in testing a two-factor
theory of customer satisfaction (Swan and Combs, 1976; Maddox, 1981; Cadotte and
Turgeon, 1988). Those authors demonstrate that consumers judge products on the basis
of a limited set of attributes, some of which are relatively important in determining
consumer satisfaction, while others are not critical in this regard. However,
unsatisfactory performance in the latter attributes may still lead to dissatisfaction.
Kano (1984) was the first to formalize these intuitions, and his model was later
refined in other studies (Brandt, 1988; Gale, 1994; Johnston, 1995; Oliver, 1997;
Anderson and Mittal, 2000; Matzler and Sauerwein, 2002). Those authors distinguish
among three categories of attributes: basic factors, excitement factors, and
performance factors.
Basic factors (or Kano’s “dissatisfiers”) are minimum requirements which do not
have a positive impact on satisfaction if expectations are exceeded, but which generate
dissatisfaction if they are not fulfilled. Excitement factors (or Kano’s “satisfiers”) are
delighting attributes that have no impact on dissatisfaction, but can enhance
satisfaction if delivered. Finally, performance factors (or Kano’s “one-dimensional
factors”) have a symmetric impact on both satisfaction and dissatisfaction in
proportion to their level of fulfillment.
It has been argued that a revised version of IPA is needed (Matzler et al., 2004; Deng
et al., 2008). Since attribute satisfaction can cause a change in attribute importance, it is
appropriate to estimate the relative impact of each attribute at both high and low levels
of performance.
Since Kano’s work, different methods have been proposed to identify the three
factors. The most common methods are the Kano questionnaire, the critical incident
technique, the analysis of complaints and compliments, the importance grid, and
penalty-reward contrast analysis. Table I shows a synthesis of these contributions and
their possible limitations (adapted from Busacca and Padula, 2005). In a certain sense,
the logic of penalty-reward contrast analysis (Brandt, 1987, 1988) is most comparable
to our proposal, since it uses dummy variables to identify the three factors. In Brandt’s
method, an attribute satisfaction rating is recoded into two dummies which indicate
whether that rating is positive (1, 0), negative (0, 1), or neutral (0, 0), i.e. the reference
category. In order to identify the three factors, multiple regression analysis is
conducted using overall customer satisfaction as the dependent variable and all
dummy variables as independent variables. Not surprisingly, this oversimplification
typically leads to unsatisfactory levels of explained variance (e.g. R² and other fit
measures). More importantly, Brandt’s method does not account for the potentially
different contribution of an attribute to explaining high vs low levels of overall
satisfaction. In fact, this method actually implies that the estimated impact of a dummy
would be its impact on the average level of customer satisfaction.
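For concreteness, the following is a minimal sketch of this kind of penalty-reward coding in Python; the data file, the attribute column names, and the cut-off ratings are illustrative assumptions rather than values prescribed by Brandt.

```python
# Minimal sketch of penalty-reward contrast coding (in the spirit of Brandt, 1987, 1988).
# The file name, column names and cut-offs are illustrative assumptions.
import pandas as pd
import statsmodels.api as sm

def penalty_reward_design(df: pd.DataFrame, attributes, low: int = 5, high: int = 7) -> pd.DataFrame:
    X = pd.DataFrame(index=df.index)
    for attr in attributes:
        # Negative rating -> (1, 0); positive rating -> (0, 1); intermediate -> (0, 0) reference.
        X[f"{attr}_penalty"] = (df[attr] < low).astype(int)
        X[f"{attr}_reward"] = (df[attr] > high).astype(int)
    return sm.add_constant(X)

df = pd.read_csv("survey.csv")                      # hypothetical satisfaction survey
attrs = ["ease_of_contact", "content_quality"]      # hypothetical attribute columns
fit = sm.OLS(df["overall"], penalty_reward_design(df, attrs)).fit()
# Large "reward" coefficients point to excitement factors, large negative "penalty"
# coefficients to basic factors, and comparable magnitudes to performance factors.
print(fit.params)
```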
In a more general sense, these procedures are helpful in classifying an attribute as
a “basic”, “exciting”, or “performance” factor, but they do not provide precise
information about the marginal impacts of each attribute on either satisfaction or
dissatisfaction. This is also the case in a method recently designed by Conklin et al.
(2004). The authors use the Shapley value – a tool from cooperative game theory – to
identify key drivers of customer satisfaction. However, in this case as well, the output
is a subset of top dissatisfiers and enhancers taken together as a group. It does not
provide a numerical estimate to compare the marginal impacts of any attribute on both
satisfaction and dissatisfaction. An attribute might exhibit a basic-factor tendency
without being a “top” dissatisfier, while others might tend to act as excitement factors
without being “top” enhancers. By identifying “classes” of attributes, previous
methods have not been able to capture such a continuum.
Table I. Different methods to identify the three factors of customer satisfaction

Kano’s questionnaire – Kano (1984); Berger et al. (1993); Matzler et al. (1996); Yang (2005). Results: three-factor theory supported. Comments: used especially often in QFD literature; reliability and validity not thoroughly tested; weak in practice, since the allocation of attributes to categories is based on a frequency distribution of the responses; time-consuming and costly method.

Critical incident technique – Swan and Combs (1976); Maddox (1981); Silvestro and Johnston (1990); Bitner (1990); Stauss and Hentschel (1992); Johnston (1995); Backhaus and Bauer (2008). Results: two-factor theory supported; three-factor theory supported. Comments: widely used in service quality research; external validity questionable since it is based on anecdotes; complex method, with difficulties in processing and analyzing anecdotal materials.

Analysis of complaints and compliments – Cadotte and Turgeon (1988). Results: three-factor theory supported. Comments: validity and reliability questionable (see CIT); bias entailed in the content.

Importance grid – Vavra (1997); Matzler and Sauerwein (2002); Busacca and Padula (2005). Results: three-factor theory supported. Comments: user-friendly approach; validity questionable; no theory behind the method; use of explicitly derived importance is a source of misinterpretation.

Penalty-reward contrast analysis – Brandt (1987, 1988); Mittal et al. (1998); Anderson and Mittal (2000); Matzler and Sauerwein (2002); Matzler et al. (2004); Matzler et al. (2006). Results: three-factor theory supported. Comments: uses data from typical satisfaction surveys, therefore no additional data collection is required; user-friendly approach; typically explains a small amount of overall satisfaction.

The Shapley value – Conklin et al. (2004). Results: three-factor theory supported. Comments: computational difficulties (sampling of attribute combinations is necessary); identifies only the most relevant combination of dissatisfiers (or key enhancers).
Kano’s categories are extremely useful from a conceptual point of view, but most real cases are somewhere in between the extremes. For the purpose of deriving managerial implications, describing such a continuum would be more powerful than forcing categorization.
None of the existing techniques are able to represent and compare the marginal impacts of each attribute on different levels of performance and satisfaction. This study attempts to accomplish such a goal without sacrificing the pragmatic advantages of semi-linear representation or the intuitiveness of the original IPA.
As mentioned in the introduction, another shortcoming of current IPA approaches is
the undifferentiated use of existing methods for customer acquisition and retention
purposes. We argue that a firm should implement different tools for the purpose of
customer acquisition and customer retention analyses, since these two cases focus on
distinct moments in the customer relationship life cycle. Martilla and James (1977, p. 79)
note that “[d]etermining what attributes to measure is critical, for if evaluative factors
important to the customer are overlooked, the usefulness of importance-performance
analysis will be severely limited”. These “critical attributes” may differ in a purchasing
decision as compared to a post-purchase evaluation. For instance, according to a
typical IPA on prospective customers, a given attribute might appear to be a trivial
one, thus suggesting the possibility of disinvestment. Eventually, however, that
attribute – which had no effect on the purchasing decision – might prove to have a
relevant impact on post-purchase satisfaction. Even worse, it might even turn out to be
a “basic” factor, that is, one with no effect on satisfaction, but with a tremendous
impact on dissatisfaction if it is omitted. Customers may not pay attention to that
attribute during the purchasing process simply because they take it for granted.
In such cases, traditional IPA would clearly be misleading as a decision-making tool.
We can also imagine the opposite situation: After purchasing a new car, for
example, a customer might discover a number of new and exciting features such as
fancy and unexpected gadgets which enhance post-purchase satisfaction. Those
attributes, on the other hand, might go unnoticed (and thus be considered unimportant)
during the purchasing decision process. Both situations confirm our point: There is a
clear distinction between purchasing drivers and satisfaction drivers, and even when
those drivers are the same, their importance may vary greatly over time. For example,
Mittal and Katrichis (2000) found that “satisfaction with service at the dealership” was
more important than “satisfaction with the vehicle” for new customers, while the
opposite was true after two years of vehicle ownership.
Furthermore, the conceptual difference between CA-IPA and CR-IPA does not refer to
the importance dimension alone, but also refers to the performance dimension. In fact,
during a prospective customer’s purchasing decision process, performances are expected
performances and are compared to the performances expected of other competitors.
Conversely, the performances in post-purchase evaluation are perceived performances
which are implicitly compared to pre-purchase expectations in order to determine
customer satisfaction (Oliver, 1980). The trade-off in this context is well-known in
marketing literature: raising expectations regarding the performance of an attribute will
lead to better results in a customer acquisition IPA but might compromise the position of
the same attribute in a customer retention IPA if those expectations are not fulfilled.
Having clarified the distinction between these two logical domains, the rest of this
paper will focus on CR-IPA. For the importance dimension, many studies recommend
using indirect measures (Dolinsky, 1991; Wittink and Bayer, 1994; Lowenstein, 1995;
Anton, 1996; Taylor, 1997). Typical indirect measures include the coefficients of a
multiple regression analysis with an overall performance rating as the dependent
variable (e.g. overall customer satisfaction) and specific performance ratings for each
attribute as the independent variables. One important problem in this context is that
implicit importance represents average importance and neglects non-linear
relationships, which stands in stark contrast to the three-factor theory discussed
above. Refining these estimates thus becomes a crucial objective.

3. Method
Our method is based on a multiple regression analysis of various attributes against a
dependent variable, in this case customer satisfaction[1]. In order to improve upon
penalty-reward contrast analysis, which estimates the impact of an attribute on the
average level of customer satisfaction, we assess the importance of critical attributes
by distinguishing among the following cases ex-post:
. the impact of negative attribute performance (i.e. negative ratings) on dissatisfied customers; and
. the impact of excellent attribute performance (i.e. top rating scores) on delighted customers.

In order to estimate these effects, two dummy variables and two interaction terms for
each relevant attribute are introduced into a classic multivariate regression model. The
suggested dependent variable is overall satisfaction.
The first dummy is set to “1” when the following conditions are simultaneously met:
The customer is dissatisfied with the company (e.g. overall customer satisfaction is
below 5 on a Likert-like scale from 1 to 10) and s/he gave negative feedback on the
attribute in question. Symmetrically, the second dummy associated with each attribute
is set to “1” when the customer is delighted with the company (e.g. overall customer
satisfaction is above 8 on a scale of 1 to 10) and s/he provided positive feedback on the
attribute in question.
Finally, two further terms must be added for each attribute. These are interaction
terms which multiply the actual performance of the attribute by the dummy variables
identified above. Through this calculation, the interactive terms provide two
independent adjustments to the traditional coefficient of the attribute. Specifically,
their parameters will provide the following:
. A corrective adjustment to estimate the extent to which top performance in this attribute contributes to high levels of customer satisfaction.
. A corrective adjustment to estimate the extent to which poor performance in this attribute contributes to low levels of customer satisfaction.

These interaction terms are disregarded in Brandt’s (1987, 1988) approach, which at
least implicitly assumes that there is no difference between the impact of an attribute
in explaining low vs high levels of overall satisfaction. As our results clearly
demonstrate, such an assumption may generate imprecise results.
Compared to the method developed by Conklin et al. (2004), which can be used to calculate the order of importance among the most relevant combination of dissatisfiers (/enhancers), our technique makes it possible to estimate and represent the basic (/exciting) impact of each attribute along a continuum.
The regression analysis can be represented by the following formula:

Ŷi = β0 + Σj (β1j Xji + β2j D1ji + β3j Xji D1ji + β4j D2ji + β5j Xji D2ji)

where:
Yi = overall satisfaction expressed by the ith customer.
Xji = performance evaluation of the jth attribute as indicated by the ith customer.
D1ji = 1 if the ith customer indicated a negative evaluation of the jth attribute (Xji < 5 in our example) and overall dissatisfaction with the bank (Yi < 5 in our example); otherwise 0.
D2ji = 1 if the ith customer indicated a positive evaluation of the jth attribute (Xji > 7 in our example) and overall satisfaction with the bank (Yi > 7 in our example); otherwise 0.
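A minimal sketch of this specification in Python is given below. It assumes a pandas DataFrame read from a hypothetical survey.csv file containing a 1-10 “overall” satisfaction column and one 1-10 rating column per attribute; the attribute names are illustrative and the cut-offs (5 and 7) follow the example above.

```python
# Minimal sketch of the CR-IPA regression: for each attribute j we add Xji, D1ji,
# Xji*D1ji, D2ji and Xji*D2ji to an OLS model of overall satisfaction.
import pandas as pd
import statsmodels.api as sm

ATTRIBUTES = ["ease_of_contact", "content_quality", "innovativeness"]   # hypothetical column names
LOW_CUT, HIGH_CUT = 5, 7       # ratings below 5 are "negative", above 7 are "positive"

def build_design_matrix(df: pd.DataFrame, attributes=ATTRIBUTES,
                        low_cut: int = LOW_CUT, high_cut: int = HIGH_CUT) -> pd.DataFrame:
    X = pd.DataFrame(index=df.index)
    for attr in attributes:
        x = df[attr]
        d1 = ((df["overall"] < low_cut) & (x < low_cut)).astype(int)    # dissatisfied + negative rating
        d2 = ((df["overall"] > high_cut) & (x > high_cut)).astype(int)  # delighted + positive rating
        X[attr] = x                 # b1j: reference-case slope
        X[f"{attr}_D1"] = d1        # b2j: intercept correction, low side
        X[f"{attr}_xD1"] = x * d1   # b3j: slope correction, low side
        X[f"{attr}_D2"] = d2        # b4j: intercept correction, high side
        X[f"{attr}_xD2"] = x * d2   # b5j: slope correction, high side
    return sm.add_constant(X)

df = pd.read_csv("survey.csv")   # hypothetical file with "overall" plus one 1-10 column per attribute
model = sm.OLS(df["overall"], build_design_matrix(df)).fit()
print(model.summary())
```

The two dummy columns reproduce D1ji and D2ji, while the interaction columns carry the two corrective slope adjustments discussed above.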
In Section 3.1 below, we provide a number of general suggestions and caveats. In Section 4, we then test the method using data from the banking industry in order to illustrate its practical application.

3.1 Suggestions and caveats


Before testing our method, we provide a number of practical suggestions below:
(1) It is important to ensure an adequate number of scale points, specifically nine or
ten-point Likert-type scales, as each evaluation scale is split into three parts
(i.e. positive ratings, negative ratings, and intermediate cases) during the
analysis. Therefore, in order to ensure statistical variance within these subsets,
at least nine response alternatives (i.e. scale points) are recommended. On the
other hand, no clear-cut evidence exists on how the number of scale points affects
scale reliability (for a review of relevant studies, see Chang, 1994).
(2) It is also necessary to ensure adequate sample size. We suggest applying this
method to large samples, since we need to estimate the impact of negative
attribute performance on dissatisfied customers as well as the impact of
excellent attribute performance on delighted customers. A greater number of
sub-cases will thus improve statistical power. The analysis is based on a
multivariate regression with ordinary least squares and, as a result, traditional
sampling considerations apply in this context: larger sample sizes will improve
statistical power (i.e. the significance test will depend on both the size of the
sample and the size of the effect) and reduce selection bias.
(3) It is advisable to check for multicollinearity problems before starting any
analysis. One measure commonly used for this purpose is the variance inflation
factor (VIF). The rule of thumb is that the VIF will be greater than 4.0 when
multicollinearity is a problem (e.g. Studenmund, 2005) and the estimated
importance measures might be unreliable. In such a case, it is possible to use
procedures such as normalized pairwise estimation or principal components
regression (for a complete review, see Gustafsson and Johnson, 2004).
(4) It is necessary to run the analysis on customers with homogeneous preferences instead of averaging out customers with different preferences (Allenby et al., 1998). Analytical rigor would thus favor either preliminary common-sense segmentation (ex-ante segmentation) or data-driven segmentation (ex-post cluster analysis or latent class analysis; e.g. Kamakura and Russell, 1989; Mizuno et al., 2008).
(5) Finally, it is important to conduct sensitivity analyses regarding the separation levels of the measurement scales. “Good” and “bad” performances as well as “high” and “low” levels of satisfaction must be defined according to the specific goals and standards in each situation. What are the corresponding cut-off ratings on each scale? This is a management choice based on managerial intuition, contingent evaluations, and strategic objectives. For this reason, however, we strongly recommend verifying how sensitive the results are to these choices. To this end, it is possible to run different analyses using different cut-off values, for example the most and least stringent requirements to define “high satisfaction” and “low satisfaction” as well as “positive” and “negative” attribute performance. As shown by the formula in the previous section, this can be done quickly and easily by recoding the dummies (D1ji and D2ji), as in the sketch following this list.
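A minimal sketch of such a sensitivity check is given below; it assumes the DataFrame and the build_design_matrix() helper from the sketch in Section 3, and the grids of candidate cut-off values are purely illustrative.

```python
# Re-run the regression under different cut-off choices and compare the slope
# corrections; assumes `df` and build_design_matrix() from the Section 3 sketch.
import itertools
import statsmodels.api as sm

estimates = {}
for low_cut, high_cut in itertools.product([4, 5, 6], [7, 8]):       # illustrative grids
    X = build_design_matrix(df, low_cut=low_cut, high_cut=high_cut)  # recodes D1ji and D2ji
    estimates[(low_cut, high_cut)] = sm.OLS(df["overall"], X).fit().params

for cutoffs, params in estimates.items():
    # Print only the slope-correction terms to see how stable they are across choices.
    print(cutoffs, params.filter(like="_x").round(2).to_dict())
```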

4. Empirical application
4.1 Data collection
In order to illustrate and test the proposed method, we used customer satisfaction data
from a prominent Italian bank. Data was collected through a survey of 5,209 customers
using a computer-assisted telephone interview method. The sampling frame includes
the bank’s customers with either a checking or a savings account at the time of the
survey (March to April 2006). Likert-like scales from 1 (extremely low) to 10 (extremely
high) are used to measure both overall satisfaction and attribute performance.
We first identified a complete set of attributes on the basis of the extant literature
and suggestions from the company’s management. We then generated a subset of
relevant satisfaction drivers through both qualitative and quantitative exploratory
analysis (e.g. Cooley and Lohnes, 1971; Darlington, 1990). Given the very large dataset,
we used a cross-validation approach and ran a stepwise regression procedure (e.g.
Darlington, 1990; Lindeman et al., 1980; Neter et al., 1985; Stevens, 1986). In the next
step, we defined a subset of relevant satisfaction drivers. Finally, we employed VIFs to
verify that the multicollinearity was not a severe problem among those attributes.
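The screening step might be sketched as follows; the forward selection from scikit-learn and the seven-driver target are illustrative stand-ins for the stepwise procedure described above, not the exact implementation used in the study, and the data file and column names are hypothetical.

```python
# Illustrative attribute screening: cross-validated forward selection, then a VIF check.
import pandas as pd
import statsmodels.api as sm
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LinearRegression
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("survey.csv")                                   # hypothetical survey data
candidates = [c for c in df.columns if c != "overall"]

selector = SequentialFeatureSelector(LinearRegression(), n_features_to_select=7,
                                     direction="forward", cv=5)  # 7 drivers is illustrative
selector.fit(df[candidates], df["overall"])
drivers = [c for c, keep in zip(candidates, selector.get_support()) if keep]

X = sm.add_constant(df[drivers])
for i, col in enumerate(X.columns):
    if col != "const":   # rule of thumb: VIF above roughly 4 signals multicollinearity
        print(col, round(variance_inflation_factor(X.values, i), 2))
```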

4.2 Analysis
In order to provide a meaningful comparison of methods, we analyzed the data using
the proposed method as well as traditional approaches. We first used the traditional
regression analysis approach, which estimates the importance of the attributes
through either standardized or unstandardized regression coefficients (e.g. Taylor,
1997). The dependent variable is “overall customer satisfaction with the bank”.
Tables II and III show the results of the traditional regression analysis.
The estimated model provides very good measures in terms of both statistical
significance and overall fit, explaining almost 60 percent of the variance in customer
satisfaction. Nonetheless, as argued above, the assumptions of linearity in the
traditional model are inconsistent with both theory and empirical evidence.
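A minimal sketch of this baseline regression, assuming the DataFrame and the list of retained drivers from the screening sketch above: z-scoring both sides yields standardized betas, the usual implicit-importance measure.

```python
# Traditional derived-importance regression: standardized betas as (average) importance.
# Assumes `df` and `drivers` from the screening sketch above.
import statsmodels.api as sm

cols = drivers + ["overall"]
z = (df[cols] - df[cols].mean()) / df[cols].std()                  # z-score attributes and outcome
baseline = sm.OLS(z["overall"], sm.add_constant(z[drivers])).fit()
print(baseline.params.drop("const").sort_values(ascending=False))  # implicit importance ranking
print(round(baseline.rsquared_adj, 2))                             # overall fit
```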
We now test the proposed method. In this context, the relationship between attribute-level performance and overall satisfaction is more complex and requires specific adjustments. For each attribute, the analysis introduces four additional parameters. Consider, for example, the attribute labeled “innovativeness of the firm”.
The first parameter is the coefficient of the dummy variable identifying customers who were dissatisfied with the bank and provided a negative evaluation of its innovativeness. This coefficient is used to correct the intercept term when depicting the relationship between low levels of innovativeness and dissatisfaction.
The second parameter is the coefficient of the interaction term that multiplies the ratings of “innovativeness” by the dummy variable above. By applying this corrective value to the traditional beta of “innovativeness”, we estimate the impact that underperforming in this attribute has on customer dissatisfaction. Graphically, this number represents the slope of the relationship under these circumstances.
The third parameter is the coefficient of the dummy variable identifying customers who were highly satisfied with the bank and gave a positive evaluation of its innovativeness. This coefficient is useful for the purpose of representing the relationship between high levels of innovativeness and high satisfaction. As in the previous case, this estimate is used to correct the intercept term when depicting this relationship.

Table II. Traditional analysis to elicit attribute importance

Parameter                                          Estimated beta   Standard error   t-value   Pr > |t|
Intercept                                              -0.18            0.12          -1.53     0.1271
Ease of contacting the company                          0.06            0.01           4.85    <0.0001
Provision of suitable investments                       0.15            0.01          10.91    <0.0001
Technical quality of the content received               0.10            0.02           7.07    <0.0001
Financial results from subscribed products              0.30            0.01          27.59    <0.0001
Innovativeness of the firm                              0.09            0.02           5.44    <0.0001
Solid company                                           0.14            0.02           8.59    <0.0001
Overall satisfaction with front office employees        0.19            0.01          16.81    <0.0001

Notes: R² = 0.58; adjusted R² = 0.57

Table III. Descriptive statistics for the variables in the model

Parameter                                          Mean    SD
Overall customer satisfaction                       8.0    1.57
Overall satisfaction with front office employees    8.6    1.54
Solid company                                       8.2    1.48
Innovativeness of the firm                          8.2    1.51
Ease of contacting the company                      8.5    1.46
Technical quality of the content received           8.1    1.28
Provision of suitable investments                   7.9    1.51
Financial results from subscribed products          7.6    1.73

Note: Valid cases: n = 5,209
The fourth parameter is the coefficient of the interaction term that multiplies the ratings of “innovativeness” by the previous dummy variable. This parameter provides the final piece of information necessary to derive a more accurate relationship between high levels of innovativeness and high levels of customer satisfaction. As in the previous case, this corrective figure is added to the traditional beta for innovativeness.
The results of the new regression analysis are shown in Table IV[2], and their implications are discussed in the next section.
4.3 Classification of attributes and identification of priorities
The last step in the CR-IPA procedure is to represent and compare the relevant
attributes in order to prioritize customer satisfaction efforts. In contrast to previous
methods, the information in Table IV enables us to estimate the relationship between
the performance of an attribute and its impact on overall customer satisfaction, while:
. Maintaining the advantages of a linear representation, that is, allowing easy interpretation of the coefficients as well as direct comparison of the different attributes’ marginal impacts.
. Capturing potential variations in the attributes’ impact at different levels of customer satisfaction, which is consistent with both theory and empirical evidence. Visually, this is comparable to allowing the traditional straight line of the relationship to break off at two points, that is, above a certain level of satisfaction and below a certain level of dissatisfaction (e.g. Figure 2, discussed below).

In order to illustrate this method more clearly, this section examines and interprets the
results for the attribute labeled “technical quality of the content received from the
bank” (i.e. a monthly newsletter including financial analyses and other relevant
information for clients). We then introduce a final chart to support managerial decision
making.
With a beta coefficient of 0.10 (Table II), the traditional analysis would classify this
attribute as one of the least important in the set. However, by averaging its implicit
importance, traditional analysis actually conceals the exciting nature of this attribute.
Table IV and Figure 2 reveal a different picture. The latter represents the relationship
between this variable and overall customer satisfaction. The solid line is the new
estimate, while the dotted line is the estimate based on the old linear parameter. The lines
are drawn under the assumption that the remaining variables perform at their average
levels, which implies a baseline level of customer satisfaction equal to 8.
The proposed method reveals that the attribute “technical quality of the content
received from this bank” makes a substantial contribution to explaining the top levels
of customer satisfaction. Table IV highlights the exciting nature of this attribute, since
the adjusted beta coefficient estimating the impact of high performance on high
satisfaction is more than twice the adjusted beta coefficient for the impact of low
performance on dissatisfaction (0.21 vs 0.09). The opposite is true for basic factors such
as “being a solid company” (0.18 vs 0.43). Finally, the two coefficients are very similar
in the case of “performance attributes”, as is aptly demonstrated by “overall
satisfaction with front office employees” (0.33 and 0.36).
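The corrections can be combined programmatically, as in the sketch below; it assumes the fitted model from the sketch in Section 3, uses a hypothetical column name, and treats a non-significant reference-case beta as zero, which mirrors the way the adjusted coefficients are read in the examples above.

```python
# Combine the reference-case slope with the two corrective terms for one attribute.
# Assumes `model` from the Section 3 sketch; the column name is hypothetical.
attr = "content_quality"
p, pv = model.params, model.pvalues

ref = p[attr] if pv[attr] < 0.05 else 0.0                         # drop a non-significant reference slope
low_slope, low_shift = ref + p[f"{attr}_xD1"], p[f"{attr}_D1"]    # poor performance / dissatisfaction side
high_slope, high_shift = ref + p[f"{attr}_xD2"], p[f"{attr}_D2"]  # top performance / delight side

print(f"{attr}: low side {low_shift:+.2f} {low_slope:+.2f}*x | "
      f"high side {high_shift:+.2f} {high_slope:+.2f}*x | reference beta {p[attr]:.2f}")
```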
Table IV. Derived measures of attribute importance: new insights

                                                   Impact of low performances           Reference case       Impact of high performances
                                                   on dissatisfaction                                        on high satisfaction
Parameter                                          Beta corr. (p)   Intercept corr. (p)  Beta (one-tail p)    Beta corr. (p)   Intercept corr. (p)
Intercept                                          7.98 (<0.0001)
Ease of contacting the company                     0.04 (0.63)      -0.21 (0.55)         -0.06 (n.s.)         0.10 (0.00)      -0.47 (0.05)
Provision of suitable investments                  0.05 (0.47)      -0.74 (0.03)          0.00 (n.s.)         0.17 (<0.0001)   -1.15 (<0.0001)
Technical quality of the content received          0.09 (0.28)      -0.71 (0.08)         -0.06 (n.s.)         0.21 (<0.0001)   -1.49 (<0.0001)
Financial results from subscribed products         0.24 (<0.0001)   -1.87 (<0.0001)       0.06 (0.00)         0.14 (<0.0001)   -0.80 (0.00)
Innovativeness of the firm                         -0.04 (0.73)     -0.75 (0.12)         -0.05 (n.s.)         0.14 (0.00)      -0.89 (0.00)
Solid company                                      0.43 (<0.0001)   -2.30 (<0.0001)      -0.07 (n.s.)         0.18 (<0.0001)   -1.07 (0.00)
Overall satisfaction with front office employees   0.36 (<0.0001)   -1.95 (<0.0001)      -0.10 (n.s.)         0.33 (<0.0001)   -1.90 (<0.0001)

Notes: R² = 0.68; adjusted R² = 0.68

[Figure 2. A non-linear relationship between attribute performance and overall satisfaction. The chart plots overall satisfaction with the bank (y-axis) against satisfaction with the technical quality of the contents received from the bank (x-axis, 1 to 10). The estimated impact of high performances (8 to 10) on satisfaction with the bank is +0.20 + 0.21x, the estimated impact of low performances (1 to 5) on dissatisfaction is -0.71 + 0.09x, and the traditional beta is 0.10; current performance is 8.0 as perceived on average and 6.8 as perceived by dissatisfied customers. Impacts refer to average satisfaction]

In order to better understand the role of an attribute in a value proposition, it is important to:

. examine the impact of both low and high performance on both dissatisfaction and satisfaction;
. check the current performance as reported by customers, both on average and among dissatisfied customers; and
. compare the information from the two points above across all relevant attributes.

Therefore, it is appropriate to generate two different charts for CR-IPA (Figures 3 and 4), one to help foster customer satisfaction and one to minimize dissatisfaction.
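A minimal sketch of how the two charts can be assembled is shown below; it assumes the fitted model, the DataFrame, and the constants from the earlier sketches, and it plots the adjusted slopes on the y-axis, which is a simplification of the “maximum negative impact” measure used in Figure 4.

```python
# Build the two CR-IPA priority charts (cf. Figures 3 and 4). Assumes `model`, `df`,
# ATTRIBUTES and LOW_CUT from the earlier sketches; the y-axis scaling is illustrative.
import matplotlib.pyplot as plt

def adjusted_slope(attr: str, side: str) -> float:
    ref = model.params[attr] if model.pvalues[attr] < 0.05 else 0.0   # drop n.s. reference slope
    return ref + model.params[f"{attr}_x{'D2' if side == 'high' else 'D1'}"]

def priority_chart(impacts, performance, title, xlabel):
    fig, ax = plt.subplots()
    for attr, impact in impacts.items():
        ax.scatter(performance[attr], impact)
        ax.annotate(attr, (performance[attr], impact))
    # Heuristic reading: upper left-hand corner = top priority, bottom right = possible disinvestment.
    ax.set(xlabel=xlabel, ylabel="Marginal impact", title=title)
    return fig

high_impacts = {a: adjusted_slope(a, "high") for a in ATTRIBUTES}
low_impacts = {a: adjusted_slope(a, "low") for a in ATTRIBUTES}

priority_chart(high_impacts, df[ATTRIBUTES].mean(),
               "CR-IPA on satisfaction", "Current performance")
dissatisfied = df[df["overall"] < LOW_CUT]
priority_chart(low_impacts, dissatisfied[ATTRIBUTES].mean(),
               "CR-IPA on dissatisfaction",
               "Current performance as perceived by dissatisfied customers")
plt.show()
```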
[Figure 3. CR-IPA on satisfaction: marginal impact on satisfaction (y-axis) plotted against current performance (x-axis) for the seven attributes, from “satisfaction with front office employees” (highest impact) down to “ease of contacting the company” (lowest impact); the upper left-hand area marks high priority, the lower right-hand area low priority]

[Figure 4. CR-IPA on dissatisfaction: maximum negative impact on dissatisfaction (y-axis) plotted against current performance as perceived by dissatisfied customers (x-axis); “solid company”, “financial results from subscribed products”, and “satisfaction with front office employees” show the largest negative impacts, while “ease of contacting the company” shows the smallest; the upper left-hand area marks high priority, the lower right-hand area low priority]

These charts are a variation on the diagonal models already presented in various
studies (e.g. Hawes and Rao, 1985; Slack, 1994), but they now include the implications
of the three-factor theory. For IPA applications, diagonal line models prove to be more
reliable than quadrant methods (Bacon, 2003).
In this analysis, we deliberately avoid using any algebraic formula to rank priorities with mathematical precision, as such approaches have a number of theoretical shortcomings (Bacon, 2003) and represent a risky oversimplification for the company’s management. Instead, we propose a heuristic interpretation of the two graphs where the attributes in the upper left-hand corner represent the top priorities (either to increase customer satisfaction or to reduce dissatisfaction, depending on the graph), while those in the bottom right-hand corner should be the last investment areas to consider.
We observe that the “technical quality of the content” now proves to be the second
most important factor in explaining high levels of customer satisfaction. If we then
check the bank’s current performance with regard to this factor (as reported by
customers), we can see that it is not a top-rated element in our value proposition, but
only in the middle range in terms of performance. In this sense, the factor reveals an
interesting potential area of improvement: the current ratings are just close to the
average, although this could become a distinctive element of a value proposition to
delight customers. One might suggest investing in this dimension in order to further
improve customer evaluations and, in turn, overall satisfaction.
The charts also reveal that the resources for such an improvement might come from
reducing the attention currently paid to the attribute “ease of contacting the company”,
which is probably over-performing. Finally, the firm should also examine Figure 4 in
order to evaluate how to reduce the number of dissatisfied customers, since the
“technical quality of the content” would help to delight the customer base, but is not a
main driver of – nor the answer to – customer dissatisfaction.
Another consideration relates to the distinction between customer acquisition and
retention analysis. Since we have no actual data on pre-purchase behavior in this specific
case, we can only rely on common sense in this context. Nonetheless, it seems quite
plausible to assume that when choosing a new banking service provider, people do not
pay particular attention to the kind of newsletters they will receive. They do not gather
information on that topic, nor do they draw comparisons with other competitors on that
basis. In other words, this attribute is likely to be unimportant during the purchasing
process. Accordingly, it should be disregarded in CA-IPA. However, if we used the same
analytical tool for customer retention purposes, we would completely omit what we have
found to be a relevant and exciting factor for customer satisfaction (i.e. retention)
purposes.
Finally, it is important to bear in mind that the insights from the charts must be
integrated and interpreted with due caution and sensitivity by the company’s
management, which should consider, among other things, the marginal costs of
improvements and the different lifetime value of various customers (e.g. Baesens et al.,
2004).

5. Conclusions and limitations


Although IPA has been widely used as a managerial tool, it has significant limitations.
In order to refine the outcomes of IPA, this paper presents a possible variation on this
method.
From a more conceptual point of view, we first assert the need for an explicit
distinction between the use of IPA for the purpose of customer acquisition vs customer
retention. These two cases refer to distinct moments in the customer relationship life
cycle and thus require separate analyses. Specifically, for the purposes of IPA, the purchasing decision and the post-purchase evaluation stages diverge because:
. the list of important attributes may be different (cf. the aforementioned example of a car);
. the cognitive processes involved in evaluating expected performance (i.e. ex-ante evaluation) vs evaluating perceived performance (i.e. ex-post evaluation) are different;
. the terms of comparison are different, as they refer to the competitors in the former case and to the customer’s expectations in the latter case; and
. the firm’s marketing goals are different, as the first case involves a purchasing choice and the latter refers to customer satisfaction (where a trade-off may exist).

We have thus proposed a specific method for customer retention IPA. The approach
is illustrated using a dataset consisting of approximately 5,000 customers of a leading
Italian bank. On the basis of dummy variables and interaction effects, our method
estimates the impact of poor attribute evaluations on low levels of satisfaction with
the bank as well as the impact of excellent evaluations on high levels of satisfaction.
The next step is to compare the marginal impact of each attribute on customer
satisfaction (/dissatisfaction) and to combine this information with current
performance levels. This step yields two charts, one designed to maximize
customer satisfaction and one to minimize customer dissatisfaction. In those charts, it
is possible to imagine iso-priority lines as shown in Figure 5. The management’s
analysis should be complemented by strategic thinking and the following two
considerations:

[Figure 5. CR-IPA to evaluate customer satisfaction priorities: marginal impact on satisfaction/dissatisfaction (y-axis) plotted against the current performance of each attribute (x-axis), with diagonal iso-priority lines running from high priority (upper left) to low priority (lower right). The attribute labels, translated from the Italian original, correspond to satisfaction with front office employees, financial results from subscribed products, technical quality of the content, solid company, provision of suitable investments, innovativeness of the firm, and ease of contacting the company]
(1) the marginal costs of improving the attributes’ performance; and
(2) the different potential lifetime values of different customers (with different priorities).

Applied to a homogeneous subset of customers, CR-IPA serves to support decisions on the attributes which should be prioritized in order to increase satisfaction (or to reduce dissatisfaction). The attributes to prioritize are those which exhibit a higher impact on
satisfaction (/dissatisfaction) and lower customer evaluations. In contrast, the firm
could consider freeing up resources with regard to those attributes which have a low
impact on satisfaction (/dissatisfaction) and top evaluations. Once again, the
iso-priority lines are merely a heuristic – yet straightforward – means of allocating
financial resources for customer retention, although customer satisfaction and
retention are not perfectly synonymous. Attribute-level performance may affect
satisfaction and repurchase intentions differently (Ostrom and Iacobucci, 1995; Mittal
et al., 1998). Nonetheless, customer satisfaction – as a comprehensive evaluation of a
consumption experience – remains a prominent predictor of future behavior (Fornell
et al., 1996). The marketing literature provides extensive evidence that overall
satisfaction is a direct antecedent of loyalty as well as reduced price sensitivity,
increased cross-buying, and positive word of mouth (e.g. Bearden and Teel, 1983;
LaBarbera and Mazursky, 1983; Selnes, 1993; Dick and Basu, 1994).
In this way, customer satisfaction enhances the profitability of the firm (Anderson
et al., 1994; Hallowell, 1996; Johnson et al., 1996; Eklof et al., 1999; Zeithaml, 2000) and
serves as a point of reference for fine-tuning the company’s value development,
production, and delivery processes. As a result, CR-IPA represents a valuable link
between marketing and other business functions designed to improve overall business
performance. CR-IPA shifts the focus of attention from customer satisfaction to specific
product attributes, thus providing insights for R&D, marketing, and production
functions as well. This method decomposes the overall customer experience and
translates it into the operational language of specific business processes to be
strengthened, weakened, or rethought in order to maximize the returns from scarce
resources.
Finally, it is important to note a number of technical issues and limitations
associated with this approach and the empirical test presented in this paper:
. The method presented in this paper uses derived measures of attribute importance based on regression coefficients. This strategy is highly effective as long as severe multicollinearity does not affect the attributes. In the example presented above, the Pearson correlation coefficients and variance inflation analysis confirm that no strong relationship exists due to mathematical processes.
. Before analysis, the firm’s customer base must be partitioned into meaningful, homogeneous clusters. This objective requires either cluster analysis or latent class analysis.
. While the procedure in this study is applicable to any service business, the paper does not claim any external validity for the numerical results of the empirical application. Our objective was merely to demonstrate a revised approach to IPA.
Notes
1. For the purpose of operationalizing the dependent variable in a CR-IPA, customer satisfaction can be used as a typical proxy measure of customer retention capabilities; however, other measures would also be logically consistent, such as attitudinal loyalty or the net promoter score indicated by the customer. We used “overall customer satisfaction” only because it was the variable the company in our example decided to enhance. Mutatis mutandis, the same considerations apply to other proxy measures for customer retention.
2. It is not the aim of this paper to discuss the specific results and their implications for the banking industry; those findings are the focus of a separate work. Our goal here is to illustrate the proposed method using a real-world case.

References
Allenby, G.M., Arora, N. and Ginter, J.L. (1998), “On the heterogeneity of demand”, Journal of
Marketing Research, Vol. 35 No. 3, pp. 384-9.
Anderson, E.W. and Mittal, V. (2000), “Strengthening the satisfaction-profit chain”, Journal of
Service Research, Vol. 3 No. 2, pp. 107-20.


Anderson, E.W., Fornell, C. and Lehmann, D.R. (1994), “Customer satisfaction, market share &
profitability: findings from Sweden”, Journal of Marketing, Vol. 58 No. 3, pp. 53-66.
Anton, J. (1996), Customer Relationship Management: Making Hard Decisions with Soft Numbers,
Prentice-Hall, Upper Saddle River, NJ.
Backhaus, K. and Bauer, M. (2008), “The impact of critical incidents on customer satisfaction in
business-to-business relationship”, Journal of Business to Business Marketing, Vol. 8 No. 1,
pp. 25-44.
Bacon, D.R. (2003), “A comparison of approaches to importance-performance analysis”,
International Journal of Market Research, Vol. 45 No. 1, pp. 55-71.
Baesens, B., Verstraeten, G., Van den Poel, D., Egmont-Petersen, M., Van Kenhove, P. and
Vanthienen, J. (2004), “Bayesian network classifiers for identifying the slope of the
customer lifecycle of long-life customers”, European Journal of Operational Research,
Vol. 156 No. 2, pp. 508-23.
Beach, L.R. and Burns, L.R. (1995), “The service quality improvement strategy: identifying
priorities for change”, International Journal of Service Industry Management, Vol. 6 No. 5,
pp. 5-15.
Bearden, W.O. and Teel, J.E. (1983), “Selected determinants of consumer satisfaction and
complaint report”, Journal of Marketing Research, Vol. 20 No. 1, pp. 21-8.
Berger, C., Blauth, R. and Boger, D. (1993), “Kano’s methods for understanding customer defined
quality”, The Journal of the Japanese Society for Quality Control, Vol. 2 No. 4, pp. 3-35.
Bitner, M.J. (1990), “Evaluating service encounters: the effects of physical surroundings and
employee responses”, Journal of Marketing, Vol. 54, April, pp. 69-82.
Brandt, R.D. (1987), “A procedure for identifying value-enhancing service components using
customer satisfaction survey data”, in Surprenant, C.F. (Ed.), Add Value to Your Service,
American Marketing Association, Chicago, IL, pp. 61-5.
Brandt, R.D. (1988), “How service marketers can identify value-enhancing service elements”, The
Journal of Services Marketing, Vol. 2 No. 3, pp. 35-41.
Brandt, R.D. (1998), “An ‘outside-in’ approach to defining performance targets for measures of
customer service and satisfaction”, in Edosomwan, J.A. (Ed.), Customer Satisfaction
Management Frontiers II, Quality University Press, Fairfax, VA, pp. 7.1-7.11.
Busacca, B. and Padula, G. (2005), “Understanding the relationship between attribute performance and overall satisfaction: theory, measurement and implications”, Marketing Intelligence & Planning, Vol. 23 No. 6, pp. 543-61.
Cadotte, E.R. and Turgeon, N. (1988), “Key factors in guest satisfaction”, The Cornell Hotel &
Restaurant Administration Quarterly, Vol. 28 No. 4, pp. 45-51.
Chang, L. (1994), “A psychometric evaluation of 4-point and 6-point Likert-type scales in relation
to reliability and validity”, Applied Psychological Measurement, Vol. 18, September,
pp. 205-15.
Conklin, M., Powaga, K. and Lipovetsky, S. (2004), “Customer satisfaction analysis: identification
of key drivers”, European Journal of Operational Research, Vol. 154 No. 3, pp. 819-27.
Cooley, W.W. and Lohnes, P.R. (1971), Multivariate Data Analysis, Wiley, New York, NY.
Darlington, R.B. (1990), Regression and Linear Models, McGraw-Hill, New York, NY.
Deng, W., Kuo, Y. and Chen, W. (2008), “Revised importance-performance analysis: three factor
theory and benchmarking”, The Service Industries Journal, Vol. 28 No. 1, pp. 37-51.
Dick, A.S. and Basu, K. (1994), “Customer loyalty: toward an integrated conceptual framework”,
Journal of the Academy of Marketing Science, Vol. 22 No. 2, pp. 99-113.
Dolinsky, A.L. (1991), “Considering the competition in strategy development: an extension of
importance-performance analysis”, Journal of Health Care Marketing, Vol. 11 No. 1,
pp. 31-6.
Dolinsky, A.L. and Caputo, R.K. (1991), “Adding a competitive dimension to
importance-performance analysis: an application to traditional health care systems”,
Health Marketing Quarterly, Vol. 8 Nos 3/4, pp. 61-79.
Eklof, J.A., Hackl, P. and Westlund, A. (1999), “On measuring interactions between customer
satisfaction and financial results”, Total Quality Management, Vol. 10 Nos 4/5, pp. 514-22.
Fornell, C., Johnson, M.D., Anderson, E.W., Jaesung, C. and Bryant, B.E. (1996), “The American
customer satisfaction index: nature, purpose, and findings”, Journal of Marketing, Vol. 60
No. 4, pp. 7-18.
Gale, B.T. (1994), Managing Customer Value, The Free Press, New York, NY.
Gustafsson, A. and Johnson, M.D. (2004), “Determining attribute importance in a service
satisfaction model”, Journal of Service Research, Vol. 7 No. 2, pp. 124-41.
Hallowell, R. (1996), “The relationship of customer satisfaction, customer loyalty, and
profitability: an empirical study”, International Journal of Service Industry Management,
Vol. 7 No. 4, pp. 27-42.
Hansen, E. and Bush, R.J. (1999), “Understanding customer quality requirements: model and
application”, Industrial Marketing Management, Vol. 28, pp. 119-30.
Hawes, J.M. and Rao, C.P. (1985), “Using importance-performance analysis to develop health care
marketing strategies”, Journal of Health Care Marketing, Vol. 5 No. 4, pp. 19-25.
Johnson, M.D., Nader, G. and Fornell, C. (1996), “Expectations, perceived performance, and
customer satisfaction for a complex service: the case of bank loans”, Journal of Economic
Psychology, Vol. 17 No. 2, pp. 163-82.
Johnston, R. (1995), “The determinants of service quality: satisfiers and dissatisfiers”,
International Journal of Service Industry Management, Vol. 6 No. 5, pp. 53-71.
Kamakura, W.A. and Russell, G.J. (1989), “A probabilistic choice model for market segmentation
and elasticity structure”, Journal of Marketing Research, Vol. 26 No. 4, pp. 379-90.
Kano, N. (1984), “Attractive quality and must-be quality”, Hinshitsu: The Journal of the Japanese
Society for Quality Control, Vol. 14, April, pp. 39-48.
LaBarbera, P.A. and Mazursky, D. (1983), “A longitudinal assessment of consumer satisfaction/dissatisfaction: the dynamic aspect of the cognitive process”, Journal of Marketing Research, Vol. 20 No. 4, pp. 393-404.
Lindeman, R.H., Merenda, P.F. and Gold, R.Z. (1980), Introduction to Bivariate and Multivariate
Analysis, Scott Foresman, Glenview, IL.
Lowenstein, M.W. (1995), Customer Retention: An Integrated Process for Keeping Your Best Customers, ASQC Quality Press, Milwaukee, WI.
Maddox, R.N. (1981), “Two-factor theory and consumer satisfaction: replication and extension”,
Journal of Consumer Research, Vol. 8 No. 1, pp. 97-102.
Martilla, J.A. and James, J.C. (1977), “Importance-performance analysis”, Journal of Marketing, Vol. 41 No. 1, pp. 77-9.
Matzler, K. and Sauerwein, E. (2002), “The factor structure of customer satisfaction: an empirical
test of the importance grid and the penalty-reward-contrast analysis”, International
Journal of Service Industry Management, Vol. 13 No. 4, pp. 314-32.
Matzler, K., Hinterhuber, H.H., Bailom, F. and Sauerwein, E. (1996), “How to delight your
customers”, Journal of Product & Brand Management, Vol. 5 No. 2, pp. 6-18.
Matzler, K., Sauerwein, E. and Heischmidt, K.A. (2003), “Importance-performance analysis revisited: the role of the factor structure of customer satisfaction”, The Service Industries Journal, Vol. 23 No. 2, pp. 112-29.
Matzler, K., Bailom, F., Hinterhuber, H.H., Renzl, B. and Pichler, J. (2004), “The asymmetric
relationship between attribute-level performance and overall customer satisfaction: a
reconsideration of the importance-performance analysis”, Industrial Marketing
Management, Vol. 33 No. 4, pp. 271-7.
Matzler, K., Renzl, B. and Rothenberger, S. (2006), “Measuring the relative importance of service
dimensions in the formation of price satisfaction and service satisfaction”, Scandinavian
Journal of Hospitality and Tourism, Vol. 6 No. 3, pp. 179-96.
Mazur, G. (1998), “QFD for service industries”, in ReVell, J.V., Moran, J.W. and Cox, C.A. (Eds),
The QFD Handbook, Wiley, New York, NY, pp. 139-62.
Mittal, V. and Katrichis, J.K. (2000), “Distinctions between new and loyal customers”, Marketing
Research, Vol. 12 No. 1, pp. 26-32.
Mittal, V., Ross, W.T. and Baldasare, P.M. (1998), “The asymmetric impact of negative and
positive attribute-level performance on overall satisfaction and repurchase intentions”,
Journal of Marketing, Vol. 62 No. 1, pp. 33-47.
Mizuno, M., Saji, A., Sumita, U. and Suzuki, H. (2008), “Optimal threshold analysis of
segmentation methods for identifying target customers”, European Journal of Operational
Research, Vol. 186 No. 1, pp. 358-79.
Myers, J. (2001), Measuring Customer Satisfaction: Hot Buttons and Other Measurement Issues,
American Marketing Association, Chicago, IL.
Neter, J., Wasserman, W. and Kutner, M.H. (1985), Applied Linear Statistical Models: Regression,
Analysis of Variance, and Experimental Designs, Irwin, Homewood, IL.
Oliver, R.L. (1980), “A cognitive model of the antecedents and consequences of satisfaction decisions”, Journal of Marketing Research, Vol. 17 No. 4, pp. 460-9.
Oliver, R.L. (1997), Customer Satisfaction: A Behavioral Perspective on the Consumer,
McGraw-Hill, New York, NY.
Ostrom, A. and Iacobucci, D. (1995), “Customer trade-offs and the evaluation of services”, Journal
of Marketing, Vol. 59 No. 1, pp. 17-28.
Selnes, F. (1993), “An examination of the effect of product performance on brand reputation, satisfaction, and loyalty”, European Journal of Marketing, Vol. 27 No. 9, pp. 19-35.
Silvestro, R. and Johnston, R. (1990), “The determinants of service quality: hygiene and
enhancing factors”, Quality Services II, Selected Papers, Warwick Business School,
Coventry, pp. 193-210.
Slack, N. (1994), “The importance-performance matrix as a determinant of improvement priority”, International Journal of Operations & Production Management, Vol. 14 No. 5, pp. 59-75.
Stauss, B. and Hentschel, B. (1992), “Attribute-based versus incident-based measurement
of service quality: results of an empirical study in the German car service industry”, in
Kunst, P. and Lemmink, J. (Eds), Quality Management in Services, Van Gorcum,
Assen/Maastricht, NL, pp. 59-78.
Stevens, J. (1986), Applied Multivariate Statistics for Social Sciences, LEA, Hillsdale, NJ.
Studenmund, A.H. (2005), Using Econometrics: A Practical Guide, 5th ed., Addison-Wesley,
Reading, MA.
Swan, J.E. and Combs, L.J. (1976), “Product performance and consumer satisfaction: a new concept”, Journal of Marketing, Vol. 40 No. 2, pp. 25-33.
Taylor, S.A. (1997), “Assessing regression-based importance weights for quality perceptions and
satisfaction judgments in the presence of higher order and/or interaction effects”, Journal
of Retailing, Vol. 73 No. 1, pp. 135-59.
Vavra, T.G. (1997), Improving Your Measurement of Customer Satisfaction, ASQ Quality Press,
Milwaukee, WI.
Wittink, D.R. and Bayer, L.R. (1994), “The measurement imperative”, Marketing Research, Vol. 6
No. 4, pp. 14-22.
Yang, C. (2005), “The refined Kano’s model and its application”, Total Quality Management,
Vol. 16 No. 10, pp. 1127-37.
Zeithaml, V.A. (2000), “Service quality, profitability, and the economic worth of customers: what
we know & what we need to learn”, Journal of the Academy of Marketing Science, Vol. 28
No. 1, pp. 67-85.

Further reading
Matzler, K., Fuchs, M. and Schubert, A. (2004), “Employee satisfaction: does Kano’s model
apply?”, Total Quality Management, Vol. 15 Nos 9/10, pp. 1179-98.
O’Neill, M.A. and Palmer, A. (2004), “Importance-performance analysis: a useful tool for directing
continuous quality improvement in higher education”, Quality Assurance in Education,
Vol. 12 No. 1, pp. 39-52.
Sampson, S.E. and Showalter, M.J. (1999), “The performance-importance response function:
observations and implications”, The Service Industries Journal, Vol. 19 No. 3, pp. 1-25.
Tikkanen, H., Alajoutsijarvi, K. and Tahtinen, J. (2000), “The concept of satisfaction in industrial
markets: a contextual perspective and a case study from the software industry”, Industrial
Marketing Management, Vol. 29 No. 4, pp. 373-86.
Yavas, U. and Shemwell, D.J. (1997), “Analyzing a bank’s competitive position & appropriate
strategy”, Journal of Retail Banking Services, Vol. 19 No. 4, pp. 43-51.
Zhang, H.Q. and Chow, I. (2004), “Application of importance-performance model in tour guides’
performance: evidence from mainland Chinese outbound visitors in Hong Kong”, Tourism
Management, Vol. 25 No. 1, pp. 81-91.
About the authors
Alessandro Arbore is Professor of Marketing at SDA Bocconi School of Management and Assistant Professor of Management and Business Administration at Bocconi University. He is a Researcher at the Customer and Service Science Lab, Bocconi University. Alessandro Arbore is the corresponding author and can be contacted at: alessandro.arbore@sdabocconi.it
Bruno Busacca is Full Professor of Management and Business Administration at Bocconi University and is Director of the Masters Division at SDA Bocconi School of Management.