JEFF W. JOHNSON
Personnel Decisions Research Institutes
JAMES M. LEBRETON
Wayne State University
The search for a meaningful index of the relative importance of predictors in multi-
ple regression has been going on for years. This type of index is often desired when
the explanatory aspects of regression analysis are of interest. The authors define
relative importance as the proportionate contribution each predictor makes to R2,
considering both the unique contribution of each predictor by itself and its incre-
mental contribution when combined with the other predictors. The purposes of
this article are to introduce the concept of relative importance to an audience of
researchers in organizational behavior and industrial/organizational psychology
and to update previous reviews of relative importance indices. To this end, the au-
thors briefly review the history of research on predictor importance in multiple re-
gression and evaluate alternative measures of relative importance. Dominance
analysis and relative weights appear to be the most successful measures of relative
importance currently available. The authors conclude by discussing how impor-
tance indices can be used in organizational research.
Multiple regression analysis has two distinct applications: prediction and explanation
(Courville & Thompson, 2001). When multiple regression is used for a purely predic-
tive purpose, a regression equation is derived within a sample to predict scores on a cri-
terion variable from scores on a set of predictor variables. This equation can be applied
to predictor scores within a similar sample to make predictions of the unknown crite-
rion scores in that sample. The elements of the equation are regression coefficients,
which indicate the amount by which the criterion score would be expected to increase
as the result of a unit increase in a given predictor score, with no change in any of the
other predictor scores. The extent to which the criterion can be predicted by the predic-
Authors’ Note: Correspondence concerning this article should be addressed to Jeff W. Johnson, Per-
sonnel Decisions Research Institutes, 43 Main Street SE, Suite 405, Minneapolis, MN 55414; e-mail: jeff.
johnson@pdri.com.
Organizational Research Methods, Vol. 7 No. 3, July 2004 238-257
DOI: 10.1177/1094428104266510
© 2004 Sage Publications
238
Johnson, LeBreton / HISTORY AND USE OF RELATIVE IMPORTANCE 239
tor variables (indicated by R2) is of much greater interest than is the relative magnitude
of the regression coefficients.
The other use of multiple regression is for explanatory or theory-testing purposes.
In this case, we are interested in the extent to which each variable contributes to the
prediction of the criterion. For example, we may have a theory that suggests that one
variable is relatively more important than another. Interpretation is the primary con-
cern, such that substantive conclusions can be drawn regarding one predictor with
respect to another. Although there are many possible definitions of importance (Bring,
1994; Kruskal & Majors, 1989), this is what is typically meant by the relative impor-
tance of predictors in multiple regression.
Achen (1982) discussed three different meanings of variable importance. Theoreti-
cal importance refers to the change in the criterion based on a given change in the pre-
dictor variable, which can be measured using the regression coefficient. Level impor-
tance refers to the increase in the mean criterion score that is contributed by the
predictor, which corresponds to the product of a variable’s mean and its unstandard-
ized regression coefficient. This is a popular measure in economics (Kruskal &
Majors, 1989). Finally, dispersion importance refers to the amount of the criterion
variance explained by the regression equation that is attributable to each predictor
variable. This is the interpretation of importance that most often corresponds to mea-
sures of importance in the behavioral sciences, when the explanatory aspects of
regression analysis are of interest (Thomas & Decady, 1999).
To draw conclusions about the relative importance of predictors, researchers often
examine the regression coefficients or the zero-order correlations with the criterion.
When predictors are uncorrelated, zero-order correlations and standardized regres-
sion coefficients are equivalent. The squares of these indices sum to R2, so the relative
importance of each variable can be expressed as the proportion of predictable variance
for which it accounts. When predictor variables are correlated, however, these indices
have long been considered inadequate (Budescu, 1993; Green & Tull, 1975; Hoffman,
1960). In the presence of multicollinearity, squared correlations and squared standard-
ized regression coefficients are no longer equivalent, do not sum to R2, and take on very
different meanings. Correlations represent the unique contribution of each predictor
by itself, whereas regression coefficients represent the incremental contribution of
each predictor when combined with all remaining predictors.
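For readers who want to see the arithmetic, the following minimal sketch (the correlation values are our own illustration, not data from any study cited here) shows that with correlated predictors neither the squared correlations nor the squared standardized coefficients sum to R2:

```python
import numpy as np

# Illustrative values: two predictors intercorrelated at .50.
Rxx = np.array([[1.0, 0.5],
                [0.5, 1.0]])      # predictor correlation matrix
r_y = np.array([0.6, 0.4])        # zero-order predictor-criterion correlations

beta = np.linalg.solve(Rxx, r_y)  # standardized regression coefficients
R2 = r_y @ beta                   # model R-squared

print(r_y**2, r_y @ r_y)          # squared correlations sum to 0.52
print(beta**2, beta @ beta)       # squared betas sum to about 0.30
print(R2)                         # but R-squared is about 0.37
```

With uncorrelated predictors (off-diagonal of Rxx set to zero) the three quantities coincide; the discrepancy here is entirely due to the .50 intercorrelation.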
To illustrate the concept of relative importance and the inadequacy of these indices
for reflecting it, consider an example from a situation in which a relative importance
index is frequently of interest. Imagine a customer satisfaction survey given to bank
customers, and the researcher is interested in determining how each specific aspect of
bank satisfaction contributes to customers’ overall satisfaction judgments. In other
words, what is the relative importance bank customers place on teller service, loan
officer service, phone representative service, the convenience of the hours, and the
interest rates offered in determining their overall satisfaction with the bank? Regres-
sion coefficients are inadequate because customers do not consider the incremental
amount of satisfaction they derive from each bank aspect while holding the others con-
stant. Zero-order correlations are also inadequate because customers do not consider
each bank aspect independent of the others. Rather, they consider all the aspects that
are important to them simultaneously and implicitly weight each aspect relative to the
others in determining their overall satisfaction.
Because neither index alone tells the full story of a predictor’s importance,
Courville and Thompson (2001) recommended that both regression coefficients and
correlations (or the equivalent structure coefficients) be examined when interpreting
relative importance (see also Thompson & Borrello, 1985). Examining two different
indices to try to determine an ordering of relative importance is highly subjective,
which is why the search for a single meaningful index of relative importance has been
going on for years. Although a number of different definitions have been offered over
the years, we offer the following definition of relative importance:
the proportionate contribution each predictor makes to R2, considering both its direct effect (i.e., its correlation with the criterion) and its effect when combined with the other variables in the regression equation.
Single-Analysis Methods
model R2 when the predictors are uncorrelated, but Darlington (1990) argued that importance is proportional to rxy, not rxy2. In fact, whether rxy or rxy2 more appropriately represents importance depends on how importance is defined. If importance is defined as the amount by which a unit increase in the predictor increases the criterion score,
low correlations with the other two predictors, the third predictor will have the largest
usefulness simply because of its lack of association with the other predictors.
Bring (1996) noted that variables that are uncorrelated with the criterion but add to
the predictive value of the model (i.e., suppressor variables; Cohen & Cohen, 1983)
have no importance according to the product measure. He described this result as
counterintuitive. Thomas et al. (1998) argued that suppressor variables should be
treated differently from nonsuppressors and that the contribution of the suppressors
should be assessed separately by measuring their contribution to R2. Treating sup-
pressors and nonsuppressors separately, however, ignores the fact that the two types of
variables are complexly intertwined. The importance of the nonsuppressors depends
on the presence of the suppressors in the model, so treating them separately may also
be considered counterintuitive.
By the same token, variables that have meaningful correlations with the criterion
but do not add to the predictive value of the model also have no importance under the
product measure. This is counter to our definition of relative importance, which sug-
gests that a measure of importance should consider both the effect a predictor has in
isolation from the other predictors (i.e., the predictor-criterion correlation) and in con-
junction with the other predictors (i.e., the beta weight). The product measure essen-
tially ignores the magnitude of one of its components if the magnitude of the other
component is very low.
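The suppressor case is easy to demonstrate numerically (the values below are illustrative, chosen by us):

```python
import numpy as np

# x2 is a classical suppressor: uncorrelated with the criterion,
# but correlated .50 with x1 (illustrative values, not from a cited study).
Rxx = np.array([[1.0, 0.5],
                [0.5, 1.0]])
r_y = np.array([0.5, 0.0])

beta = np.linalg.solve(Rxx, r_y)   # about [ 0.667, -0.333]
R2 = r_y @ beta                    # about 0.333, versus 0.25 without x2
product = beta * r_y               # [0.333, 0.0]: x2 gets zero importance
```

The suppressor raises R2 from .25 to .33, yet its product-measure importance is exactly zero; symmetrically, a predictor with a sizable criterion correlation but a near-zero beta weight would also receive nearly zero importance.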
Another aspect of the product measure that limits its utility considerably is the fact
that negative importance values are possible and not unusual. Pratt (1987) stated that
both βx and rxy must be of the same sign for this measure to be a valid measure of impor-
tance. Thomas et al. (1998), however, argued that negative importance occurs only
under conditions of high multicollinearity. They note that negative importance “of
‘large’ magnitude can occur only if the variance inflation factor (a standard measure of
multicollinearity) for the j’th variable is large” (p. 264). The variance inflation factor
(VIF) is given by VIFj = 1/(1 – R2( j)), where R2( j) is the squared multiple correlation
from the regression of variable xj on the remaining x’s. Removing the variable(s) with
the large VIF would eliminate redundancy in the model and leave only positive impor-
tance values. It is relatively easy, however, to identify situations in which a variable has
negative importance even when multicollinearity is low. For example, consider a sce-
nario in which five predictors are equally intercorrelated at .30 and have the following
criterion correlations:
ryx1 = .40
ryx2 = .50
ryx3 = .20
ryx4 = .40
ryx5 = .50.
In this case, the beta weight for x3 is –.104, so βx3ryx3 = –.021. Although the extent
to which this could be considered of “large” magnitude is debatable, it is a result that is
very difficult to interpret. Because the predictors are all equally intercorrelated, each
predictor has the same VIF. There is no a priori reason for excluding x3 based on multi-
collinearity, so the only reason for excluding it is because the importance weight does
not make sense. An appropriate measure of predictor importance should be able to
provide an interpretable importance weight for all variables in the model and should be
able to do this regardless of the extent to which the variables are intercorrelated. Pratt
(1987) showed that the product measure is not arbitrary, but we believe it still leaves
much to be desired as a measure of predictor importance.
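This scenario is easy to verify directly; the short computation below (our own, reproducing the example in the text) recovers the beta weights, product measures, and VIFs:

```python
import numpy as np

# Five predictors equally intercorrelated at .30 (the example in the text).
Rxx = np.full((5, 5), 0.30) + 0.70 * np.eye(5)
r_y = np.array([0.40, 0.50, 0.20, 0.40, 0.50])

beta = np.linalg.solve(Rxx, r_y)     # beta for x3 is about -0.104
product = beta * r_y                 # product for x3 is about -0.021

# For standardized predictors, VIF_j is the j-th diagonal element of Rxx^-1.
vif = np.diag(np.linalg.inv(Rxx))    # all five VIFs are identical (about 1.23)
```

Because the products sum to R2 (about .449 here), the negative weight for x3 must be offset by inflated weights elsewhere, which is what makes the result hard to interpret.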
Multiple-Analysis Methods
to R2 across all possible orderings. This index has several desirable properties, including (a) the sum of the average squared semipartial correlations across all predictors is equal to R2, (b) any predictor that is positively related to the criterion will receive a positive importance weight, and (c) the definition of importance is intuitively meaningful.
Lindeman et al.’s (1980) average increase in R2. Theil (1987) and Theil and Chung
(1988) built on Kruskal’s (1987) approach by suggesting using a function from statis-
tical information theory to transform the average partial correlations to average bits of
information provided by each variable. This does allow an additive decomposition of
the total information, but it does little to add to the understanding of the measure for the
typical user.
Dominance analysis. The approach taken by Budescu (1993) differs from previous
approaches in that it is not assumed that all variables can be ordered in terms of impor-
tance. His dominance analysis is a method of determining whether predictor variables
can be ranked. In other words, Budescu contended that there may well be situations in
which it is impossible to determine an ordering, so a dominance analysis should be
undertaken prior to any quantitative analysis. For any two predictor variables, xi and xj,
let xh stand for any subset of the remaining p – 2 predictors in the set. Variable xi domi-
nates variable xj if, and only if,
the squared multiple correlation based on xi and xh exceeds that based on xj and xh, for all possible choices of xh. This can also be stated as xi dominates xj if adding xi to
each of the possible subset models always results in a greater increase in R2 than would
be obtained by adding xj. If the predictive ability of one variable does not exceed that of
another in all subset regressions, a dominance relationship cannot be established and
the variables cannot be rank ordered meaningfully (Budescu, 1993).
The idea behind dominance analysis is attractive, but Budescu’s (1993) definition
of importance is very strict. Consequently, it is usually not possible to order all predic-
tor variables when there are more than a few predictors in the model. Recently, how-
ever, this strict definition of dominance has been relaxed somewhat (Azen & Budescu,
2003). Azen and Budescu (2003) defined three levels of dominance: (a) complete, (b)
conditional, and (c) general. Complete dominance corresponds to the original defini-
tion of dominance. Conditional dominance occurs when the average additional contri-
bution within each model consisting of the same number of variables is greater for one
predictor than for another. General dominance occurs when the average additional
contribution across all models is greater for one predictor than for another.
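A minimal implementation of the general dominance weights (our own sketch working from correlation matrices, not the authors' software) enumerates every submodel, averages each predictor's incremental R2 within each submodel size, and then averages across sizes:

```python
import numpy as np
from itertools import combinations

def r2(Rxx, r_y, subset):
    """R-squared of the submodel using the predictors in `subset`."""
    if not subset:
        return 0.0
    idx = list(subset)
    return r_y[idx] @ np.linalg.solve(Rxx[np.ix_(idx, idx)], r_y[idx])

def general_dominance(Rxx, r_y):
    """General dominance weights: each predictor's incremental R2,
    averaged first within each submodel size and then across sizes."""
    p = len(r_y)
    weights = np.zeros(p)
    for j in range(p):
        others = [i for i in range(p) if i != j]
        size_means = []
        for k in range(p):  # size of the subset x_h, from 0 to p - 1
            incs = [r2(Rxx, r_y, s + (j,)) - r2(Rxx, r_y, s)
                    for s in combinations(others, k)]
            size_means.append(np.mean(incs))
        weights[j] = np.mean(size_means)
    return weights

# Example: the five equally intercorrelated predictors from the text.
Rxx = np.full((5, 5), 0.30) + 0.70 * np.eye(5)
r_y = np.array([0.40, 0.50, 0.20, 0.40, 0.50])
C = general_dominance(Rxx, r_y)   # the weights sum to the full-model R2
```

Each increment is a squared semipartial correlation, so the weights are nonnegative by construction; note, however, that the number of submodels grows as 2 to the power p, which is the computational burden discussed below.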
The general dominance measure is the same as the quantitative measure Budescu
(1993) suggested be computed if all p(p – 1)/2 pairs of predictors can be ordered (i.e., the average increase in R2 associated with a predictor across all possible submodels: Cxj). The Cxj’s sum to the model R2, so the relative importance of each predictor can be expressed as the proportion of predictable criterion variance accounted for by that predictor.
Criticality. Azen, Budescu, and Reiser (2001) proposed a new approach to compar-
ing predictors in multiple regression, which they termed predictor criticality. Tradi-
tional measures of importance assume that the given model is the best-fitting model,
whereas criticality analysis does not depend on the choice of a particular model. A pre-
dictor’s criticality is defined as the probability that it is included in the best-fitting
model given an initial set of predictors. The first step in determining predictor critical-
ity is to bootstrap (i.e., resample with replacement; Efron, 1979) a large number of
samples from the original data set. Within each bootstrap sample, evaluate all (2^p – 1) submodels according to some criterion (e.g., adjusted R2). Each predictor’s criticality
is determined as the proportion of the time that the predictor was included in the best-
fitting model across all bootstrap samples. Criticality analysis has the advantage of not
requiring the assumption of a single best-fitting model, and it has a clear definition.
Research comparing predictor criticality to various measures of predictor importance
should be conducted to gain an understanding of how the two concepts are related.
Criticality analysis requires even more computational effort than dominance analysis
does, however, because 2^p – 1 submodels must be computed within each of 100 or
more bootstrap samples. This severely limits the applicability of criticality analysis to
situations in which only a few predictors are evaluated.
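The procedure can be sketched as follows (a rough illustration of our own, with simulated data and our own function names; it is not the code of Azen et al., 2001):

```python
import numpy as np
from itertools import combinations

def adjusted_r2(X, y, idx):
    """Adjusted R2 of the submodel using predictor columns in `idx`."""
    n, k = len(y), len(idx)
    Xs = np.column_stack([np.ones(n), X[:, idx]])
    resid = y - Xs @ np.linalg.lstsq(Xs, y, rcond=None)[0]
    r2 = 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

def criticality(X, y, n_boot=200, seed=0):
    """Proportion of bootstrap samples in which each predictor appears
    in the best submodel by adjusted R2."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    subsets = [s for k in range(1, p + 1) for s in combinations(range(p), k)]
    counts = np.zeros(p)
    for _ in range(n_boot):
        b = rng.integers(0, n, n)            # resample rows with replacement
        Xb, yb = X[b], y[b]
        best = max(subsets, key=lambda s: adjusted_r2(Xb, yb, list(s)))
        counts[list(best)] += 1
    return counts / n_boot
```

Even this toy version evaluates 2^p – 1 submodels per bootstrap sample, which illustrates why the method becomes impractical as p grows.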
as possible to the original set of predictors but are uncorrelated with each other. The
criterion can then be regressed on the new orthogonal variables, and the squared stan-
dardized regression coefficients approximate the relative importance of the original
predictors. This approach has a certain appeal because relative importance is unam-
biguous when variables are uncorrelated, and the orthogonal variables can be very
highly related to the original predictors. The obvious problem with this approach is
that the orthogonal variables are only approximations of the original predictors and
may not be close representations if two or more original predictors are highly
correlated.
Green, Carroll, and DeSarbo’s (1978) δ2. Green et al. (1978) realized the limita-
tions of inferring importance from orthogonal variables that may not be highly related
to the original predictors and suggested a method by which the orthogonal variables
could be related back to the original predictors to better estimate their relative impor-
tance. In their procedure, the orthogonal variables are regressed on the original predic-
tors. Then the squared regression weights of the original predictors for predicting each
orthogonal variable are converted to relative contributions by dividing them by the
sum of the squared regression weights for each orthogonal variable. These relative
contributions are then multiplied by the corresponding squared regression weight of
each orthogonal variable for predicting the criterion and summed across orthogonal
variables to arrive at the importance weight, called δ2. The sum of the δ2’s is equal to R2.
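A compact sketch of the computation (our own condensation, working from correlation matrices; it assumes, per Gibson, 1962, and R. M. Johnson, 1966, that the correlations between the X's and the best-fitting orthogonal variables equal the symmetric square root of the predictor correlation matrix):

```python
import numpy as np

def green_delta2(Rxx, r_y):
    """Sketch of Green, Carroll, and DeSarbo's (1978) delta-squared."""
    # X-Z correlations for the best-fitting orthogonal variables:
    # Lambda = Rxx^(1/2), the symmetric square root.
    vals, vecs = np.linalg.eigh(Rxx)
    lam_inv = vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T
    # Regressing each Z_k on the original X's gives weights
    # W = Rxx^-1 Lambda, which simplifies to Lambda^-1.
    W = lam_inv
    contrib = W**2 / (W**2).sum(axis=0)   # relative contribution of X_j to Z_k
    beta_z = lam_inv @ r_y                # regression of Y on the Z's
    return contrib @ beta_z**2            # delta^2_j; these sum to R2

# The five equally intercorrelated predictors used earlier in the text:
Rxx = np.full((5, 5), 0.30) + 0.70 * np.eye(5)
r_y = np.array([0.40, 0.50, 0.20, 0.40, 0.50])
delta2 = green_delta2(Rxx, r_y)
```

Because each column of `contrib` sums to 1 and the squared beta weights on the orthogonal variables sum to R2, the resulting delta-squared values decompose R2 additively, as the text states.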
Green et al. (1978) showed that this procedure yields more intuitive importance
weights under high multicollinearity than do the Gibson (1962) and R. M. Johnson
(1966) methods. It has the further advantages of allowing importance to be assigned to
the original predictors and being much simpler computationally than dominance
analysis. It has a very serious shortcoming, however, in that the regression weights
obtained by regressing the orthogonal variables on the original predictors are still
coefficients from regressions on correlated variables (Jackson, 1980). The weights
obtained by regressing the orthogonal variables on the original predictors to determine
the relative contribution of each original predictor to each orthogonal variable are just
as ambiguous in terms of importance as regression weights obtained by a regression of
the dependent variable on the original variables. Green, Carroll, and DeSarbo (1980)
acknowledged this criticism but could respond only that their measure was at least
better than previous methods of allocating importance. Boya and Cramer (1980) also
pointed out that this method is not invariant to orthogonalizing procedures. In other
words, if an orthogonalizing procedure other than the one suggested by Gibson (1962)
and R. M. Johnson (1966) were used (e.g., principal components), the procedure
would not yield the same importance weights.
variables rather than to the correlated original predictors, the problem of correlated
predictors is not reintroduced with this method. Johnson termed the weights resulting
from the combination of the two sets of squared regression coefficients epsilons (ε).
They have been more commonly referred to as relative weights (e.g., J. W. Johnson, 2001a), consistent with Hoffman’s (1960, 1962) original use of the term.
A graphic representation of J. W. Johnson’s (2000a) relative weights is presented in
Figure 1. In this three-variable example, the original predictors (Xj) are transformed to
their maximally related orthogonal counterparts (Zk), which are then used to predict
the criterion (Y). The regression coefficients of Y on Zk are represented by βk, and the
regression coefficients of Xj on Zk are represented by λjk. Because the Zk’s are
uncorrelated, the regression coefficients of Xj on Zk are equal to the correlations
between Xj and Zk. Thus, each squared λjk represents the proportion of variance in Zk
accounted for by Xj (J. W. Johnson, 2000a). To compute the relative weight for Xj, mul-
tiply the proportion of variance in each Zk accounted for by Xj by the proportion of vari-
ance in Y accounted for by each Zk and sum the products. For example, the relative
weight for X1 would be calculated as ε1 = λ11²β1² + λ12²β2² + λ13²β3².
Epsilon is an attractive index that has a simple logic behind its development. The
relative importance of the Zk’s to Y, represented by βk2, is unambiguous because the Zk’s
are uncorrelated. The relative contribution of Xj to each Zk, represented by λjk2, is also
unambiguous because the Zk’s are determined entirely by the Xj’s, and the λjk’s are re-
gression coefficients on uncorrelated variables. The λjk2’s sum to 1, so each represents
the proportion of βk2 that is attributable to Xj. Multiplying these terms (λjk2βk2) yields
the proportion of variance in Y that is associated with Xj through its relationship with
Zk, and summing across all Zk’s yields the total proportion of variance in Y that is asso-
ciated with Xj.
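The computation is short enough to sketch directly (our own implementation from correlation matrices; see J. W. Johnson, 2000a, for the full development). Applied to the five equally intercorrelated predictors used earlier, it yields all-positive weights even though the product measure assigned x3 a negative value:

```python
import numpy as np

def relative_weights(Rxx, r_y):
    """J. W. Johnson's (2000a) relative weights (epsilons)."""
    # Lambda holds the correlations between the X's and their maximally
    # related orthogonal counterparts Z: Lambda = Rxx^(1/2).
    vals, vecs = np.linalg.eigh(Rxx)
    lam = vecs @ np.diag(np.sqrt(vals)) @ vecs.T
    beta = vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T @ r_y  # Y on the Z's
    return (lam**2) @ (beta**2)   # eps_j = sum over k of lam_jk^2 * beta_k^2

# The five equally intercorrelated predictors used earlier in the text:
Rxx = np.full((5, 5), 0.30) + 0.70 * np.eye(5)
r_y = np.array([0.40, 0.50, 0.20, 0.40, 0.50])
eps = relative_weights(Rxx, r_y)
# eps is all positive and sums to the model R2 of about .449
```

Because the squared lambdas in each column sum to 1, the epsilons necessarily sum to R2, giving the additive decomposition described in the text.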
Relative weights have an advantage over dominance analysis in that they can be
easily and quickly computed with any number of predictors. As noted earlier, domi-
nance analysis requires considerable computational effort that typically limits the
number of predictors to 10 or fewer. A possible criticism is that Boya and Cramer’s
(1980) point about Green et al.’s (1978) measure not being invariant to orthogonal-
izing procedures also applies to relative weights.
[Figure 1: Path diagram of relative weights for three predictors. Each original predictor Xj loads on each orthogonal variable Zk with weight λjk, and each Zk predicts the criterion Y with regression coefficient βk.]
Table 1
Example Correlation Matrix and Relative Importance Weights Calculated by Different Methods

Variable                    ryx2    β2     βryx    Cxj     ε    %βryx   %Cxj     %ε
Job satisfaction            .250   .151   .195   .163   .163    64.6   54.0   54.1
Leader communication        .160   .040   .080   .075   .075    26.5   24.8   24.9
Participative leadership    .123   .005   .025   .045   .044     8.5   14.8   14.5
Worker motivation           .063   .000   .001   .020   .020     0.5    6.5    6.5
Sum                                       .301   .301   .301   100.0  100.0  100.0
done to illustrate the advantages of the newer statistics, even when the rank orders
were identical.
Importance indexed via squared correlations indicated that all four predictors were
important. Examination of the magnitude of the estimates indicated that job satisfac-
tion was approximately twice as important as leader communication and participative
leadership, and job satisfaction was approximately 4 times as important as worker
motivation. However, the conclusions drawn using squared beta weights and the prod-
uct measure were radically different. Using squared betas, only job satisfaction and
Summary
Considering all the relative importance indices just reviewed, we suggest that the
preferred methods among those currently available are Budescu’s (1993) dominance
analysis and J. W. Johnson’s (2000a) relative weights. These indices do not have logi-
cal flaws in their development that make it impossible to consider them as reasonable
measures of predictor importance. Both methods yield importance weights that repre-
sent the proportionate contribution each predictor makes to R2, and both consider a
predictor’s direct effect and its effect when combined with other predictors. Also, they
yield estimates of importance that make conceptual sense. This is of course highly
subjective, but it is relatively easy to eliminate other indices from consideration based
solely on this criterion.
Both indices yield remarkably similar results when applied to the same data. J. W.
Johnson (2000a) computed relative weights and the quantitative dominance analysis
measure in 31 different data sets. Each index was converted to a percentage of R2, and
the mean absolute deviation between importance indices computed using the two
methods was only 0.56%. The fact that these two indices, which are based on very dif-
ferent approaches to determining predictor importance, yield results that differ only
trivially provides some impressive convergent validity evidence that they are measur-
ing the same construct. Either index can therefore be considered equally appropriate as
rater and the ratee interact to influence the relative importance of performance on spe-
cific dimensions to overall evaluations. Using relative weights, she found that male
and female raters had similar perceptions of what is important to advancement
potential when the ratee was male but that perceptions differed when the ratee was
female.
Some studies have investigated differences in relative importance across cultures.
Using relative weights, J. W. Johnson and Olson (1996) found that the relative impor-
tance of individual supervisor attributes to overall performance was related to differ-
ences between countries in Hofstede’s (1980) cultural value dimensions of power dis-
tance, uncertainty avoidance, individualism, and masculinity. Similarly, Robie,
Johnson, Nilsen, and Hazucha (2001) used relative weights to examine differences
between countries in the relative importance of 24 performance dimensions to rat-
ings of overall performance. Suh, Diener, Oishi, and Triandis (1998) used dominance
analysis to investigate the relative importance of emotions, cultural norms, and extra-
version in predicting life satisfaction across cultures.
There are many ways to apply relative importance indices when analyzing survey
data. Relative importance analysis can reveal the specific areas that contribute the most
to employee or customer satisfaction, which helps decision makers set priorities for
where to apply scarce organizational resources (Lundby & Fenlason, 2000; Whanger,
2002). It can also shorten surveys by eliminating the need for direct ratings of impor-
tance. Lundby and Fenlason (2000) compared relative weights to direct ratings of
importance and employee comments. An examination of employee comments sup-
ported the notion that relative weights better reflected the importance employees
placed on different issues because a greater proportion of written comments were
devoted to those issues that received higher relative weights. Most direct ratings of
importance tend to cluster around the high end of the scale, with very little variability.
Especially with employee opinion surveys, respondents would be likely to rate every
issue as being important for fear that anything that is not given high importance ratings
will be taken away from them. Relative weights allow decision makers to allocate
scarce resources to the issues that are actually most highly related to respondent
satisfaction.
There are, of course, many other substantive questions that can be addressed by rel-
ative importance methods across a broad spectrum of organizational research
domains. Some examples include employee selection (e.g., which exercises in an
assessment center are most important for predicting criteria such as job performance,
salary, and promotion?), training evaluation (e.g., what are the most important predic-
tors of successful transfer of training?), culture and climate (e.g., how important are
the various dimensions of culture and climate in predicting organizationally valued
criteria such as job satisfaction, turnover, organizational commitment, job perfor-
mance, withdrawal cognitions, and/or perceived organizational support?), and leader
effectiveness (e.g., which dimensions of transactional and transformational leadership
are most predictive of subordinate ratings of leader effectiveness or overall firm
effectiveness?).
Conclusion
We believe that research on relative importance methods is still in its infancy but
has progressed tremendously in recent years. Additional work is needed on refining
References
Achen, C. H. (1982). Interpreting and using regression. Beverly Hills, CA: Sage.
Azen, R., & Budescu, D. V. (2003). The dominance analysis approach for comparing predictors
in multiple regression. Psychological Methods, 8, 129-148.
Azen, R., Budescu, D. V., & Reiser, B. (2001). Criticality of predictors in multiple regression.
British Journal of Mathematical and Statistical Psychology, 54, 201-225.
Baltes, B. B., Parker, C. P., Young, L. M., Huff, J. W., & Altmann, R. (2004). The practical utility
of importance measures in assessing the relative importance of work-related perceptions
and organizational characteristics on work-related outcomes. Organizational Research
Methods, 7, 326-340.
Behson, S. J. (2002). Which dominates? The relative importance of work-family organizational
support and general organizational context on employee outcomes. Journal of Vocational
Behavior, 61, 53-72.
Bing, M. N. (1999). Hypercompetitiveness in academia: Achieving criterion-related validity
from item context specificity. Journal of Personality Assessment, 73, 80-99.
Borman, W. C., White, L. A., & Dorsey, D. W. (1995). Effects of ratee task performance and in-
terpersonal factors on supervisor and peer performance ratings. Journal of Applied Psy-
chology, 80, 168-177.
Boya, Ö. Ü., & Cramer, E. M. (1980). Some problems in measures of predictor variable impor-
tance in multiple regression. Unpublished manuscript, University of North Carolina at
Chapel Hill.
Bring, J. (1994). How to standardize regression coefficients. American Statistician, 48, 209-
213.
Bring, J. (1996). A geometric approach to compare variables in a regression model. American
Statistician, 50, 57-62.
Budescu, D. V. (1993). Dominance analysis: A new approach to the problem of relative impor-
tance of predictors in multiple regression. Psychological Bulletin, 114, 542-551.
Cochran, C. C. (1999). Gender influences on the process and outcomes of rating performance.
Unpublished doctoral dissertation, University of Minnesota, Minneapolis.
Cohen, J., & Cohen, P. (1983). Applied multiple regression/correlation analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum.
Courville, T., & Thompson, B. (2001). Use of structure coefficients in published multiple re-
gression articles: β is not enough. Educational and Psychological Measurement, 61, 229-
248.
Darlington, R. B. (1968). Multiple regression in psychological research and practice. Psycho-
logical Bulletin, 69, 161-182.
Darlington, R. B. (1990). Regression and linear models. New York: McGraw-Hill.
Dunn, W. S., Mount, M. K., Barrick, M. R., & Ones, D. S. (1995). Relative importance of per-
sonality and general mental ability in managers’ judgments of applicant qualifications.
Journal of Applied Psychology, 80, 500-509.
Eby, L. T., Adams, D. M., Russell, J. E. A., & Gaby, S. H. (2000). Perceptions of organizational
readiness for change: Factors related to employees’ reactions to the implementation of
team-based selling. Human Relations, 53, 419-442.
Efron, B. (1979). Bootstrap methods: Another look at the jackknife. Annals of Statistics, 7, 1-26.
Englehart, M. D. (1936). The technique of path coefficients. Psychometrika, 1, 287-293.
Gibson, W. A. (1962). Orthogonal predictors: A possible resolution of the Hoffman-Ward con-
troversy. Psychological Reports, 11, 32-34.
Goldberger, A. S. (1964). Econometric theory. New York: John Wiley & Sons.
Green, P. E., Carroll, J. D., & DeSarbo, W. S. (1978). A new measure of predictor variable im-
portance in multiple regression. Journal of Marketing Research, 15, 356-360.
Green, P. E., Carroll, J. D., & DeSarbo, W. S. (1980). Reply to “A comment on a new measure of
predictor variable importance in multiple regression.” Journal of Marketing Research, 17,
116-118.
Green, P. E., & Tull, D. S. (1975). Research for marketing decisions (3rd ed.). Englewood Cliffs,
NJ: Prentice Hall.
Grisaffe, D. (1993, February). Appropriate use of regression in customer satisfaction analyses:
A response to William McLauchlan. Quirk’s Marketing Research Review, 11-17.
Healy, M. J. R. (1990). Measuring importance. Statistics in Medicine, 9, 633-637.
Heeler, R. M., Okechuku, C., & Reid, S. (1979). Attribute importance: Contrasting measure-
ments. Journal of Marketing Research, 16, 60-63.
Hobson, C. J., & Gibson, F. W. (1983). Policy capturing as an approach to understanding and im-
proving performance appraisal: A review of the literature. Academy of Management Re-
view, 8, 640-649.
Hobson, C. J., Mendel, R. M., & Gibson, F. W. (1981). Clarifying performance appraisal crite-
ria. Organizational Behavior and Human Performance, 28, 164-188.
Hoffman, P. J. (1960). The paramorphic representation of clinical judgment. Psychological Bul-
letin, 57, 116-131.
Hoffman, P. J. (1962). Assessment of the independent contributions of predictors. Psychologi-
cal Bulletin, 59, 77-80.
Hofstede, G. (1980). Culture’s consequences: International differences in work-related values.
Beverly Hills, CA: Sage.
Jaccard, J., Brinberg, D., & Ackerman, L. J. (1986). Assessing attribute importance: A compari-
son of six methods. Journal of Consumer Research, 12, 463-468.
Jackson, B. B. (1980). Comment on “A new measure of predictor variable importance in multi-
ple regression.” Journal of Marketing Research, 17, 113-115.
James, L. R. (1998). Measurement of personality via conditional reasoning. Organizational Re-
search Methods, 1, 131-163.
Johnson, J. W. (2000a). A heuristic method for estimating the relative weight of predictor vari-
ables in multiple regression. Multivariate Behavioral Research, 35, 1-19.
Johnson, J. W. (2000b, April). Practical applications of relative importance methodology in I/O
psychology. Symposium presented at the 15th annual conference of the Society for Indus-
trial and Organizational Psychology, New Orleans, LA.
Johnson, J. W. (2001a). Determining the relative importance of predictors in multiple regres-
sion: Practical applications of relative weights. In F. Columbus (Ed.), Advances in psychol-
ogy research (Vol. 5, pp. 231-251). Huntington, NY: Nova Science.
Johnson, J. W. (2001b). The relative importance of task and contextual performance dimensions
to supervisor judgments of overall performance. Journal of Applied Psychology, 86, 984-
996.
Johnson, J. W., & Johnson, K. M. (2001, April). Rater perspective differences in perceptions of
executive performance. In M. Rotundo (Chair), Task, citizenship, and counterproductive
performance: The determination of organizational decisions. Symposium conducted at the
16th annual conference of the Society for Industrial and Organizational Psychology, San
Diego, CA.
Johnson, J. W., & Olson, A. M. (1996, April). Cross-national differences in perceptions of su-
pervisor performance. In D. Ones & C. Viswesvaran (Chairs), Frontiers of international I/O
256 ORGANIZATIONAL RESEARCH METHODS
Theil, H. (1987). How many bits of information does an independent variable yield in a multiple
regression? Statistics and Probability Letters, 6, 107-108.
Theil, H., & Chung, C. F. (1988). Information-theoretic measures of fit for univariate and
multivariate linear regressions. American Statistician, 42, 249-252.
Thomas, D. R., & Decady, Y. J. (1999). Point and interval estimates of the relative importance of
variables in multiple linear regression. Unpublished manuscript, Carleton University,
Ottawa, Canada.
Thomas, D. R., Hughes, E., & Zumbo, B. D. (1998). On variable importance in linear regression.
Social Indicators Research, 45, 253-275.
Thompson, B., & Borrello, G. M. (1985). The importance of structure coefficients in regression
research. Educational and Psychological Measurement, 45, 203-209.
Ward, J. H. (1962). Comments on “The paramorphic representation of clinical judgment.” Psy-
chological Bulletin, 59, 74-76.
Ward, J. H. (1969). Partitioning of variance and contribution or importance of a variable: A visit
to a graduate seminar. American Educational Research Journal, 6, 467-474.
Whanger, J. C. (2002, April). The application of multiple regression dominance analysis to or-
ganizational behavior variables. In J. M. LeBreton & J. W. Johnson (Chairs), Application of
relative importance methodologies to organizational research. Symposium presented at
the 17th annual conference of the Society for Industrial and Organizational Psychology,
Toronto, Canada.
Zedeck, S., & Kafry, D. (1977). Capturing rater policies for processing evaluation data. Organi-
zational Behavior and Human Performance, 18, 269-294.
Jeff W. Johnson is a senior staff scientist at Personnel Decisions Research Institutes (PDRI). He received his
Ph.D. in industrial/organizational psychology from the University of Minnesota. He has directed and
carried out many applied organizational research projects for government and private-sector clients, with
a particular emphasis on developing and validating personnel assessment and selection systems for a wide
range of jobs. His primary research interests are in the areas of personnel selection, performance
measurement, research methods, and statistics.
James M. LeBreton is an assistant professor of psychology at Wayne State University in Detroit, Michigan.
He received his Ph.D. in industrial and organizational psychology, with a minor in statistics, from the
University of Tennessee. He also received his B.S. in psychology and his M.S. in industrial and organizational
psychology from Illinois State University. His research focuses on the application of social cognition to
personality theory and assessment, applied psychometrics, and the development and application of new
research methods and statistics to personnel selection and work motivation.