

Industrial and Organizational Psychology, 1 (2008), 323–332.
Copyright © 2008 Society for Industrial and Organizational Psychology. 1754-9426/08

RESPONSE

Personality Testing and Industrial–Organizational Psychology: A Productive Exchange and Some Future Directions

FREDERICK L. OSWALD
Michigan State University

LEAETTA M. HOUGH
The Dunnette Group, Ltd.

Abstract
The goal of our focal article was to provide a current perspective on personality testing and its use in organizational research and to elicit constructive discussion and suggestions for future research and practice. The present article caps off the discussion by integrating the main ideas presented in the commentaries within our original framework of questions and topics, with the immodest hope of advancing our understanding of personality and its measurement in the context of industrial–organizational psychology. In short, we recommend continuing to take advantage of the organizing framework of the Big Five while also pursuing more "bottom-up" approaches that examine facet-level relationships with multidimensional performance outcomes, in addition to developing process models that include more proximal motivational and situational variables. Work along these lines is valuable to both organizational science and practice.

Correspondence concerning this article should be addressed to Fred Oswald, Rice University, Department of Psychology, 6100 Main Street, MS 205, Houston, TX 77007; E-mail: foswald@rice.edu. Frederick L. Oswald, Department of Psychology, Michigan State University; Leaetta M. Hough, The Dunnette Group, Ltd.

In our focal article (Hough & Oswald, 2008), our goal was to provide a current perspective on personality testing and its use in organizational research and to elicit constructive discussion and suggestions for future research and practice. We are thrilled with the wide range of insightful commentary that our article inspired, and more generally that this new Society for Industrial and Organizational Psychology journal is creating a useful and exciting intellectual exchange. Below, we have integrated the ideas presented in the commentaries into our original framework of questions and topics, comparing and contrasting the differing and sometimes opposing points of view.

Multidimensional Models of Job Performance, Personality Measurement, and Taxonomic Issues

We noted in our focal article that (a) multidimensional models of job performance had become more refined over the past 2 decades, in particular incorporating more personality-relevant performance constructs; (b) such refinement should enable us to focus more precisely on those personality constructs that enhance our understanding of the relationships between personality variables and performance; (c) as a taxonomic structure, each factor of the five-factor model of personality is usually too heterogeneous, and a facet-level approach to validity is a "bottom-up" approach that can tell us empirically whether it is more useful than the
five-factor model in building our science and practice; (d) additional theory-based and process-based variables need to be incorporated into our models relating personality to performance; and (e) we should build detailed cumulative databases that contain information about personality and criterion relationships, systematically organized by measurement method, situation, and other known moderator variables.

Stewart (2008) suggests that to focus on narrower personality constructs, as we have encouraged, is to sacrifice or at least turn a blind eye to the validity that has been found for broader personality constructs predicting broad performance behaviors. We agree that from a practical perspective, broad personality traits can be predictive of similarly broad work behaviors (Hough & Ones, 2001). In fact, the more complex the performance criterion is, the more multifaceted the personality predictor may have to be (Hogan & Roberts, 1996; Hough & Ones, 2001). The prediction afforded by broad personality constructs, however, does not preclude the need for additional research on relationships between personality facets and performance dimensions. Do all facets of Conscientiousness contribute to the prediction of absenteeism, or are only some of them responsible? Do different facets of Conscientiousness predict attention to detail for detail-oriented jobs? And do facets of Conscientiousness help us better understand the mediating and moderating variables that qualify or explain Conscientiousness–performance relationships? Research involving facet-level personality measures and multidimensional performance outcomes helps address questions like these, and it also helps to make very general statements about the Big Five factors predicting job performance more useful and precise. In other words, both practice and theory can improve when we understand better exactly how and when broad traits are predictive. Personality-based job analysis has led to developing performance criteria that are predicted better by facets than by broader constructs (Jenkins & Griffith, 2004).

But neither Stewart nor you, gentle reader, should mistake us: We are not suggesting, for instance, that traits be conceptualized and measured on the independent variable side so narrowly that personality items essentially repeat the same question just to raise the alpha level, or that job performance on the dependent variable side be broken down into constituent parts that are microscopic or mechanistic. Furthermore, we agree with Stewart that personality traits are especially predictive in the sort of open social and teamwork environments that he describes; however, we argue that an empirical understanding that facets are driving prediction by the Big Five can be illuminating. We are not suggesting an infinite regress of "going narrow." Conceptually and practically we, like other researchers (e.g., Paunonen, Rothstein, & Jackson, 1999; Schneider, Hough, & Dunnette, 1999; van Iddekinge, Taylor, & Eidson, 2005), think that a bottom-up approach to understanding personality at the facet level complements the Big Five top-down approach and may help us understand how and when personality is predictive of performance outcomes, such as for the traits of Conscientiousness (Roberts, Bogg, Walton, Chernyshenko, & Stark, 2004; Roberts, Chernyshenko, Stark, & Goldberg, 2005) and Openness to Experience (Chernyshenko, Stark, Woo, & Conz, 2008). As Barrett (2008) notes, facets can have low correlations with one another within the same Big Five construct and hence are empirically distinguishable, leaving the door open to finding useful patterns of differential validity (see also Hough, 1992). We clearly agree with Barrett that Conscientiousness is too broad and heterogeneous a factor to consider it a unitary and consistently valid predictor across all occupations and criteria. The broad construct of Conscientiousness is not highly predictive of creative outcomes, for example (Feist, 1998; Hough, 1992; Hough & Dilchert, 2007; Hough & Furnham, 2003), and the Conscientiousness facets of dependability and achievement have demonstrated differential prediction for performance outcomes at both the individual and team levels (LePine, 2003; LePine, Colquitt, & Erez, 2000). Other differential relationships
between personality facets and performance were discussed in our focal article.

Although arguing for more bottom-up research on personality, we also acknowledge that for practical and/or theoretical reasons, it is likely that some personality constructs are more profitably explored at a facet level than are others. For instance, a meta-analysis has indicated that some facets of Conscientiousness (e.g., achievement, dependability, and order) provide incremental validity for some combinations of job types and performance criteria but not for others (Dudley, Orvis, Lebiecki, & Cortina, 2006). Similarly, the facets of Extraversion (e.g., dominance, sociability, and energy level) have shown differential relationships with criteria (Hough, 1992; Hough, Ones, & Viswesvaran, 1998). Facets of Agreeableness, on the other hand, do not appear to exhibit such differential relationships. Hence, broad Agreeableness may be useful for predicting the social aspects of work, but facets of Extraversion may be more useful to this end. It is also worth considering that methods other than self-report for collecting personality data may lead to different patterns of empirical distinctiveness and incremental validity at the facet level.

Barrett is skeptical of meta-analytic results on personality–performance relationships, in part because, by contrast, practitioners want to obtain the best validity coefficients for particular measures in a particular occupational setting. It is true that different personality scales do comprise different sets of items, and therefore some organizations may be especially interested in the specific measures they use. But from a theoretical standpoint, these items and scales are indicators of constructs, and meta-analyses that have organized personality scales at the construct and facet levels of the Big Five have provided theoretically useful and important information concerning criterion-related validity. A measure-specific approach can also be useful, but if followed exclusively, it could severely hinder our field as a science. We do not want to return to the "good old daze" (Hough, 1997) of the past, with as many personality constructs as we have scales. Current progress has instead been made through several major meta-analyses of different personality scales, applied in organizational contexts, that have provided our field with knowledge that generalizes. There is a fair middle ground between broad meta-analyses and scale-specific analyses that is consistent with Barrett's concerns: developing databases and meta-analyses of more refined personality and performance constructs, such as facets of the Big Five and types of counterproductive work behavior (see Hough & Ones, 2001, for a nomological-web clustering approach).

Johnson (2008) takes this more refined approach when he reviews and integrates theoretical models that involve substantive moderators and mediators that explain variance in and across personality–performance relationships. The Johnson and Hezlett (2008) model that he describes is a demonstration of the complexity of factors and variables that determine performance. Clearly, the world of work is not bivariate, and their model confirms our original comment that the "complexity of the nomological nets of personality constructs is enormous." They are, thankfully, much more specific than we were: Their model contains variables related to motivation that are relatively distal (e.g., work attitudes) and proximal (e.g., self-regulation) with respect to job performance outcomes. Models like these are important because they suggest that motivational variables more proximal to performance are meaningful (see also Humphreys & Revelle, 1984; Kanfer, 1990) and also because they make explicit the fact that not all relevant variables in a performance model may be available in the practice of personnel selection. For instance, Johnson's model indicates that on-the-job stress will show negative correlations with job performance, but in a selection setting, a more distal measure of general stress tolerance may have to suffice for understanding that particular relationship in a job applicant sample.

We concur with Stewart that we also need good models of the determinants of team performance, and models of individual
performance are not enough in many cases. We reiterate our point that if we understand the nomological net of narrower (e.g., Big Five facet-level) variables, we are more likely to combine these facets more strategically and efficiently into broader variables that are useful for both individual- and team-level performance models, thereby advancing both our science and practice. Composition models suggest that such variables may differ for individual- and team-level performance models (Chan, 1998), but at each level, they are composed of well-understood facet-level personality variables. We are suggesting that measures of facet-level constructs remain intact; they are simply configured differently at the broader level, depending upon factors such as the level of analysis and the dependent variables figuring into the performance model of interest.

Regarding measurement in personality, multiple items (or other measures) that sample content representatively from the relevant construct domain are required for item covariances to build up geometrically and overwhelm the idiosyncrasies of item-specific variance. The heterogeneity of content sampled by items tends toward lowering alpha reliability, but sampling a large number of such items tends to raise it (Little, Lindenberger, & Nesselroade, 1999). Understanding the nature of content heterogeneity is critical when sampling items to form a measure of a broad construct, such as a Big Five personality construct, and to us that often means understanding the reliability and validity of its facets. This same argument applies to defining a facet carefully such that content sampling is appropriate at the facet level as well (see Comrey, 1988; Hogan, 1983).
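The reliability logic above can be made concrete with the standardized form of coefficient alpha, alpha = k * r-bar / (1 + (k - 1) * r-bar), where k is the number of items and r-bar is their average intercorrelation. The short sketch below is our own illustration with hypothetical numbers (it is not from the focal article or the commentaries): more heterogeneous content lowers the average inter-item correlation and pulls alpha down, while sampling more such items pushes it back up, as Little et al. (1999) note.

```python
def standardized_alpha(k, mean_r):
    """Standardized coefficient alpha for k items with average
    inter-item correlation mean_r (Spearman-Brown-type form)."""
    return (k * mean_r) / (1 + (k - 1) * mean_r)

# Homogeneous facet scale: few items, relatively high inter-item correlation.
print(standardized_alpha(k=8, mean_r=0.40))   # ~0.84

# More heterogeneous (broad-construct) scale of the same length: alpha drops.
print(standardized_alpha(k=8, mean_r=0.20))   # ~0.67

# Sampling many more heterogeneous items raises alpha again.
print(standardized_alpha(k=24, mean_r=0.20))  # ~0.86
```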
We believe a consensus is emerging: As we refine our conceptualization and measurement of the dependent variables of critical importance to organizations, the science and practice of industrial–organizational psychology will benefit from conceptualizing and measuring most personality variables at a commensurate level, not too far off from the level of specificity of the facets of the five-factor model of personality (e.g., Paunonen, 1998; Paunonen & Ashton, 2001; Roberts et al., 2005). Personality constructs measured at this level will become increasingly important in models, measures, and meta-analyses of individual- and team-level job performance.

Importance of Situations

In our focal article, we asserted that (a) situations often moderate or mediate relationships between personality constructs and work performance at both the individual and the team levels, (b) more attention to the situation is needed in the measurement of personality constructs, (c) more attention to the situation is needed in our models of performance, and (d) our science and practice would be aided by the development of a taxonomy of situations.

Christiansen and Tett (2008) cogently detail the need to examine the effects of situational characteristics on the expression of personality-relevant behavior, because research to date has found moderate and heterogeneous levels of validity in personality–performance relationships. Situational characteristics, if they are investigated, often lack the detail that the authors would recommend. From a psychological point of view, it may be profitable to measure situations more directly in terms of personality: how much choice behavior is allowable, how much responsibility is provided, and so on. Personality-relevant performance data are often too coarse (e.g., a cursory supervisory performance appraisal); better data may come from multiple-perspective feedback instruments that provide different types of data (e.g., data on teamwork/helping behavior from peer ratings; data on customer interactions from the customer and employee ratings themselves). Also, the O*NET may provide a good start, as the authors note.

Hierarchical linear models (HLM) incorporate and test relationships at the individual level, the situational level, and cross-level interactions: Personality-relevant situational characteristics—at the job, social,
and organizational levels—can be modeled to predict differences in personality–performance relationships across jobs. Strong and weak situations (of various types) exert range-restricting and range-enhancing effects (see Ackerman, Kanfer, & Goff, 1995; Beaty, Cleveland, & Murphy, 2001; Klehe & Anderson, 2007; Ployhart, Lim, & Chan, 2001), and HLM can inform how those effects lead to differences in mean performance and criterion validity across jobs or departments and over time (see Voelkle, Wittmann, & Ackerman, 2006, for a longitudinal example).
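As a purely illustrative version of the cross-level approach just described, the sketch below simulates hypothetical data and fits a random-slopes mixed model in Python's statsmodels, with a job-level situational-strength variable moderating the individual-level conscientiousness slope. The variable names and data are ours, not those of any study cited here.

```python
# Illustrative cross-level model on simulated data (hypothetical variables; not a
# model or dataset from the article or commentaries).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
job = np.repeat(np.arange(30), 40)               # 30 jobs, 40 employees each
strength = rng.uniform(0, 1, 30)[job]            # job-level situational strength
consc = rng.normal(size=job.size)                # individual-level trait score
# Weak situations (low strength) let the trait express itself more strongly.
perf = 0.5 * (1 - strength) * consc + rng.normal(size=job.size)

df = pd.DataFrame({"perf": perf, "consc": consc, "strength": strength, "job": job})

# Random intercept and random conscientiousness slope across jobs; the
# consc:strength term is the cross-level interaction of interest.
model = smf.mixedlm("perf ~ consc * strength", data=df,
                    groups=df["job"], re_formula="~consc")
print(model.fit().summary())
```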
To argue for the importance of the situation, Christiansen and Tett focus on the SDρ estimates from meta-analyses of personality–performance relationships as evidence of heterogeneity that mitigates overall meta-analytic findings. We did not focus on these estimates by intent, not by accident: Simulation research has found SDρ estimates to be highly inaccurate in terms of generating both Type I and Type II errors, leading to false conclusions about validity generalization or a lack thereof (see Cornwell & Ladd, 1993; Kemery, Mossholder, & Dunlap, 1989; Oswald & Johnson, 1998). This research is relevant for all meta-analyses, not just those in the personality domain. A brief story provides a second reason not to focus on SDρ: When the first author was cutting his teeth on articles in graduate school, he read the now-famous Barrick and Mount (1991) meta-analysis of the Big Five. Being curious, he requested and received from Barrick a list of the personality measures assigned to Big Five constructs, and what was discovered supports Barrett's point: The personality measures were quite diverse in nature, and several of the measures were narrower than the broad Big Five constructs to which they were assigned (for instance, validity involving a measure of dependability may have been assigned to the umbrella construct of Conscientiousness). Thus, although meta-analytically averaged validities organized into Big Five categories are eminently useful for summarizing research, the aforementioned construct deficiencies can lead to systematic increases, decreases, or heterogeneity in validity that cannot be appropriately accounted for statistically by SDρ or any correction factor; they must be examined substantively by conducting research that examines situations and the narrower facet level at which the personality measures were derived. Barrett also cites differences in job types, concurrent versus predictive validities, and faking effects across studies as other sources of heterogeneity underlying meta-analysis; the nature of these differences and other important ones (e.g., leadership style; type of performance criterion) also must be researched directly and not be guided by the statistical estimate of SDρ associated with an overall meta-analytic personality–performance correlation. In general, we strongly encourage studies and meta-analyses of important moderators of personality–performance relationships no matter what the value of SDρ is estimated to be.
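For readers who want the disputed statistic spelled out: in a bare-bones Hunter–Schmidt-style analysis, SDρ is estimated as the square root of the observed between-study variance in validities minus the variance expected from sampling error alone. The sketch below uses invented numbers and ignores artifact corrections; it is meant only to show what the quantity is, not how any particular meta-analysis computed it.

```python
# Bare-bones illustration of SD_rho (made-up numbers; no artifact corrections).
import numpy as np

r = np.array([0.05, 0.22, 0.10, 0.18, 0.30, 0.12])   # hypothetical study validities
n = np.array([150, 320, 90, 210, 400, 120])          # hypothetical sample sizes

r_bar = np.average(r, weights=n)                   # N-weighted mean correlation
var_obs = np.average((r - r_bar) ** 2, weights=n)  # observed variance of r
var_err = (1 - r_bar ** 2) ** 2 / (n.mean() - 1)   # expected sampling-error variance
sd_rho = np.sqrt(max(var_obs - var_err, 0.0))      # residual SD attributed to rho

print(f"mean r = {r_bar:.3f}, estimated SD_rho = {sd_rho:.3f}")
```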
Heggestad and Gordon (2008) cite several recent articles that report increased validity for personality measures that are contextualized to focus on the workplace. White, Young, Hunter, and Rumsey (2008), however, come to a different conclusion with regard to research adopting a greater focus on context. They point out that in their large-sample military studies, items that are less content relevant (less contextualized) have greater criterion-related validity in longitudinal studies and less validity in concurrent validity studies. Their findings serve as an important caveat in the use of personality measures in personnel selection, and they suggest that items that are less obvious about what characteristic is being measured retain their validity in real-life applicant settings. A similar argument has been made for conditional reasoning measures of personality (James, 1998).

The Stewart discussion of team tasks involving team-member interdependency also addresses the situation as a moderator of the relationships between personality variables and criteria. We would add that a team task requiring a creative solution versus a routine solution is another example of a "situational" moderator where personality variables will show different
patterns of prediction (Hough, 1992; Hough & Dilchert, 2007).

Personality Variables and Incremental Validity

Barrett appears to question whether personality variables will increase the accuracy of prediction of work performance when combined with measures of other characteristics such as cognitive ability. Empirical research from Project A (see White et al.) involving thousands of soldiers found that personality variables often provide a modest but important increment to validity when combined with measures of cognitive ability, with no reason to suggest that such findings would not exist in the civilian population as well. In fact, dozens of other empirical studies have found similar results in nonmilitary settings. Second, what Barrett refers to as simulation studies of the increment in validity resulting from the combination of personality and cognitive variables are, in fact, mathematical calculations based on meta-analytic estimates, which in turn are not simulated but instead are based on cumulative research evidence. We do not pretend that these are perfect or universal estimates, as we have previously noted, but they are informative. Similarly, all the other commentators on our focal article are clearly persuaded by the accumulated evidence that personality variables offer the possibility of increasing the overall accuracy of prediction of work performance—in general and when combined with measures of cognitive ability. To be fair with regard to this issue of incremental validity, we note that empirical comparisons of broad versus narrow personality constructs in regression models need to account for the fact that models with facets will contain a greater number of variables and thus have a greater chance of capitalizing on chance. Therefore, model results comparing broad versus narrow personality variables should be cross-validated or made comparable in other appropriate ways (Paunonen & Ashton, 2001; Tett, Steele, & Beauregard, 2003).
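The capitalization-on-chance concern can be illustrated with a small simulation of our own (hypothetical data, not results from any study cited here): a 30-predictor facet model shows an in-sample R-squared advantage over a 5-predictor broad-trait model almost mechanically, whereas cross-validated R-squared provides the fairer comparison recommended above.

```python
# Simulated comparison of broad vs. facet regression models (hypothetical data).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 300
facets = rng.normal(size=(n, 30))               # 30 facets, 6 per Big Five factor
broad = facets.reshape(n, 5, 6).mean(axis=2)    # 5 broad traits as facet composites
y = 1.2 * broad[:, 0] + rng.normal(size=n)      # criterion driven by one broad trait

for label, X in [("broad (5 predictors)", broad), ("facet (30 predictors)", facets)]:
    in_sample = LinearRegression().fit(X, y).score(X, y)
    cv = cross_val_score(LinearRegression(), X, y, cv=5, scoring="r2").mean()
    # The facet model's in-sample R^2 is inflated by its extra 25 predictors;
    # cross-validated R^2 removes that advantage.
    print(f"{label}: in-sample R^2 = {in_sample:.3f}, cross-validated R^2 = {cv:.3f}")
```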
Measurement Methods and Faking

We are heartened that researchers are currently developing and investigating alternative strategies to typical self-report measures using a Likert-type or yes/no format. The work of White et al. has provided significant insights and advances in our knowledge about the measurement of personality constructs and ways to address problems with self-report measures. Their commentary is a must-read for academics and practitioners alike.

White et al. reflect on the contributions of Project A and the related military research that has followed. The Army's Assessment of Individual Motivation (AIM) led the resurgence of interest in forced-choice personality measures as an attempt to reduce faking in high-stakes personality testing. In a similar vein, the U.S. Navy has recently developed a set of unidimensional forced-choice measures called the Navy Computer Adaptive Personality Scales (NCAPS; see Houston, Borman, Farmer, & Bearden, 2006). Although forced-choice measures yield negative intercorrelations, induced by the dependence between item responses (Hicks, 1970), the computerized nature of forced-choice measures provides several potential advantages. First, randomized item pairings help increase test security. Second, the adaptive nature of the measure presents items to the test-taker based on previous responses, thereby allowing fewer items to be administered while maintaining high reliability. Third, statistical modeling and scoring tools that account for the aforementioned dependencies in these types of data are available, though large sample sizes (at least 450) are required for stable estimation (Stark & Drasgow, 2002; also see Cheung, 2004; Maydeu-Olivares & Böckenholt, 2005). As White et al. note, forced-choice measures appear to show promise by retaining higher levels of validity in high-stakes situations that tend to reduce criterion-related validity for Likert-scale personality measures (see also Jackson, Wrobleski, & Ashton, 2000).
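A small simulation helps show why fully ipsative forced-choice scoring produces the negative intercorrelations Hicks (1970) describes; this is our simplified illustration, not the AIM or NCAPS scoring model. Because each item block allocates a fixed number of preference points across traits, scale totals sum to a constant, and with roughly equal scale variances the average scale intercorrelation is pushed toward -1/(k - 1).

```python
# Why fully ipsative scoring yields negative intercorrelations (simplified; ours).
import numpy as np

rng = np.random.default_rng(2)
n_people, n_items, n_traits = 1000, 40, 5
scores = np.zeros((n_people, n_traits))

for _ in range(n_items):
    # Each forced-choice block: rank five trait statements from most (4 points)
    # to least (0 points) descriptive, so every block awards exactly 10 points.
    prefs = rng.normal(size=(n_people, n_traits))
    scores += prefs.argsort(axis=1).argsort(axis=1)

corr = np.corrcoef(scores, rowvar=False)
off_diag = corr[~np.eye(n_traits, dtype=bool)]
print(f"average scale intercorrelation: {off_diag.mean():.2f}")  # near -1/(5-1) = -0.25
```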
Faking on Likert-scale measures continues to receive research attention. White et al. found that items conceptually linked to job performance, but low in job content (low context), result in predictive validity coefficients in high-stakes testing situations that are similar to concurrent validity coefficients, perhaps because the low job content is less prone to faking. Kuncel and Borneman (2007) used idiosyncratic item responses to detect deliberately faked personality inventories. Their method successfully classified about one-fifth to one-third of intentionally distorted responses to personality items with only a 1% false-positive rate in a sample of approximately 50% honest respondents.
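The general logic of such pattern-based detection can be sketched generically (this is our hypothetical illustration, not Kuncel and Borneman's actual index or data): score how far each respondent's item responses sit from honest-sample norms and flag only the extreme tail, with the cutoff set on the honest calibration sample to hold the false-positive rate near a target such as 1%.

```python
# Generic sketch of pattern-based faking detection (hypothetical index and data;
# not the Kuncel & Borneman, 2007, procedure).
import numpy as np

rng = np.random.default_rng(3)
n_items = 20
honest = rng.normal(loc=3.0, scale=1.0, size=(1000, n_items))     # calibration sample
applicants = rng.normal(loc=3.3, scale=1.0, size=(500, n_items))  # operational sample

mu, sd = honest.mean(axis=0), honest.std(axis=0)

def atypicality(responses):
    """Mean squared standardized distance from honest-sample item means."""
    return (((responses - mu) / sd) ** 2).mean(axis=1)

cutoff = np.quantile(atypicality(honest), 0.99)  # flags ~1% of honest respondents
flagged = atypicality(applicants) > cutoff
print(f"{flagged.mean():.1%} of applicants flagged as atypical responders")
```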
Griffith and Peterson (2008) provide a thoughtful and detailed analysis comparing social desirability measures with the faking behavior on personality tests that they are intended to predict. As they note, more proactive approaches to personality testing would appear to be more effective in reducing faking than the reactive approach of attempting to detect faking after it occurred. Regarding the latter, we are not alone in believing that social desirability scales to detect faking are problematic, for reasons described in our focal article. The Paulhus Balanced Inventory of Desirable Responding (Paulhus, 1998) is often used as a measure of the self-deception and impression management dimensions of social desirability. After reviewing the research since his influential article on social desirability (Paulhus, 1984), Paulhus himself concluded (in Paulhus & John, 1998) that self-deception and impression management are more usefully considered within a broader framework of self-perception than instantiated as measures to control for faking on personality measures. Specifically, self-deception is related to an egoistic tendency to exaggerate one's ability or status and is correlated with Extraversion, Openness, and Neuroticism; impression management is related to a moralistic tendency to exaggerate one's goodness as a member of society and is correlated with Agreeableness and Conscientiousness. These so-called biasing mechanisms may be adaptive in general and, specifically, in predicting work performance.

In taking a more proactive approach to controlling for faking, three critical areas for research come to mind: having test-takers understand the personality test within the larger context of person–job fit, other tests in the test battery, and an organization's personnel selection process; providing warnings not to fake (or "positive warnings" to respond honestly); and examining test formats such as the forced-choice formats previously mentioned. Converse et al. (2008) have examined the latter two factors, noting that despite the potential advantages of these testing interventions, they may come at the cost of an increase in negative test-taker reactions. Certainly, more research in all three of these areas would be viewed as more productive than continuing individual-differences work on social desirability measurement or experimental work on instructed faking. We add that future research would benefit from a critical review and framework to classify the key characteristics that distinguish high-stakes testing situations that are likely to induce or reduce test faking.

Legal Issues

Jones and Arnold (2008) remind us about the possibility that, in practice, personality testing might be significantly restricted—and even prohibited—under state and federal law. They provide important cautions and information about how to avoid organizational exposure to risk when incorporating personality tests into selection systems, including the need to remain vigilant about marketing claims.

Barrett also addresses legal issues in his commentary, more specifically arguments made by plaintiffs' and defendants' experts. According to Barrett, plaintiffs' experts often argue, without providing empirical evidence, that a self-report personality test in combination with a cognitive ability test will reduce adverse impact against protected classes. A significant amount of such evidence already exists in the literature to allow
for some safe bets about the likely adverse impact of a scale when used operationally, and White et al. provide empirical evidence with sample sizes in the thousands. Hough, Oswald, and Ployhart (2001) summarized mean score differences between Whites and protected classes, and between men and women, at broadly and more narrowly defined construct levels. They concluded:

Research clearly indicates that the setting, the sample, the construct and the level of construct specificity can all, either individually or in combination, moderate the magnitude of differences between groups. Employers using tests in employment settings need to assess accurately the requirements of work. When the exact nature of work is specified, the appropriate predictors may or may not have adverse impact against some groups. (p. 152)

Adverse impact concerns are yet another reason for the importance of accumulating information about mean scores and criterion-related validities at a narrower construct level than the factors in the five-factor model of personality. Subgroup mean differences across facets have implications for the facets one might choose to combine when creating a predictor composite.
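One way to see why facet choice matters for a composite, under simplifying assumptions (standardized facets, equal subgroup variances, a unit-weighted composite, and hypothetical numbers of our own), is the standard result that the composite's standardized subgroup difference equals the sum of the facet d values divided by sqrt(k + k(k - 1) * r-bar), where r-bar is the average facet intercorrelation.

```python
import numpy as np

def composite_d(facet_ds, mean_r):
    """Standardized subgroup difference on a unit-weighted composite of
    standardized facets sharing an average intercorrelation mean_r
    (equal subgroup variances assumed; numbers below are hypothetical)."""
    k = len(facet_ds)
    return sum(facet_ds) / np.sqrt(k + k * (k - 1) * mean_r)

# Four facets with sizeable subgroup differences vs. four with small ones.
print(composite_d([0.4, 0.4, 0.3, 0.3], mean_r=0.3))  # ~0.51
print(composite_d([0.1, 0.1, 0.0, 0.1], mean_r=0.3))  # ~0.11
```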
Barrett commented that the U.S. Department of Justice (DOJ) has a history of advocating for the sole use of personality tests as an "alternative" to cognitive ability tests for the selection of safety forces. DOJ's actual history is to argue that personality in combination with cognitive ability is a preferable "alternative" to the use of cognitive ability tests alone, although the court never ruled on the "alternatives" part of the City of Garland, TX, case to which Barrett refers.

Conclusions

This exchange on the role of personality testing in organizational research has stimulated our own thinking and hopefully yours as well. We have taken the position that, although meta-analyses of the Big Five have yielded useful empirical results and the Big Five remains a useful organizing framework, facet-level personality research should become very productive these days, given that the job performance domain on the dependent variable side of the equation has become more refined and multidimensional than ever before. Moreover, process models that incorporate motivational mediators and situational moderators will also benefit greatly from empirical and theoretical work taking a facet-level approach to personality. Situational characteristics in such models deserve to be measured in their own right rather than by proxy, so that situational moderating effects are modeled appropriately. Furthermore, process models will be enhanced by incorporating the dimension of time, and continued use of longitudinal analytic methods will inform how and when personality prediction unfolds over the course of time (see Chan & Schmitt, 2000; Schmitt, Oswald, Friede, Imus, & Merritt, 2008).

Our maturing development of personality theory and measurement in organizational research cannot be blind to the real-world context in which measurement occurs. The high-stakes context of personnel selection means that test faking will continue to be a topic worthy of pursuit, and we noted particularly productive and unproductive directions for future research. The legal context also must not be ignored, and the pressure to ensure the job relevance of personality measures administered as part of a selection battery of tests will continue to mount.

We want to close with our appreciation for the opportunity to participate with our commentators in the scholarly interchange of this new journal format, and we hope to have done no less than to inform and inspire those involved in organizational research and practice in personality testing, at least by some small amount.

References

Ackerman, P. L., Kanfer, R., & Goff, M. (1995). Cognitive and noncognitive determinants and consequences of complex skill acquisition. Journal of Experimental Psychology: Applied, 1, 270–304.
Barrett, G. V. (2008). Practitioner's view of personality testing and industrial–organizational psychology: Practical and legal issues. Industrial and Organizational Psychology: Perspectives on Science and Practice, 1, 299–302.
Barrick, M. R., & Mount, M. K. (1991). The big five personality dimensions and job performance: A meta-analysis. Personnel Psychology, 44, 1–26.
Beaty, J. C., Cleveland, J. N., & Murphy, K. R. (2001). The relation between personality and contextual performance in "strong" versus "weak" situations. Human Performance, 14, 125–148.
Chan, D. (1998). Functional relations among constructs in the same content domain at different levels of analysis: A typology of composition models. Journal of Applied Psychology, 83, 234–246.
Chan, D., & Schmitt, N. (2000). Interindividual differences in intraindividual changes in proactivity during organizational entry: A latent growth modeling approach to understanding newcomer adaptation. Journal of Applied Psychology, 85, 190–210.
Chernyshenko, O. S., Stark, S., Woo, S. E., & Conz, G. (2008, April). Openness to experience: Its facet structure, measurement, and validity. Paper presented at the 23rd Annual Conference of the Society for Industrial and Organizational Psychology (SIOP), San Francisco, CA.
Cheung, M. W. L. (2004). A direct estimation method on analyzing ipsative data with Chan and Bentler's (1993) method. Structural Equation Modeling, 11, 217–243.
Christiansen, N. D., & Tett, R. P. (2008). Toward a better understanding of the role of situations in linking personality, work behavior, and job performance. Industrial and Organizational Psychology: Perspectives on Science and Practice, 1, 312–316.
Comrey, A. L. (1988). Factor-analytic methods of scale development in personality and clinical psychology. Journal of Consulting and Clinical Psychology, 56, 754–761.
Converse, P. D., Oswald, F. L., Imus, A., Hedricks, C., Roy, R., & Butera, H. (2008). Comparing personality test formats and warnings: Effects on criterion-related validity and test-taker reactions. International Journal of Selection and Assessment, 16, 155–169.
Cornwell, J. M., & Ladd, R. T. (1993). Power and accuracy of the Schmidt and Hunter meta-analytic procedures. Educational and Psychological Measurement, 53, 877–895.
Dudley, N. M., Orvis, K. A., Lebiecki, J. E., & Cortina, J. M. (2006). A meta-analytic investigation of conscientiousness in the prediction of job performance: Examining the intercorrelations and incremental validity of narrow traits. Journal of Applied Psychology, 91, 40–57.
Feist, G. J. (1998). A meta-analysis of personality in scientific and artistic creativity. Personality and Social Psychology Review, 2, 290–309.
Griffith, R. L., & Peterson, M. H. (2008). The failure of social desirability measures to capture applicant faking behavior. Industrial and Organizational Psychology: Perspectives on Science and Practice, 1, 308–311.
Heggestad, E. D., & Gordon, H. L. (2008). An argument for context-specific personality assessments. Industrial and Organizational Psychology: Perspectives on Science and Practice, 1, 320–322.
Hicks, L. E. (1970). Some properties of ipsative, normative, and forced-choice normative measures. Psychological Bulletin, 74, 167–184.
Hogan, J., & Roberts, B. W. (1996). Issues and non-issues in the fidelity–bandwidth trade-off. Journal of Organizational Behavior, 17, 627–637.
Hogan, R. T. (1983). A socioanalytic theory of personality. In M. Page (Ed.), Nebraska symposium on motivation (pp. 55–89). Lincoln: University of Nebraska Press.
Hough, L. M. (1992). The "Big Five" personality variables—construct confusion: Description versus prediction. Human Performance, 5, 139–155.
Hough, L. M. (1997). The millennium for personality psychology: New horizons or good old daze. Applied Psychology: An International Review, 47, 233–261.
Hough, L. M., & Dilchert, S. (2007, October). Inventors, innovators, and their leaders: Selecting for conscientiousness will keep you "inside the box." Invited presentation, SIOP Leading Edge Consortium: Enabling Innovation in Organizations, Kansas City, MO.
Hough, L. M., & Furnham, A. (2003). Importance and use of personality variables in work settings. In I. B. Weiner (Ed.-in-Chief) & W. Borman, D. Ilgen, & R. Klimoski (Vol. Eds.), Handbook of psychology: Industrial and organizational psychology (Vol. 12, pp. 131–169). New York: Wiley.
Hough, L. M., & Ones, D. S. (2001). The structure, measurement, validity, and use of personality variables in industrial, work, and organizational psychology. In N. R. Anderson, D. S. Ones, H. K. Sinangil, & C. Viswesvaran (Eds.), Handbook of work psychology (pp. 233–377). New York: Sage.
Hough, L. M., Ones, D. S., & Viswesvaran, C. (1998, April). Personality correlates of managerial performance constructs. In R. C. Page (Chair), Personality determinants of managerial potential performance, progression and ascendancy. Symposium conducted at the 13th Annual Conference of the Society for Industrial and Organizational Psychology, Dallas, TX.
Hough, L. M., & Oswald, F. L. (2008). Personality testing and industrial–organizational psychology: Reflections, progress, and prospects. Industrial and Organizational Psychology: Perspectives on Science and Practice, 1, 272–290.
Hough, L. M., Oswald, F. L., & Ployhart, R. E. (2001). Determinants, detection, and amelioration of adverse impact in personnel selection procedures: Issues, evidence, and lessons learned. International Journal of Selection and Assessment, 9, 152–194.
Houston, J. A., Borman, W. C., Farmer, W. F., & Bearden, R. M. (2006). Development of the Navy Computer Adaptive Personality Scales (NCAPS) (Technical Report NPRST-TR-06-2). Millington, TN: Navy Personnel Studies, Research and Technology.
Humphreys, M. S., & Revelle, W. (1984). Personality, motivation, and performance: A theory of the relationship between individual differences and information processing. Psychological Review, 91, 153–184.
Jackson, D. N., Wrobleski, V. R., & Ashton, M. C. (2000). The impact of faking on employment tests: Does forced-choice offer a solution? Human Performance, 13, 371–388.
James, L. R. (1998). Measurement of personality via conditional reasoning. Organizational Research Methods, 1, 131–163.
Jenkins, M., & Griffith, R. (2004). Using personality constructs to predict performance: Narrow or broad bandwidth. Journal of Business and Psychology, 19, 255–269.
Johnson, J. W. (2008). Process models of personality and work behavior. Industrial and Organizational Psychology: Perspectives on Science and Practice, 1, 303–307.
Johnson, J. W., & Hezlett, S. A. (2008). Modeling the influence of personality on individuals at work: A review and research agenda. In S. Cartwright & C. L. Cooper (Eds.), The Oxford handbook of personnel psychology. Oxford, UK: Oxford University Press.
Jones, J. W., & Arnold, D. W. (2008). Protecting the legal and appropriate use of personality testing: A practitioner perspective. Industrial and Organizational Psychology: Perspectives on Science and Practice, 1, 296–298.
Kanfer, R. (1990). Motivation theory and industrial/organizational psychology. In M. D. Dunnette & L. M. Hough (Eds.), Handbook of industrial and organizational psychology: Vol. 1. Theory in industrial and organizational psychology (pp. 75–170). Palo Alto, CA: Consulting Psychologists Press.
Kemery, E. R., Mossholder, K. W., & Dunlap, W. P. (1989). Meta-analysis and moderator variables: A cautionary note on transportability. Journal of Applied Psychology, 74, 168–170.
Klehe, U.-C., & Anderson, N. (2007). Working hard and working smart: Motivation and ability during typical and maximum performance. Journal of Applied Psychology, 92, 978–992.
Kuncel, N. R., & Borneman, M. J. (2007). Toward a new method of detecting deliberately faked personality tests: The use of idiosyncratic item responses. International Journal of Selection and Assessment, 15, 220–231.
LePine, J. A. (2003). Team adaptation and postchange performance: Effects of team composition in terms of members' cognitive ability and personality. Journal of Applied Psychology, 88, 27–39.
LePine, J. A., Colquitt, J. A., & Erez, A. (2000). Adaptability to changing task contexts: Effects of general cognitive ability, conscientiousness, and openness to experience. Personnel Psychology, 53, 563–593.
Little, T. D., Lindenberger, U., & Nesselroade, J. R. (1999). On selecting indicators for multivariate measurement and modeling with latent variables: When "good" indicators are bad and "bad" indicators are good. Psychological Methods, 4, 192–211.
Maydeu-Olivares, A., & Böckenholt, U. (2005). Structural equation modeling of paired comparison and ranking data. Psychological Methods, 10, 285–304.
Oswald, F. L., & Johnson, J. W. (1998). On the robustness, bias, and stability of statistics from meta-analysis of correlation coefficients: Some initial Monte Carlo findings. Journal of Applied Psychology, 83, 164–178.
Paulhus, D. L. (1984). Two-component models of socially desirable responding. Journal of Personality and Social Psychology, 46, 598–609.
Paulhus, D. L. (1998). Manual for the Balanced Inventory of Desirable Responding (BIDR-7). Toronto, ON: Multi-Health Systems.
Paulhus, D. L., & John, O. P. (1998). Egoistic and moralistic biases in self-perception: The interplay of self-deceptive styles with basic traits and motives. Journal of Personality, 66, 1025–1060.
Paunonen, S. V. (1998). Hierarchical organization of personality and prediction of behavior. Journal of Personality and Social Psychology, 74, 538–556.
Paunonen, S. V., & Ashton, M. C. (2001). Big Five factors and facets and the prediction of behavior. Journal of Personality and Social Psychology, 81, 524–539.
Paunonen, S. V., Rothstein, M. G., & Jackson, D. N. (1999). Narrow reasoning about the use of broad personality measures for personnel selection. Journal of Organizational Behavior, 20, 389–405.
Ployhart, R. E., Lim, B.-C., & Chan, K.-Y. (2001). Exploring relations between typical and maximum performance ratings and the five factor model of personality. Personnel Psychology, 54, 809–843.
Roberts, B. W., Bogg, T., Walton, K., Chernyshenko, O., & Stark, S. (2004). A lexical approach to identifying the lower-order structure of conscientiousness. Journal of Research in Personality, 38, 164–178.
Roberts, B. W., Chernyshenko, O., Stark, S., & Goldberg, L. (2005). The structure of conscientiousness: An empirical investigation based on seven major personality questionnaires. Personnel Psychology, 58, 103–139.
Schmitt, N., Oswald, F. L., Friede, A., Imus, A., & Merritt, S. (2008). Perceived fit with an academic environment: Attitudinal and behavioral outcomes. Journal of Vocational Behavior, 72, 317–355.
Schneider, R. J., Hough, L. M., & Dunnette, M. D. (1999). Broadsided by broad traits: How to sink science in five dimensions or less. Journal of Organizational Behavior, 17, 639–655.
Stark, S., & Drasgow, F. (2002). An EM approach to parameter estimation for the Zinnes and Griggs paired comparison IRT model. Applied Psychological Measurement, 26, 207–227.
Stewart, G. L. (2008). Let us not become too narrow. Industrial and Organizational Psychology: Perspectives on Science and Practice, 1, 317–319.
Tett, R. P., Steele, J. R., & Beauregard, R. S. (2003). Broad and narrow measures on both sides of the personality–job performance relationship. Journal of Organizational Behavior, 24, 335–356.
van Iddekinge, C. H., Taylor, M. A., & Eidson, C. E., Jr. (2005). Broad versus narrow facets of integrity: Predictive validity and subgroup differences. Human Performance, 18, 151–177.
Voelkle, M. C., Wittmann, W. W., & Ackerman, P. L. (2006). Abilities and skill acquisition: A latent growth curve approach. Learning and Individual Differences, 16, 303–319.
White, L. A., Young, M. C., Hunter, A. E., & Rumsey, M. G. (2008). Lessons learned in transitioning personality measures from research to operational settings. Industrial and Organizational Psychology: Perspectives on Science and Practice, 1, 291–295.

