Recruitment, Fast and Effective: A Practical Toolkit
A thesis submitted
by
David Wedaman
to
in partial fulfillment of
the requirements for the
degree of
MASTER OF ARTS
in
ORGANIZATIONAL PSYCHOLOGY
_______________________________________________
Aprille Young
Capstone Instructor
_______________________________________________
Carlene Caldwell
Advisor
July 1, 2019
Abstract
This paper reviews recent meta-analytical studies of research on the effectiveness of tools and
techniques used in recruitment and selection processes. It identifies the key findings of those
studies and proposes a way to apply their lessons as an integrated process, or “toolkit,” that
promises both increased effectiveness and increased efficiency in hiring. It considers the
CHAPTER 1
Introduction
Hiring the right people is perhaps the most critical task of any organization. If we cannot
find people with the right capabilities, our organization will face an existential crisis: it will not
only be less able to perform in the short term, but it will be caught in vicious cycles in
perpetuity. It will be less able to learn, plan, and respond to changes in its environment; it will
have a less effective culture; and it will have fewer reliable leaders to promote.
Hiring well requires a broad evaluation of people—of their cognitive abilities, knowledge, skills, personalities, presentation style, social skills, lived experience, vocabulary, flexibility, and so on. Any element of a person that might bear on their work can figure into the equation. This broad evaluation is weighed against an
equally challenging assessment of the organization and its needs—the role, the expectations, the
culture, the collective assumptions of people working there, and so on. It could be argued that
both of these assessments are “over our heads,” to use Kegan’s (1995) famous phrase—that is,
beyond our general capacity to manage. It is not easy to know individual people well; it is less
easy to know interrelated systems of people—organizations—well; and, finally, fitting the one to
the other is perhaps most difficult of all. Nonetheless we need to hire, so the question becomes
how best to support the recruitment professionals facing this complex challenge.
Of course, hiring is not the only crucial organizational function; for that reason, it needs to be done not only well but also efficiently. However, the activities most commonly used in recruitment and selection processes—resume review, phone screens, interviews, on-site visits, and negotiation—are time-consuming. In part because it is time-consuming, hiring is also costly,
regardless of whether the process is done well or poorly. Blatter et al. (2012) estimate that the cost of hiring skilled workers is the equivalent of 10-24 weeks of salary. And Fatemi (2016) cites a variety of sources to suggest that the costs of bad hires can range upward from even conservative estimates per employee. Given these figures, a process that is more effective and efficient—quickly delivering candidates who are most likely to perform well—seems desperately needed.
We might expect such an important and yet challenging context to engender a variety of
innovative techniques or experiments, as organizations look for better ways to hire well and
quickly, but a quick scan of the hiring landscape suggests otherwise: the process recommended by the Society for Human Resource Management ("Hiring Policy and Procedures," n.d.), for example, essentially describes the basic procedure followed across all industry sectors: a resume review followed by rounds of interviews and reference checks. And of these steps, the
technique perhaps most commonly identified with selection, the unstructured interview, is
considered to be notoriously bad at finding the most qualified applicants. Moore (2017) claims
that the interview is "terrible for predicting future job performance" (p. 8), and Bohnet (2016)
states that it is “among the worst predictors of actual on-the-job performance” (para. 1). Why?
Because traditional interviews allow for our biases to interfere with our assessment of candidates
(Bohnet, 2016).
Fortunately, research has continued to be done on which hiring practices and assessments
are most effective at both predicting job performance and weeding out biases. In recent decades
researchers have begun to deploy meta-analytic techniques to look for patterns across what now
amounts to several decades' worth of research (Schmidt & Hunter, 1998; Schmidt, Oh, & Shaffer, 2016). These recruitment-process meta-analyses show unambiguously which hiring techniques
are most effective in finding people who will be successful in the job. In fact, their findings
suggest that a complete rethinking of the hiring process can be done, to focus it on the most
effective processes (Schmidt & Hunter, 1998; Schmidt et al., 2016). This redesigned process
promises to be both more effective overall, and, because it is simple, more efficient. It is,
therefore, a possible solution to the hiring challenge, in that it offers a simple model that promises better results.
The two goals of this paper are (1) to review the meta-analyses of hiring practice
research, and (2) to consider what it would mean to implement their findings in real-world hiring
processes. To do the latter, we describe a redesigned hiring process built around the tools
recommended by the meta-analyses, which we call a “toolkit,” and we reflect, from the
perspective of hiring practitioners, on what it would mean to put such a toolkit into place.
CHAPTER 2
Literature Review
It is impossible within the scope of this paper to review all the literature relevant to the
application and evaluation of the primary psychometric assessments and selection procedures
used in recruiting processes. Fortunately, two excellent and topical meta-analyses are available (Schmidt & Hunter, 1998; Schmidt et al., 2016). They provide a way to quickly review a much broader
field of research in a paper of small scope, which is the goal in this literature review. Both meta-
analyses discuss three particular techniques in detail: structured interviews, tests of general
cognitive ability (GCA), and integrity tests. They also place heavy emphasis on job performance as the criterion of selection success.
Job Performance
This paper assumes, with much of the literature, that the best way to evaluate selection is by how well the selected individuals perform on the job. There are other ways to
assess hiring—in terms of how well the individual selected fits the organization’s culture, for
instance. But because organizations exist primarily to do things, this paper assumes the benefit to
the organization of people who do the right things well is higher than any other measurable
hiring metric. Research on the effect of variation in job performance supports the importance of
emphasizing this criterion—Schmidt (2009) reports that in unskilled jobs top tier performers
were three times as productive as bottom tier performers, and that in skilled jobs top performers
were 15 times as productive. Given these figures, building a hiring process that selects with an emphasis on likely job performance seems well justified.
General Cognitive Ability
Testing of general cognitive ability (GCA) has a long history in psychometric research and is consistently shown to predict job performance better than any other
metric (Schmidt, 2002). GCA even appears to be the underlying cause of the positive scores of
other highly-ranked assessments, such as job skills tests (Schmidt, 2002). How does GCA have
such an effect on job performance? Schmidt (2009) explains that GCA is best understood as the
“ability to learn,” and that people with higher GCA simply learn job-related knowledge faster (p.
4). This on-the-job learning appears to be crucial to success in every job, regardless of context or
complexity.
The use of GCA has one downside, however, known as "adverse impact": the tendency of intelligence tests to screen out members of minority groups at disproportionate rates. That minorities perform less well on intelligence tests is a commonly known characteristic of the
genre, and is even seen in tests that attempt to use alternative formats in order to achieve fairness
(Hausdorf, LeBlanc, & Chawla, 2003). Schmidt (2002) suggests the fault does not lie in the
tools: they are rather a kind of unbiased lens revealing the effects of broader structural inequality
in society at large. Byington and Felps (2010) see larger, “self-fulfilling” effects playing across
the ecosystems of school and work in a vicious cycle that spans generations: assessments of
intelligence of different kinds limit the allocation of educational resources, these limited
resources influence who gets the best jobs, and those in the best jobs send their children to the
best schools. It seems reasonable to assume with Byington and Felps (2010) and Schmidt (2002)
that larger, societal forces are involved, yet we still need to hire. Hausdorf, LeBlanc, and Chawla (2003) summarize the core tension: to use intelligence tests causes adverse impact, but not to use them reduces the effectiveness of selection. The question becomes how to
make use of the predictive validity of GCA while mitigating its social consequences. There is
some research to suggest that adverse impact can be mitigated by using particular interpretive
methods, such as, for example, viewing results in the context of the norms of a given population
group (Schmidt, 1985); this paper will explore another approach, which will be discussed in the
next section.
Integrity
Another way to mitigate adverse impact is by complementing GCA assessment with tests
that have no adverse effect, such as tests of integrity (Schmidt, 2002), which measure skills that
show equal distribution among all demographics (Ones, Viswesvaran, & Schmidt, 1993).
Already widely used in selection processes (U.S. Office of Technology Assessment, 1990),
integrity tests attempt to predict whether test-takers are likely to participate in Counterproductive
Work Behaviors (CWB) and have gained in popularity since the early 1980s (Ones et al., 1993).
They fall into two categories: "overt" and "personality" (Sackett et al., 1989). The first category
asks direct questions about the test-taker’s opinions of CWB; the second category, also known as
“veiled purpose” tests (U.S. Office of Technology Assessment, 1990), attempts to measure test-
taker attitudes indirectly, using questions that don’t explicitly refer to CWB. Though some
researchers privilege one test style over the other (Frost & Rafilson, 1989), meta-analytical
studies suggest that both seem roughly equal in effectiveness (Ones et al., 1993). Importantly for
this paper, although they are designed to predict CWB, integrity tests have also been shown to predict overall job performance.
There is some mystery about what integrity tests measure, but the consensus seems to be that they capture a blend of traits rather than a single construct. Berry, Sackett, and Wiemann (2007) discuss several ideas: integrity seems to be a compound trait that
correlates strongly with three aspects of the Big Five personality traits—Conscientiousness,
Agreeableness, and Emotional Stability—but it also seems to draw on something else, a possible additional dimension. These underlying personality dimensions help explain why integrity tests predict job performance—because the
traits also predict job performance. For example, reviewing a broad range of research, Barrick
and Mount (2009) find that Conscientiousness and Emotional Stability alone predict a wide range of valued outcomes: productive work behaviors, the absence of counterproductive work behaviors, teamwork and customer service,
longevity in the organization, low absenteeism, leadership, and regulatory compliance. They also
predicted motivation, organizational commitment, and dependability (Barrick & Mount, 2009).
Notably, integrity tests do not produce adverse impact (Ones et al., 1993), so including
them in a selection process with other selection criteria, such as intelligence tests, renders that
process more fair. Ones, Viswesvaran, and Schmidt (1993) describe a scenario in which a combined selection process of intelligence tests and integrity tests resulted in an increase in overall predictive validity.
Structured Interviews
The structured interview technique differs significantly from tests of intelligence and integrity: it is less an instrument administered to candidates than a protocol that governs the behavior of the interviewers themselves as they conduct the interview that forms the heart of most hiring processes. The structured interview protocol asks interviewers to do
four things: select their interview questions in advance; think about the range of responses they
might get and consider which kinds of answers might be more or less “right;” have the discipline
to consistently ask the same questions of each candidate in the same order; and collectively
review candidate answers with respect to the rating schema they formulated in advance. Moore
(2017), Bohnet (2016), and Stevens (2009) offer summaries of standard structured interview
processes. Despite the structured interview’s simplicity, the process seems to be seldom used,
which Bohnet (2016) chalks up to managers' tendency to think highly of their own intuitive judgment.
What is noticeable about this process is how much it resembles aspects of qualitative and quantitative research methods—standardized questions, predetermined rating schemes, attention to reliability, and so on. This may suggest why it is successful. If we are asking the same questions of every candidate in the same way, we can extract to some degree our own biases and contextual artifacts and see better where people are similar and where they diverge (Bohnet, 2016). The differences we perceive are more about
meaningful differences among the individuals themselves than about bias-filled data gathered
from an unstructured process. Stevens (2009) supports this point in distinguishing between a "rapid, largely unconscious, implicit approach" that requires less effort but is open to more bias and the "slower, deliberative, controlled approach" that structured interviews aim to support, which allows "conscious" weighing of options (Stevens, 2009, p. 42).
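The four-part protocol described above—questions fixed in advance, an agreed rating schema, identical order for every candidate, and collective review—can be sketched in code. This is a minimal, hypothetical illustration; the questions, the 1-5 scale, and the averaging rule are invented for the example, not drawn from the literature:

```python
# Hypothetical sketch of a structured-interview scoring step: every candidate
# answers the same predetermined questions, each interviewer rates each answer
# against the schema agreed in advance, and scores are pooled in review.
QUESTIONS = [
    "Describe a time you had to learn a new skill quickly.",
    "How do you handle disagreement with a colleague?",
]

def score_candidate(ratings):
    """ratings: {question: [1-5 score from each interviewer]} -> overall mean."""
    per_question = [sum(scores) / len(scores) for scores in ratings.values()]
    return sum(per_question) / len(per_question)

# Three interviewers rate one candidate on the two questions above.
ratings = {QUESTIONS[0]: [4, 5, 4], QUESTIONS[1]: [3, 4, 4]}
print(round(score_candidate(ratings), 2))  # → 4.0
```

Because every candidate is scored against the same questions and schema, the resulting numbers are comparable across candidates, which is precisely what an unstructured interview cannot guarantee.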
Meta-analysis 1
Schmidt and Hunter (1998) gather the assessment techniques most common in hiring and sort them by their ability to predict job success.
The results show a clear distinction between assessments that will help us find productive
employees and assessments that will not help at all (Schmidt & Hunter, 1998). The motivation
for this analysis is that such a vast difference in predictive validity in assessment techniques can
be converted into huge savings in hiring process cost and dramatic improvement in work
performance and organizational effectiveness, should the more effective options be used and the ineffective ones set aside.
The top three assessments appropriate to entry-level selection are tests of general
cognitive ability (GCA), with a predictive validity of .51; structured interviews, also at .51; and
integrity tests at .41 (Schmidt & Hunter, 1998). Some things, conversely, are shown to be
extremely poor predictors of job performance: years of education (.10), interests (.10), and
graphology (.02) (Schmidt & Hunter, 1998). A key finding worth particular note is that
combining GCA and other tests can create even higher predictive validity. GCA combined with
integrity tests nets a .65 validity, and GCA combined with structured interviews earns .63
(Schmidt & Hunter, 1998). The significance of these last figures is difficult to overstate: that two assessments in combination can reach a validity coefficient approaching .7 is remarkable. Such numbers cannot fail to impress any professional in the hiring or
talent management fields. These findings are relevant in several ways. They underscore the
importance of tests of general cognitive ability and of tests of integrity, and they add the deep
statistical support only a meta-analysis of this magnitude can offer. The remarkable predictive validity of combined assessments helps make the case that a few well-chosen assessments really can carry the weight of an entire selection process.
Meta-analysis 2
Schmidt, Oh, and Shaffer (2016) reconsider the meta-analysis of Schmidt and Hunter
(1998), add data from the intervening years, apply new statistical techniques, expand their list of
techniques measured to 31, and generate a new list of assessments used in hiring processes
ranked by their ability to predict job success. The new data and statistical techniques Schmidt,
Oh, and Shaffer (2016) draw on result in changes in validity ratings of most of the assessments
measured. The top techniques are now found to be even more predictive: GCA tests (.65),
integrity tests (.46), and interviews (.58) (Schmidt et al., 2016). Among the new weak predictors
are age (0), person-organization fit (.13), and extraversion (.09) (Schmidt et al., 2016). Again, the
authors pair techniques and consider whether that increases the overall predictive validity, and
here the changes are striking. A GCA test paired with an integrity test now earns a .78 validity,
and a structured interview paired with a GCA test earns a .76 (Schmidt et al., 2016). It’s worth
noting that in situations where job performance can be measured objectively, GCA has an even
higher predictive validity: .84 (Schmidt, 2009). So, the strikingly predictive pairings noted in the first article are even stronger in the second: combining the top two assessments now yields a validity coefficient close to .8. Almost 20 years of additional data do
not change the importance of integrity tests and tests of GCA for a hiring process that aims at
predictive validity with respect to job performance. The data also continue to support the idea that the third component of the proposed assessment "toolkit" should be structured interviews.
Wonderlic
Which actual, real-world tools would we put in our toolkit? With regard to intelligence tests, there are numerous options available. Byington and Felps (2010) list four of the most
common: Raven's Progressive Matrices, the Armed Services Vocational Aptitude Battery,
Wechsler's Adult Intelligence Scale, and the Wonderlic Intelligence Test. Reynolds (1989) cites
the Wechsler scales, the Kaufman scales, and the Stanford-Binet Intelligence Scale, though it is
noted that the Stanford-Binet is losing popularity. Schmidt (2009) singles out the Wonderlic assessment as an exemplary intelligence test. Because all of these tests have been used in
research studies and in practice for decades, they seem roughly equal in the predictive validity they would contribute to any hiring toolkit. The question becomes one of practical facility: which is the easiest to use, the quickest to take, and the cheapest to administer?
The most useful test for our purpose is the Wonderlic. At almost 100 years old, this
"classic of cognitive ability testing" has been administered hundreds of millions of times, takes twelve minutes, and costs little ("Choose your Weapon: Intelligence Tests," 2006). It
consists of 50 standardized multiple-choice questions, with questions increasing in difficulty as the test proceeds (Hovland & Wonderlic, 1939; Schmidt, 1985). A test-taker's score is determined by how many questions they answer correctly in twelve minutes, though the test can also be taken without a time limit (Hovland & Wonderlic, 1939). New hiring service
vendors make it easy to integrate into existing hiring systems (Wonderlic, n.d.). The Wonderlic
is perhaps most famous for its use in American professional football talent evaluation, where
Pitts and Evans (2018) continue to attest to its validity in predicting job performance, but it has
been a staple of research on intelligence testing for decades. Its results are comparable to those of tests requiring far more time to take and administer (Dodrill, 1981). Dodrill (1981) finds
it useful, accurate, relatively unbiased, and easy to administer. Dodrill (1983) confirms that its results are stable over time. Dodrill and Warner (1988) argue for the Wonderlic's effectiveness as a measure of general intelligence, especially in conditions that require rapid diagnosis, such as clinical settings.
Stanton Survey
There are also numerous options for integrity tests. Ones et al. (1993) counted 43 available options some 30 years ago. Berry, Sackett, and Wiemann (2007) list several popular
options: for personality-focused tests of integrity, they list the Personnel Reaction Blank, the PDI
Employment Inventory (PDI-EI), and the Reliability Scale of the Hogan Personality Series. For
the overt tests, they note Personnel Selection Inventory (PSI), the Reid Report, and the Stanton
Survey. It is somewhat difficult to find comparisons of the relative strengths and designs of
popular tests, given that much of the literature on each test is only available from the testing
companies that provide the tests as a for-profit service. However, Wanek, Sackett, and Ones
(2003) conducted a particularly painstaking item-level analysis of 7 popular tests. Tests were broken down into 23 thematic "composites" or core ideas, such as "theft admissions," "social conformity," and "risk taking" (Wanek, Sackett, & Ones, 2003, pp. 878-885). Their analysis suggests fine distinctions between individual tests and between the overt and personality-focused integrity test genres, but it
also confirms that there is much correspondence in general among tests and between them and
the core personality factors they are thought to measure: Conscientiousness, Emotional Stability, and Agreeableness.
Selecting among the most popular integrity tests would seem to be, therefore, similar to selecting among intelligence tests. Any of the seven measured by Wanek et al. (2003) would likely serve; the practical questions become the preferred style of test and ease of use. To the first question, the overt-style integrity test is considered preferable
in this paper because it makes its motivation obvious, and transparency of motivation is in line with professional ethical standards ("Code," n.d.). To the second question, an informal review of the recommended overt-style
integrity test providers suggested the Stanton Survey to be an easy online experience for an HR
professional to adopt. The Stanton Survey uses 74 dichotomous (either/or) and multiple-choice questions to produce a determination of the level of risk the organization will take on in employing the individual—low, marginal, or high (Harris, 1985). It has been used for over 50 years ("Stanton Survey 11.0," n.d.)
and takes 9-15 minutes to complete (Plotkin Group, n.d.). Chockalingam and Ones (1997) found the assessment to be reliable, to have "substantial utility," and to have no adverse impact. Grimsley
(1987) also found it reliable. With respect to the analysis by Wanek, Sackett, and Ones (2003),
the Stanton Survey correlated more highly with the four principal components than did the other tested overt-style tests, and correlated highly with virtually all 23 thematic composites. Harris (1985) considers the Stanton Survey to be reliable, valid, and useful, and to assess honesty in a
multidimensional way.
CHAPTER 3
Discussion
Having reviewed the recent meta-analyses of hiring practice research and related
literature, it now remains to consider what it would mean to implement the findings from the
literature in real-world hiring processes. To do this, this paper proposes the formation of a hypothetical recruitment "toolkit." The ingredients of the proposed toolkit are as follows: the Wonderlic test of general cognitive ability; the Stanton Survey test of integrity; and structured interviews.
With respect to the application and integration of these tools, the tests of cognitive ability and integrity are now generally administered online; the proposed process calls for applicants to take them on their own time, so one remarkable consequence is that the time these tests take is largely external to the process itself. What will occupy most of the time of
hiring professionals using our toolkit will be the design and implementation of structured
interviews. However, these time-consuming interviews will be limited in scope: we only need to
interview those candidates who do well in the online tests of general cognitive ability and
integrity. Because these two tests have a combined .78 predictive validity, they let us know as
well as is humanly possible that the top scorers will all be able to do the job. Therefore, we can
pick a very few finalists, rest assured that any will perform well, and rely on the structured
interviews to allow us to identify whatever individual differences we might want to use to further
distinguish among them. The Appendix contains a simple 4-step workflow for applying this
toolkit.
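The screening step this process implies can be sketched briefly in code. The following is a hypothetical illustration only: the equal weighting of the two scores, the 0-100 scales, and the candidate data are assumptions made for the example, not part of the toolkit's specification:

```python
# Hypothetical sketch: rank applicants by their online GCA and integrity
# scores and keep only a few finalists for structured interviews.
# Equal weighting and 0-100 score scales are illustrative assumptions.
def shortlist(applicants, n_finalists=3):
    """applicants: list of (name, gca_score, integrity_score) tuples."""
    ranked = sorted(applicants, key=lambda a: a[1] + a[2], reverse=True)
    return [name for name, _, _ in ranked[:n_finalists]]

pool = [("A", 82, 90), ("B", 95, 70), ("C", 60, 88), ("D", 91, 93)]
print(shortlist(pool))  # → ['D', 'A', 'B']
```

The point of the sketch is the shape of the workflow: the expensive human step, the structured interview, is applied only to the short ranked list that the inexpensive online assessments produce.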
It is important to note that this process excludes certain features of traditional hiring
processes, in ways that may appear to be radical departures from the norm. For example, it does not include resume-reviewing or phone-screening steps, because what these steps assess—work experience, age, education, graphic design skills, and so on—is shown by the meta-analyses to add no predictive validity of note. Interested applicants can simply apply and take the online
assessments before hiring professionals even need to engage them: the time taken to review
resumes, conduct phone screens, and do initial interviews can therefore be removed from the process. Steps such as reference checks can be included later, after the initial assessments, and will cost significantly less because they will involve only a small number of finalists.
Cost Savings
It is worth taking a moment to consider the relative costs of the toolkit. These are difficult to calculate precisely, because volume and organization size factor into the costs of intelligence and integrity tests, and because test providers do not make their pricing easy to discover; still, it is safe to say that the toolkit will cost significantly less than conventional search processes. A first-
pass review of the online material provided by the vendors of intelligence and integrity tests
suggests we can expect per-applicant costs to be in the range of $10 per assessment. This can
seem like a lot in a job search with hundreds of candidates, but it is important to remember that
the largest cost category in any job search is staff time: the hours HR and other staff spend reviewing materials, dealing with emails, calling references, and so on. A simple calculation can show the time savings the
toolkit proposes.
Imagine a conventional search with four basic steps and 100 applicants to be considered. A first step, resume review, would take 10
hours, if each resume were considered for 6 minutes. A phone screening second step might take
another 15 hours, if we assumed 30 candidates were screened for 30 minutes each. A third step,
phone interviews, would take 11.25 hours if 15 candidates were each interviewed for 45 minutes.
And a final, on-site interview step would take the most time of all. If five candidates were
interviewed for 2 hours each by 6 staff members total, that would require 60 hours of staff time.
In sum, this simple process would require 96.25 hours of staff time, even leaving out the administrative overhead of scheduling calls, arranging interviews, and so forth. In contrast, the toolkit proposed in this paper would
require only the time necessary to set up and conduct structured interviews. With its higher predictive validity, fewer final interviews would be needed: two candidates interviewed by 6 staff each for two hours would require 24 hours of staff time, or around 25% of the time invested in the conventional process.
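The staff-time comparison above can be reproduced with a short calculation. This sketch simply encodes the worked example's own assumptions (candidate counts, minutes per step, staff involved per step); the function name is illustrative:

```python
# Total staff-hours for a hiring process. Each step is a tuple of
# (candidates, minutes per candidate, staff members involved per candidate).
def staff_hours(steps):
    return sum(n * minutes * staff for n, minutes, staff in steps) / 60

traditional = staff_hours([
    (100, 6, 1),   # resume review: 100 resumes, 6 minutes each
    (30, 30, 1),   # phone screens: 30 candidates, 30 minutes each
    (15, 45, 1),   # phone interviews: 15 candidates, 45 minutes each
    (5, 120, 6),   # on-site interviews: 5 candidates, 2 hours, 6 staff
])
toolkit = staff_hours([(2, 120, 6)])  # 2 finalists, 2 hours, 6 staff

print(traditional)  # → 96.25
print(toolkit)      # → 24.0
```

The ratio 24 / 96.25 comes to roughly 0.25, matching the "around 25%" figure in the text.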
Likely Users
Next, we address the question of what kinds of organizations would be most interested in
using our toolkit. In principle any organization might use it: certainly, the structured interview
format is available to any organization, and the tests of general cognitive ability and integrity are readily available online. Certain characteristics, however, recommend some organizations as prime targets, and these can be thought of as part of the "mindset" of the organization. We suspect this toolkit will appeal to organizations that are
dissatisfied with the quality of recruits provided by their hiring processes; to organizations that
wish to make their hiring processes more efficient; to organizations willing to experiment to
gather data on new ways of hiring; to organizations, such as start-ups, that have no established
system in place; and, finally, to organizations that value the use of processes validated by
copious research over an intuitive sense of how things ought to be done. What might be as
important as these organizational preconditions are the stances of the staff overseeing
recruitment: HR or recruitment officers new in their positions with a mandate for change are
likely looking for the equivalent of this toolkit and may therefore be open to it.
Experimental Frame
Implementing this toolkit may prove difficult, because it effectively eliminates or ignores a variety of established techniques; it will therefore undoubtedly meet resistance on many levels. First, people who feel their current
methods work well will likely be uninterested in changing them, unless they happen to be naturally inclined to explore opportunities for improvement across their areas of responsibility, on
the model of constant “slack search” (March, 1994). On a second level, people may feel insecure
about using such a pared-down process, for simplicity itself brings a certain risk: if it does not work, they will naturally expect to bear the consequences of that failure, made all the more visible by the simplicity of the process that generated it. Finally, the epistemological assumptions of the toolkit
may also raise issues—it assumes that hiring processes should be based on research-proven
techniques, that it is appropriate to use theory to improve practice, and that it is appropriate for
organizations to learn. HR staff unfamiliar with academic research may be intimidated by its
methods, wary of allowing a precedent in which theory is given some influence over their workflow, and put off by the personalities and approach of "know-it-all" researchers.
For all these reasons, I think the best approach for implementing this toolkit is to think of
its use as a trial or experiment: a small, unthreatening proof-of-concept exercise designed not to
change everything but simply to allow organizations to learn more about a potentially useful new approach. One way to constrain the experiment is to run it in small "batches": organizations might use the toolkit for one or two searches, for instance, and keep
careful notes about how it works, what kinds of candidates it delivers, and whether or not those
candidates succeed. It is true that to do this suggests the organization may need to build ways to
“observe” and measure the hiring process that often do not exist for their traditional recruitment
processes, but that kind of reflexivity is a good thing. Another way to constrain the process and limit its perceived risk is to place it partly outside the organization: a consulting firm might be hired to manage the process, or a consortium of businesses might partner for the purpose. In this way there is some "loose linkage" (March, 1994) between the radical implications of the process and the organizational disruption they might bring, which gives those implications space to be understood in their own right. Finally, a third way to ease
into the use of the toolkit is to use it not to assess new applicants, but rather to assess current
employees. Existing employees are in effect a record of past hiring processes. We can invite
them to participate in the toolkit's assessments and compare the ranking it produces with our own data on their performance in our organization, to see whether the toolkit predicts actual performance differences.
One consequence of the hiring toolkit proposed here is that it is likely to deliver non-
traditional candidates. Because it does not use the normal processes to screen candidates—phone screens, resume reviews, and the like—which tend to reproduce existing biases in selection, this toolkit ought to take seriously candidates who would not be
ranked highly by traditional searches. It ought to return candidates who have high general
cognitive ability, high integrity, and who compare favorably in structured interviews, without
regard for their job experience, age, personal grooming, beauty, person-organization fit,
extroversion, or any of the normal conscious and unconscious criteria used in established hiring
processes. This is an advantage: the toolkit should help to reduce the inevitable and invisible
built-in biases that hiring officers may hold about the kinds of people who "belong" in a given
organization. Indeed, it should deliver on the important but long-dormant
promise hiring has always held for supporting innovation in a company by bringing people in
with slightly different perspectives (March, 1994). It should be noted that another consequence
of this may be organizational disruption. These competent, engaged staff with ideas of their own
may actually begin to innovate and challenge the organization to improve in ways it does not expect.
On a related note, Schmidt (2002) takes up the question of whether there is any point at
which a given person is too intelligent for a job. Can you be such a good learner that you actually
perform less well? His answer is no: statistically speaking, there is no end to the direct
relationship between GCA and job performance. As GCA goes up, so does performance.
Schmidt (2002) sees two reasons for this, and both are interesting: first, a person high on GCA
will not select a job they are not interested in (they are capable of making sophisticated
assessments of what they can do and where they belong); and second, they will tend to remake
the job in their image, regardless of how it is conceived originally (Schmidt, 2002).
One interesting characteristic shared by the three selection methods used in our toolkit is
that all three can be used in a simple automated, computer-based process. Intelligence and
integrity tests draw on fixed questions that ask for machine-scorable answers, and even
structured interviews could be “computerized,” precisely because they are structured—the same
questions are asked of each candidate. As a result, it is easy to conceive of an automated hiring
toolkit that might exist online or in an app on a mobile platform. Interview questions could be
programmed in advance, and candidates could then be directed to the automated kit without any
of the normal preliminaries. Hiring professionals might not even need to engage with any
applicants until they had been preselected by the computerized process. It is assumed that most
companies would like to meet potential new team members in person before hiring them, but
given the predictive validity of our hiring techniques, a case could be made for hiring
automatically. Automation also opens up possibilities in design: the hiring process might not
need to seem like a hiring process at all; it might take the form of a game, or of conversations
with an AI-programmed "bot." This possibility holds out the potential to make the process more
engaging or enjoyable for the applicant. Seseri (2018) describes how AI is already changing
recruiting along just these lines.
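As a sketch of how such an automated screen might work, consider the following Python fragment. The weights, cutoff, and applicant data are illustrative assumptions, not values drawn from the literature; a real implementation would calibrate them empirically.

```python
# Minimal sketch of an automated pre-screening step: machine-scorable
# GCA and integrity tests are combined into one composite score, and only
# the top-ranked applicants advance to structured interviews.
# The weights below are hypothetical, not empirically derived.
WEIGHTS = {"gca": 0.6, "integrity": 0.4}

def composite_score(applicant: dict) -> float:
    """Weighted composite of the machine-scored tests (0-100 scale)."""
    return sum(applicant[test] * w for test, w in WEIGHTS.items())

def shortlist(applicants: list[dict], top_n: int = 3) -> list[dict]:
    """Rank applicants by composite score and return the top candidates
    to be invited to structured interviews."""
    ranked = sorted(applicants, key=composite_score, reverse=True)
    return ranked[:top_n]

# Hypothetical applicant pool with machine-scored test results.
applicants = [
    {"name": "A", "gca": 88, "integrity": 79},
    {"name": "B", "gca": 72, "integrity": 91},
    {"name": "C", "gca": 95, "integrity": 85},
    {"name": "D", "gca": 60, "integrity": 70},
]

for a in shortlist(applicants, top_n=2):
    print(a["name"], round(composite_score(a), 1))
```

The same ranking logic could sit behind a web form, a game, or a conversational bot; the interface changes, but the scoring step remains a simple, auditable computation.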
Finally, it is not a far leap to advance from the idea of automating selection for one position
to automating selection for many positions, even across many organizations, simultaneously.
Because GCA and integrity predict success in any job, an applicant could be, in
theory, “pre-approved” for a variety of jobs by one centralized, computerized process, just as
mortgage applicants are now for home loans. Indeed, applicants might go through the screening
process and then receive several immediate job offers from participating companies. It is outside
of the scope of this paper to speculate further on the possibility of automated hiring or pre-
selection processes except to note that the potential here seems very real.
Conclusion
This paper attempts to review relevant aspects of the literature on hiring processes, with a
view to understanding which aspects of recruitment are the most predictive of job performance.
The research is demonstrably, even astonishingly, unambiguous: it prioritizes three key processes.
It is a logical next step to imagine a toolkit built of those processes. This paper imagines such a
toolkit and discusses the ramifications of using it, including dipping into the hidden depths of the
hiring professional’s mind. It is hoped that this paper will find its way into the hands of hiring
managers interested in processes that are more effective, cost less, and take less time, and that it
might serve as a kind of road map as they think about ways to improve what they do. It is also
hoped that organizations using this process will find themselves in the delightful discomfort of
being challenged to improve by the capable new people it delivers.
Future Research
The most compelling area of future research relates to practical applications of this
toolkit. Real-world trials of hiring processes built around intelligence, integrity, and structured
interviews are needed, as well as shared data about their costs, affordances, and outcomes.
Longitudinal research tracking employees hired through these and traditional processes would be
especially helpful, to determine whether performance is indeed correlated with our criteria. It would be
ideal if this research were undertaken in the relatively transparent context of professional and
academic research. There is perhaps some reason to fear it may already be serving to inform new
products and services from proprietary vendors of hiring services, which are sadly inscrutable.
A second area of future research is more provocative still: to determine
whether there are still more effective ways to assess those qualities of mind that predict job
performance than tests of general cognitive ability and integrity. One particularly interesting area
emerges from the tradition of cognitive developmental theory. This line of research describes
thinking in terms of stages of hierarchical complexity (Commons, Miller, Ramakrishnan, & Giri,
2018); assessment by these stages has been shown in early studies to be predictive of job success
in diverse categories, and the assessment appears to have minimal adverse impact (Commons et
al., 2018). Featherston et al. (2016) seems to be among the first of what ought to be a trend of such studies.
References
Barrick, M. R., & Mount, M. K. (2009). Select on conscientiousness and emotional stability. In
E. A. Locke (Ed.), Handbook of principles of organizational behavior. West Sussex, UK: John
Wiley & Sons.
Berry, C. M., Sackett, P. R., & Wiemann, S. (2007). A review of recent developments in
integrity test research. Personnel Psychology.
Blatter, M., Muehlemann, S., & Schenker, S. (2012). The costs of hiring skilled workers.
European Economic Review.
Bohnet, I. (2016). How to take the bias out of interviews. Harvard Business Review Digital
Articles.
Byington, E., & Felps, W. (2010). Why do IQ scores predict job performance? An alternative,
sociological explanation. Research in Organizational Behavior.
Campion, M. A., Pursell, E. D., & Brown, B. K. (1988). Structured interviewing: Raising the
psychometric properties of the employment interview. Personnel Psychology, 41(1), 25–42.
Choose your Weapon: Intelligence Tests. (2006, August). Inc. Magazine. 96.
Commons, M. L., Miller, P. M., Ramakrishnan, S., & Giri, S. (2018). Employee management
Dodrill, C. B., & Warner, M. H. (1988). Further studies of the Wonderlic Personnel Test as a
brief measure of intelligence. Journal of Consulting and Clinical Psychology, 56(1), 145–
147.
Fatemi, F. (2016, September). The true cost of a bad hire – It's more than you think. Forbes.
bad-hire-its-more-than-you-think/#5f8df2634aa4
Featherston, K. G., Chen, S. J., Toth-Gauthier, M., Day, J. M., Herman, P., Laverdeur, R., ... &
Complexity (MHC): Why might stage be a better measure of "smarts" than verbal ability?
Frost, A. G., & Rafilson, F. M. (1989). Overt integrity tests versus personality-based measures
of delinquency: An empirical comparison. Journal of Business and Psychology, 3(3), 269–277.
Grimsley, D. L. (1987). Test-retest reliability of the Stanton Survey honesty test.
Psychological Reports.
Society for Human Resource Management. (n.d.). Hiring policy and procedures. Retrieved from
https://www.shrm.org/resourcesandtools/tools-and-
samples/policies/pages/cms_001677.aspx
Harris, W. G. (1985). [Test review of the Stanton Survey]. In J. V. Mitchell, Jr. (Ed.), The
ninth mental measurements yearbook. Lincoln, NE: Buros Center for Testing.
Hovland, C. I., & Wonderlic, E. F. (1939). Prediction of industrial success from a standardized
interview. Journal of Applied Psychology.
Kegan, R. (2003). In over our heads: The mental demands of modern life. Cambridge, MA: Harvard
University Press.
LeBlanc, P. A. H. M. M., & Chawla, A. (2003). Cognitive ability testing and employment
selection: does test content relate to adverse impact? Applied HRM Research, 7(2), 41-48.
Maeda, J. (2009). Academia vs. industry: The difference is in the punctuation marks. Harvard
Business Review.
March, J. G., & Heath, C. (1994). A primer on decision making: How decisions happen. New York,
NY: The Free Press.
McGrew, K. S., & Flanagan, D. P. (1998). The intelligence test desk reference (ITDR): Gf-Gc
cross-battery assessment. Boston, MA: Allyn & Bacon.
Moore, D. A. (2017). How to improve the accuracy and reduce the cost of personnel selection.
California Management Review.
Ones, D. S., Viswesvaran, C., & Schmidt, F. L. (1993). Comprehensive meta-analysis of
integrity test validities: Findings and implications for personnel selection and theories of
job performance. Journal of Applied Psychology, 78(4), 679–703.
Pitts, J. D., & Evans, B. (2018). Evidence on the importance of cognitive ability tests for NFL
quarterbacks: What are the relationships among Wonderlic scores, draft positions and
quarterback performance?
Sackett, P. R., Burris, L. R., & Callahan, C. (1989). Integrity testing for personnel selection:
An update. Personnel Psychology.
Schmidt, F. L. (1985). [Test review of the Wonderlic Personnel Test]. In J. V. Mitchell, Jr.
(Ed.), The ninth mental measurements yearbook. Lincoln, NE: Buros Center for Testing.
Schmidt, F. L. (2002). The role of general cognitive ability in job performance: Why there
cannot be a debate. Human Performance, 15(1-2), 187–210.
Schmidt, F. L., & Hunter, J. E. (1998). The validity and utility of selection methods in
personnel psychology: Practical and theoretical implications of 85 years of research findings.
Psychological Bulletin, 124(2), 262–274.
Schmidt, F. L., & Hunter, J. E. (2009). Select on intelligence. In E. A. Locke (Ed.), Handbook of
principles of organizational behavior, (pp. 3-18). West Sussex, UK: John Wiley & Sons.
Schmidt, F. L., Oh, I. S., & Shaffer, J. A. (2016). The validity and utility of selection
methods in personnel psychology: Practical and theoretical implications of 100 years of
research findings (Working paper). Retrieved from
https://www.researchgate.net/publication/309203898
Seseri, R. (2018, January 29). How AI is changing the game for recruiting. Forbes. Retrieved
from https://www.forbes.com/sites/valleyvoices/2018/01/29/how-ai-is-changing-the-
game-for-recruiting/#6a9639c71aa2
Plotkin Group (n.d.). Stanton survey: a trusted & proven integrity screening tool. Retrieved from
https://www.plotkingroup.com/services/hiring-assessment/integrity-survey/
Stevens, C. K. (2009). Structured interviews to recruit and hire the best people. In E. A.
Locke (Ed.), Handbook of principles of organizational behavior (pp. 41-56). West Sussex, UK:
John Wiley & Sons.
US Congress, Office of Technology Assessment. (1990). The use of integrity tests for
pre-employment screening. Washington, DC: U.S. Government Printing Office. Retrieved from
https://digital.library.unt.edu/ark:/67531/metadc39672/m2/1/high_res_d/9042.pdf
Van Iddekinge, C. H., Roth, P. L., Raymark, P. H., & Odle-Dusseau, H. N. (2012). The
criterion-related validity of integrity tests: An updated meta-analysis. Journal of Applied
Psychology, 97(3), 499–530.
Viswesvaran, C., & Ones, D. S. (1997). Review of the Stanton survey. Security Journal, 8(1-2),
167-169.
Wanek, J. E., Sackett, P. R., & Ones, D. S. (2003). Towards an understanding of integrity test
similarities and differences: An item-level analysis of seven tests. Personnel Psychology,
56(4), 873–894.
Appendix A
1. HR Preparation
a. HR staff connect their applicant management systems to the test providers, and confirm the
tests work and the results are returned as expected.
2. Interview Design
a. For each search, HR staff and hiring managers or other stakeholders review the structured
interview process, design questions (or select them from previous hiring processes), consider
the kind of answers they will return, and collectively agree on how those answers will be
scored.
3. Application and Testing
a. After job posting, applicants are automatically informed about the process and given access
to the online tests.
b. General cognitive ability and integrity test results are received and sorted;
candidates at the top of the list, considered in conjunction with demographic and other
applicant data, are invited to structured interviews.
4. Structured Interviews
a. Structured interviews are organized and held. Hiring committees review their
findings and prioritize the candidates they have interviewed. The finalist is then selected and
an offer is extended.