


RECRUITMENT, FAST AND EFFECTIVE: A PRACTICAL TOOLKIT

A thesis submitted

by

David Wedaman

to

William James College

in partial fulfillment of
the requirements for the
degree of

MASTER OF ARTS
in
ORGANIZATIONAL PSYCHOLOGY

This capstone thesis has been


accepted for the faculty of
William James College by:

_______________________________________________
Aprille Young
Capstone Instructor

_______________________________________________
Carlene Caldwell
Advisor

July 1, 2019

Abstract

This paper reviews recent meta-analytical studies of research on the effectiveness of tools and

techniques used in recruitment and selection processes. It identifies the key findings of those

studies and proposes a way to apply their lessons as an integrated process, or “toolkit,” that

promises both increased effectiveness and increased efficiency in hiring. It considers the

practical and psychological ramifications of implementing the toolkit for hiring professionals

and for organizational effectiveness.



Table of Contents

CHAPTER 1: Introduction

The Hiring Challenge

CHAPTER 2: Literature Review

Job Performance
General Cognitive Ability
Integrity
Structured Interviews
Meta-analysis 1
Meta-analysis 2
Wonderlic
Stanton Survey

CHAPTER 3: Discussion

Description of Toolkit and Its Application
Cost Savings
Likely Users
Experimental Frame
Diversity and Transformation
Automation, Gamification, and Pre-selection
Conclusion
Future Research

References

Appendix A: Hiring Toolkit Timeline

CHAPTER 1

Introduction

The Hiring Challenge

Hiring the right people is perhaps the most critical task of any organization. If we cannot

find people with the right capabilities, our organization will face an existential crisis: it will not

only be less able to perform in the short term, but it will face vicious cycles in

perpetuity. It will be less able to learn, plan, and respond to changes in its environment; it will

have a less effective culture; and it will have fewer reliable leaders to promote.

Hiring is no simple task, however. It is an exercise in the comprehensive assessment of

people—of their cognitive abilities, knowledge, skills, personalities, presentation style, social

skills, lived experience, vocabulary, flexibility, and so on. Any element of a person that might

affect their work can figure into the equation. This broad evaluation is weighed against an

equally challenging assessment of the organization and its needs—the role, the expectations, the

culture, the collective assumptions of people working there, and so on. It could be argued that

both of these assessments are “over our heads,” to use Kegan’s (1994) famous phrase—that is,

beyond our general capacity to manage. It is not easy to know individual people well; it is less

easy to know interrelated systems of people—organizations—well; and, finally, fitting the one to

the other is perhaps most difficult of all. Nonetheless we need to hire, so the question becomes

how best to support the recruitment professionals facing this complex challenge.

Of course, hiring is not the only crucial organizational function; for that reason, it needs

to be done not only well but also efficiently. However, the activities most commonly used in

recruitment and selection processes—resume review, phone screens, interviews, on-site visits,

and negotiation—are time-consuming. In part because it is time-consuming, hiring is also costly,

regardless of whether the process is done well or poorly. Blatter et al. (2012) estimate that the

cost of hiring skilled workers is the equivalent of 10 to 24 weeks of salary. And Fatemi (2016)

cites a variety of sources to suggest that costs of bad hires can range from a conservative figure

of 30 percent of the employee’s first-year earnings to hundreds of thousands of dollars per

employee. Given these figures, a process that can be more effective and efficient—delivering

candidates quickly who are most likely to perform well—seems to be desperately needed.

We might expect such an important and yet challenging context to engender a variety of

innovative techniques or experiments, as organizations look for better ways to hire well and

quickly, but a scan of the hiring landscape suggests otherwise: the process recommended by

the Society for Human Resource Management (“Hiring Policy and Procedures,” n.d.), for

example, essentially describes the basic procedure followed across all industry sectors: a resume

review succeeded by rounds of interviews and reference checks. And of these steps, the

technique perhaps most commonly identified with selection, the unstructured interview, is

considered to be notoriously bad at finding the most qualified applicants. Moore (2017) claims

that the interview is “terrible for predicting future job performance,” (p. 8) and Bohnet (2016)

states that it is “among the worst predictors of actual on-the-job performance” (para. 1). Why?

Because traditional interviews allow for our biases to interfere with our assessment of candidates

(Bohnet, 2016).

Fortunately, research has continued to be done on which hiring practices and assessments

are most effective at both predicting job performance and weeding out biases. In recent decades

researchers have begun to deploy meta-analytic techniques to look for patterns across what now

amounts to several decades worth of research (Schmidt & Hunter, 1998; Schmidt, Oh, & Shaffer

2016). These recruitment process meta-analyses show unambiguously which hiring techniques

are most effective in finding people who will be successful in the job. In fact, their findings

suggest that a complete rethinking of the hiring process can be done, to focus it on the most

effective processes (Schmidt & Hunter, 1998; Schmidt et al., 2016). This redesigned process

promises to be both more effective overall, and, because it is simple, more efficient. It is,

therefore, a possible solution to the hiring challenge, in that it offers a simple model that

addresses effectiveness and speed.

The two goals of this paper are (1) to review the meta-analyses of hiring practice

research, and (2) to consider what it would mean to implement their findings in real-world hiring

processes. To do the latter, we describe a redesigned hiring process built around the tools

recommended by the meta-analyses, which we call a “toolkit,” and we reflect, from the

perspective of hiring practitioners, on what it would mean to put such a toolkit into place.

CHAPTER 2

Literature Review

It is impossible within the scope of this paper to review all the literature relevant to the

application and evaluation of the primary psychometric assessments and selection procedures

used in recruiting processes. Fortunately, two excellent and topical meta-analyses, which are

themselves compilations of decades of research on recruitment, have recently emerged (Schmidt

& Hunter, 1998; Schmidt et al., 2016). They provide a way to quickly review a much broader

field of research in a paper of small scope, which is the goal in this literature review. Both meta-

analyses discuss three particular techniques in detail: structured interviews, tests of general

cognitive ability (GCA), and integrity tests. They also place heavy emphasis on job performance

as a criterion of value. We will begin, therefore, by defining these terms.

Job Performance

This paper assumes, with much of the literature, that the best way to evaluate selection is

by how well the selected individuals perform on the job. There are other ways to

assess hiring—in terms of how well the individual selected fits the organization’s culture, for

instance. But because organizations exist primarily to do things, this paper assumes that the benefit to

the organization of people who do the right things well exceeds any other measurable

hiring metric. Research on the effect of variation in job performance supports the importance of

emphasizing this criterion—Schmidt and Hunter (2009) report that in unskilled jobs top tier performers

were three times as productive as bottom tier performers, and that in skilled jobs top performers

were 15 times as productive. Given these figures, building a hiring process that selects with a

view to increasing job performance can be seen as a distinct competitive advantage.



General Cognitive Ability

General cognitive ability (GCA), or “intelligence,” is a long-standing construct in

psychometric research and is consistently shown to predict job performance better than any other

metric (Schmidt, 2002). GCA even appears to be the underlying cause of the positive scores of

other highly-ranked assessments, such as job skills tests (Schmidt, 2002). How does GCA have

such an effect on job performance? Schmidt and Hunter (2009) explain that GCA is best understood as the

“ability to learn,” and that people with higher GCA simply learn job-related knowledge faster (p.

4). This on-the-job learning appears to be crucial to success in every job, regardless of context or

complexity.

The use of GCA has one downside, however, known as “adverse impact.” This is the

disproportionately high or low performance on an assessment by different populations of people.

That minorities perform less well on intelligence tests is a commonly-known characteristic of the

genre, and is even seen in tests that attempt to use alternative formats in order to achieve fairness

(Hausdorf, LeBlanc, & Chawla, 2003). Schmidt (2002) suggests the fault does not lie in the

tools: they are rather a kind of unbiased lens revealing the effects of broader structural inequality

in society at large. Byington and Felps (2010) see larger, “self-fulfilling” effects playing across

the ecosystems of school and work in a vicious cycle that spans generations: assessments of

intelligence of different kinds limit the allocation of educational resources, these limited

resources influence who gets the best jobs, and those in the best jobs send their children to the

best schools. It seems reasonable to assume with Byington and Felps (2010) and Schmidt (2002)

that larger, societal forces are involved, yet we still need to hire. Hausdorf, LeBlanc, and Chawla

(2003) summarize the core tension here: to use intelligence tests causes adverse impact, but not

to use intelligence tests reduces the effectiveness of selection. The question becomes how to

make use of the predictive validity of GCA while mitigating its social consequences. There is

some research to suggest that adverse impact can be mitigated by using particular interpretive

methods, such as viewing results in the context of the norms of a given population

group (Schmidt, 1985); this paper will explore another approach, which will be discussed in the

next section.

Integrity

Another way to mitigate adverse impact is by complementing GCA assessment with tests

that have no adverse impact, such as tests of integrity (Schmidt, 2002), which measure traits that

are equally distributed across demographic groups (Ones, Viswesvaran, & Schmidt, 1993).

Already widely used in selection processes (U.S. Office of Technology Assessment, 1990),

integrity tests attempt to predict whether test-takers are likely to engage in Counterproductive

Work Behaviors (CWB) and have gained in popularity since the early 1980s (Ones et al., 1993).

They fall into two categories: “overt” and “personality” (Sackett, Burris, & Callahan, 1989). The first category

asks direct questions about the test-taker’s opinions of CWB; the second category, also known as

“veiled purpose” tests (U.S. Office of Technology Assessment, 1990), attempts to measure test-

taker attitudes indirectly, using questions that don’t explicitly refer to CWB. Though some

researchers privilege one test style over the other (Frost & Rafilson, 1989), meta-analytical

studies suggest that both seem roughly equal in effectiveness (Ones et al., 1993). Importantly for

this paper, although they are designed to predict CWB, integrity tests have also been shown to

predict job performance (Berry, Sackett, & Wiemann, 2007).

There is some mystery about what integrity tests measure, but the consensus seems to be

that they allow us to perceive a complex combination of psychological components. Berry,

Sackett, and Wiemann (2007) discuss several ideas: integrity seems to be a compound trait that

correlates strongly with three aspects of the Big Five personality traits—Conscientiousness,

Agreeableness, and Emotional Stability—but it also seems to draw on something else, a possible

sixth personality dimension called “Honesty-Humility,” or an emphasis on “self control.” These

personality dimensions help explain why integrity tests predict job performance—because the

traits also predict job performance. For example, reviewing a broad range of research, Barrick

and Mount (2009) find that Conscientiousness and Emotional Stability alone predict a

surprisingly comprehensive set of work-relevant indices: task performance, citizenship

behaviors, the absence of counter-productive work behaviors, teamwork and customer service,

longevity in the organization, low absenteeism, leadership, and regulatory compliance. They also

predicted motivation, organizational commitment, and dependability (Barrick & Mount, 2009).

Notably, integrity tests do not produce adverse impact (Ones et al., 1993), so including

them in a selection process with other selection criteria, such as intelligence tests, renders that

process more fair. Ones, Viswesvaran, and Schmidt (1993) describe a scenario in which a

combined selection process of intelligence tests and integrity tests resulted in an increase of

hiring minorities by 58.3% over selection by intelligence tests alone.

Structured Interviews

The structured interview technique differs significantly from tests of intelligence and

integrity. Rather than an assessment of a psychological or cognitive trait, it is a process that

governs the behavior of the interviewers themselves as they conduct the interview that forms the

heart of most hiring processes. Structured interview protocol asks interviewers to do four basic

things: select their interview questions in advance; think about the range of responses they

might get and consider which kinds of answers might be more or less “right”; have the discipline

to consistently ask the same questions of each candidate in the same order; and collectively

review candidate answers with respect to the rating schema they formulated in advance. Moore

(2017), Bohnet (2016), and Stevens (2009) offer summaries of standard structured interview

processes. Despite the structured interview’s simplicity, the process seems to be seldom used,

which Bohnet (2016) chalks up to managers’ tendency to think highly of their own intuitive

judgement, and to distrust machine-like processes.
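
To make the protocol concrete, the following is a minimal sketch, in Python, of the bookkeeping a structured interview implies. The two questions, the 1-5 rating scale, and the averaging rule are illustrative assumptions, not elements of any published protocol.

```python
from statistics import mean

# Protocol steps 1-2: questions (and the anchored rating guides agreed on by
# the panel, not shown) are fixed before any candidate is seen.
QUESTIONS = [
    "Describe a time you had to learn a new skill quickly.",
    "Tell us about a disagreement with a coworker and how you resolved it.",
]

def score_candidate(ratings_by_rater):
    """Protocol step 4: pool independent per-question ratings (1-5 scale).

    ratings_by_rater maps each rater to a list of ratings, one per question,
    given in the fixed question order (protocol step 3).
    """
    for ratings in ratings_by_rater.values():
        assert len(ratings) == len(QUESTIONS), "every rater scores every question"
    per_question = [
        mean(ratings[i] for ratings in ratings_by_rater.values())
        for i in range(len(QUESTIONS))
    ]
    return mean(per_question)

# Example: two interviewers rating one candidate.
print(score_candidate({"alice": [4, 3], "bob": [5, 3]}))  # 3.75
```

The essential point the sketch captures is that questions and scoring rules are committed to in advance, so every candidate is rated against the same yardstick.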

What is noticeable about this process is how much it resembles aspects of qualitative and

quantitative research—coding system, question design, consistent application, interrater

reliability, etc. This may suggest why it is successful. If we are asking the same questions of

everyone, in a consistent way, we begin to accrue the benefits of a “researcher’s stance”: we

extract to some degree our own biases and contextual artifacts and see better where people are

similar and where they diverge (Bohnet, 2016). The differences we perceive are more about

meaningful differences among the individuals themselves than about bias-filled data gathered

from an unstructured process. Stevens (2009) supports this point in distinguishing between the

decision-making styles of unstructured and structured interviews. Unstructured interviews take

advantage of a “rapid, largely unconscious, implicit approach” that requires less effort but is

open to more bias, whereas structured interviews aim to support a “slower, deliberative,

controlled approach” which allows “conscious” weighing of options (Stevens, 2009, p. 42).

Meta-analysis 1

Schmidt and Hunter (1998) report on an impressive meta-analysis of decades of research

on assessments used in hiring processes. They generate a list of 19 assessment techniques

common in hiring and sort them by their ability to predict job success (Schmidt & Hunter, 1998).

The results show a clear distinction between assessments that will help us find productive

employees and assessments that will not help at all (Schmidt & Hunter, 1998). The motivation

for this analysis is that such a vast difference in predictive validity in assessment techniques can

be converted into huge savings in hiring process cost and dramatic improvement in work

performance and organizational effectiveness, should the more effective options be used and the

less effective options curtailed (Schmidt & Hunter, 1998).

The top three assessments appropriate to entry-level selection are tests of general

cognitive ability (GCA), with a predictive validity of .51; structured interviews, also at .51; and

integrity tests at .41 (Schmidt & Hunter, 1998). Some things, conversely, are shown to be

extremely poor predictors of job performance: years of education (.10), interests (.10), and

graphology (.02) (Schmidt & Hunter, 1998). A key finding worth particular note is that

combining GCA and other tests can create even higher predictive validity. GCA combined with

integrity tests nets a .65 validity, and GCA combined with structured interviews earns .63

(Schmidt & Hunter, 1998). The significance of these last figures is really hard to overemphasize:

that we can use two assessments and achieve nearly 70% confidence that we are hiring the right

person is phenomenal. Such numbers cannot fail to impress any professional in the hiring or

talent management fields. These findings are relevant in several ways. They underscore the

importance of tests of general cognitive ability and of tests of integrity, and they add the deep

statistical support only a meta-analysis of similar magnitude can offer. The remarkable predictive

validity of combined assessments helps make the case that a few well-chosen assessments really

can form the backbone of a viable and useful recruitment process.
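
As an aside on the arithmetic, the combined figures reported by Schmidt and Hunter (1998) are consistent with the standard formula for the multiple correlation of a criterion on two predictors. The sketch below is illustrative only; the intercorrelation values are assumptions chosen to approximate the reported pairings, not figures taken from the meta-analysis.

```python
from math import sqrt

def combined_validity(r1, r2, r12):
    """Multiple correlation of a criterion on two predictors with
    validities r1 and r2 and predictor intercorrelation r12."""
    return sqrt((r1**2 + r2**2 - 2 * r1 * r2 * r12) / (1 - r12**2))

# GCA (.51) plus integrity (.41): a near-zero intercorrelation reproduces
# the reported combined validity of .65.
print(round(combined_validity(0.51, 0.41, 0.0), 2))   # 0.65

# GCA (.51) plus structured interviews (.51) combine to only .63 in the
# paper, which implies the two predictors overlap; an assumed r12 of about
# .35 comes close.
print(round(combined_validity(0.51, 0.51, 0.35), 2))  # 0.62
```

The same arithmetic applies to the updated figures discussed in the next section.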

Meta-analysis 2

Schmidt, Oh, and Shaffer (2016) reconsider the meta-analysis of Schmidt and Hunter

(1998), add data from the intervening years, apply new statistical techniques, expand their list of

techniques measured to 31, and generate a new list of assessments used in hiring processes

ranked by their ability to predict job success. The new data and statistical techniques Schmidt,

Oh, and Shaffer (2016) draw on result in changes in validity ratings of most of the assessments

measured. The top techniques are now found to be even more predictive: GCA tests (.65),

integrity tests (.46), and structured interviews (.58) (Schmidt et al., 2016). Among the new weak predictors

are age (0), person-organization fit (.13), and extraversion (.09) (Schmidt et al., 2016). Again, the

authors pair techniques and consider whether that increases the overall predictive validity, and

here the changes are striking. A GCA test paired with an integrity test now earns a .78 validity,

and a structured interview paired with a GCA test earns a .76 (Schmidt et al., 2016). It’s worth

noting that in situations where job performance can be measured objectively, GCA has an even

higher predictive validity: .84 (Schmidt & Hunter, 2009). So the shockingly predictive pairings noted in

the first article are even more shocking in the second: combining the top two assessments

now generates close to an 80% chance of selecting the best candidate. Almost 20 years of data do

not change the importance of integrity tests and tests of GCA for a hiring process that aims at

predictive validity with respect to job performance. The data also continue to support the idea

that the third possible component of our assessment “toolkit” should be structured interviews.

Wonderlic

Which actual, real-world tools would we put in our toolkit? With regard to intelligence

tests, there are numerous options available. Byington and Felps (2010) list four of the most

common: Raven's Progressive Matrices, the Armed Services Vocational Aptitude Battery,

Wechsler's Adult Intelligence Scale, and the Wonderlic Intelligence Test. Reynolds (1998) cites

the Wechsler scales, the Kaufman scales, and the Stanford-Binet Intelligence Scale, though it is

noted that Stanford-Binet is losing popularity. Schmidt and Hunter (2009) single out the

Wonderlic assessment as an example. Because all of these tests have been used in

research studies and in practice for decades, they seem almost equal as regards their

contribution of significant predictive validity to any hiring toolkit. The question becomes

one of practical facility: which are the easiest to use, the quickest to take, or the easiest to

administer? And the result here is fairly unequivocal.

The most useful test for our purpose is the Wonderlic. At almost 100 years old, this

“classic of cognitive ability testing” has been administered hundreds of millions of times,

takes twelve minutes, and costs little (“Choose your Weapon: Intelligence Tests,” 2006). It

consists of 50 standardized multiple-choice questions, increasing in

difficulty as the test proceeds (Hovland & Wonderlic, 1939; Schmidt, 1985). A test-

taker’s score is determined by how many questions they answer correctly in twelve minutes, but

the test can be taken without time limitation (Hovland & Wonderlic, 1939). New hiring service

vendors make it easy to integrate into existing hiring systems (Wonderlic, n.d.). The Wonderlic

is perhaps most famous for its use in American professional football talent evaluation, where

Pitts and Evans (2018) continue to attest to its validity in predicting job performance, but it has

been a staple of research on intelligence testing for decades. Its results are comparable to those

of tests requiring far more time to take and administer (Dodrill, 1981). Dodrill (1981) finds

it useful, accurate, relatively unbiased, and easy to administer. Dodrill (1983) finds its

results to be stable over time. Dodrill and Warner (1988) argue for the Wonderlic’s effectiveness

as a measure of general intelligence, especially in conditions that require rapid diagnosis, such as

psychiatric evaluations. Schmidt (1985) finds it reliable, economical, and valid.
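
As described above, the scoring rule is simple enough to state in a few lines. The sketch below models only the rule (number correct out of 50 within twelve minutes); question content and the vendor's actual administration software are not represented.

```python
def wonderlic_score(answers, key, seconds_used):
    """Number of the 50 questions answered correctly; a timed
    administration is capped at twelve minutes."""
    if len(key) != 50:
        raise ValueError("the Wonderlic has 50 questions")
    if seconds_used > 12 * 60:
        raise ValueError("timed administration stops at twelve minutes")
    return sum(
        1 for i, correct in enumerate(key)
        if i < len(answers) and answers[i] == correct
    )

# Example: three questions attempted, two correct, within the time limit.
print(wonderlic_score(["a", "c", "b"], ["a", "b", "b"] + ["a"] * 47, 700))  # 2
```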

Stanton Survey

There are also numerous options for integrity tests. Ones et al. (1993) counted 43 of them

over 25 years ago. Berry, Sackett, and Wiemann (2007) list several popular

options: for personality-focused tests of integrity, they list the Personnel Reaction Blank, the PDI

Employment Inventory (PDI-EI), and the Reliability Scale of the Hogan Personality Series. For

the overt tests, they note Personnel Selection Inventory (PSI), the Reid Report, and the Stanton

Survey. It is somewhat difficult to find comparisons of the relative strengths and designs of

popular tests, given that much of the literature on each test is only available from the testing

companies that provide the tests as a for-profit service. However, Wanek, Sackett, and Ones

(2003) conducted a particularly painstaking item-level analysis of 7 popular tests. Tests were

considered with respect to four general dimensions of integrity—"antisocial behavior,

socialization, positive outlook, and orderliness/diligence”—and 23 more specific “thematic

composites” or core ideas, such as “theft admissions,” “social conformity,” and “risk taking”

(Wanek, Sackett, & Ones, 2003, pp. 878-885). Their analysis suggests very fine distinctions

between individual tests and between overt or personality-focused integrity test genres, but it

also confirms that there is much correspondence in general among tests and between them and

the core personality factors they are thought to measure: Conscientiousness, Emotional Stability,

and Agreeableness (Wanek et al., 2003).

Selecting among the most popular of the integrity tests would seem to be, therefore,

similar to selecting among intelligence tests. Any of the seven measured by Wanek et al. (2003)

could be expected to bring roughly equivalent predictive validity to the hiring process. As a

consequence, we have to choose on two criteria: whether we prefer overt or personality-based

tests, and ease of use. To the first question, the overt-style integrity test is considered preferable

in this paper because it makes its motivation obvious, and transparency of motivation is in line

with the ethics of the Organizational Development profession (“Organizational Development

Code,” n.d.). To the second question, an informal review of the recommended overt-style

integrity test providers suggested the Stanton Survey to be an easy online experience for an HR

professional to adopt. The Stanton Survey uses 74 dichotomous (either/or) and multiple-choice

questions about an applicant’s attitudes towards counterproductive work behaviors to generate a

determination of the level of risk the organization will take on in employing the individual—low,

marginal, or high (Harris, 1985). It has been used for over 50 years (“Stanton Survey 11.0,” n.d.)

and takes 9-15 minutes to complete (Plotkin Group, n.d.). Viswesvaran and Ones (1997) found

the assessment reliable, to have “substantial utility,” and to have no adverse impact. Grimsley

(1987) also found it reliable. With respect to the analysis by Wanek, Sackett, and Ones (2003),

the Stanton Survey correlated more highly with the four principal components than did the other

tested overt-style tests, and correlated highly with virtually all 23 thematic composites,

suggesting it is a broad, comprehensive, and fairly balanced assessment of integrity. Harris

(1985) considers the Stanton Survey to be reliable, valid, and useful, and to assess honesty in a

multidimensional way.
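
The survey's output, as described above, is a three-level risk determination. A minimal sketch of that kind of output follows; the scoring rule and the cutoffs are invented for illustration, since the vendor's actual algorithm is proprietary.

```python
def risk_band(item_scores, low_cutoff=0.75, marginal_cutoff=0.5):
    """Map normalized per-item scores (0-1, higher meaning lower risk) to
    the low/marginal/high determination described by Harris (1985).
    Cutoffs here are hypothetical."""
    if len(item_scores) != 74:
        raise ValueError("the Stanton Survey uses 74 items")
    overall = sum(item_scores) / len(item_scores)
    if overall >= low_cutoff:
        return "low"
    if overall >= marginal_cutoff:
        return "marginal"
    return "high"

print(risk_band([0.9] * 74))  # low
```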

CHAPTER 3

Discussion

Description of Toolkit and Its Application

Having reviewed the recent meta-analyses of hiring practice research and related

literature, it now remains to consider what it would mean to implement the findings from the

literature in real-world hiring processes. To do this, this paper proposes the formation of a

hypothetical recruitment “toolkit.” The ingredients of the proposed toolkit are as

follows: the Wonderlic test of general cognitive ability; the Stanton Survey test of integrity; and

a standard structured interview protocol.

With respect to the application and integration of these tools, the tests of cognitive ability

and integrity are generally now online; our process will call for them to be taken by individual

applicants on their own time, so one remarkable consequence is that whatever time these tests

take to administer is largely external to the process itself. What will occupy most of the time of

hiring professionals using our toolkit will be the design and implementation of structured

interviews. However, these time-consuming interviews will be limited in scope: we only need to

interview those candidates who do well in the online tests of general cognitive ability and

integrity. Because these two tests have a combined .78 predictive validity, they let us know as

well as is humanly possible that the top scorers will all be able to do the job. Therefore, we can

pick a very few finalists, rest assured that any will perform well, and rely on the structured

interviews to allow us to identify whatever individual differences we might want to use to further

distinguish among them. The Appendix contains a simple 4-step workflow for applying this

toolkit.
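
A minimal sketch of the screening logic at the heart of that workflow follows. The field names, the 0-1 score normalization, the equal weighting of the two tests, and the shortlist size are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    gca: float        # normalized general cognitive ability score, 0-1
    integrity: float  # normalized integrity score, 0-1

def shortlist(applicants, k=3):
    """Rank applicants on the two online assessments and keep only the
    top k for structured interviews."""
    ranked = sorted(applicants, key=lambda a: a.gca + a.integrity, reverse=True)
    return ranked[:k]

# Example pool; only the shortlist ever consumes interviewer time.
pool = [
    Applicant("a", 0.91, 0.80),
    Applicant("b", 0.62, 0.95),
    Applicant("c", 0.88, 0.90),
    Applicant("d", 0.55, 0.60),
]
print([a.name for a in shortlist(pool, k=2)])  # ['c', 'a']
```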

It is important to note that this process excludes certain features of traditional hiring

processes, in ways that may appear to be radical departures from the norm. For example, it does

not include resume-reviewing or phone-screening steps, because what these steps assess (work

experience, age, education, graphic design skills, and so on) is shown by our meta-analyses to add no

predictive validity of note. Interested applicants can simply apply and take the online

assessments before hiring professionals even need to engage them: the time taken to review

resumes, conduct phone screens, and do initial interviews can therefore be removed from the

process. If a review of a candidate’s background is important to the organization, it can be

included in later steps after the initial assessments, and thus cost significantly less, because it will

be conducted on fewer people.

Cost Savings

It is worth taking a moment to consider the relative costs of the toolkit: these are difficult

to calculate precisely, because volume and organization size factor into the costs of intelligence

and integrity tests, and because test providers do not make these costs easy to discover, but it is

safe to say that the toolkit will cost significantly less than conventional search processes. A first-

pass review of the online material provided by the vendors of intelligence and integrity tests

suggests we can expect per-applicant costs to be in the range of $10 per assessment. This can

seem like a lot in a job search with hundreds of candidates, but it is important to remember that

the largest cost category in any job search is staff time: the amount of time HR and other

employees spend reviewing resumes, sitting in interviews, scheduling meetings, following up

with emails, calling references, and so on. A simple calculation can show the time savings the

toolkit proposes.

For argument's sake, consider a vastly oversimplified hiring process consisting of

four basic steps and 100 applicants to be considered. A first step, resume review, would take 10

hours, if each resume were considered for 6 minutes. A phone screening second step might take

another 15 hours, if we assumed 30 candidates were screened for 30 minutes each. A third step,

phone interviews, would take 11.25 hours if 15 candidates were each interviewed for 45 minutes.

And a final, on-site interview step would take the most time of all. If five candidates were

interviewed for 2 hours each by 6 staff members total, that would require 60 hours of staff time.

In sum, this simple process would require 96.25 hours of staff time, even leaving out the

necessary additional administrative overhead of defining job descriptions, scheduling phone

calls, arranging interviews, and so forth. In contrast, the toolkit proposed in this paper would

require only the time necessary to set up and conduct structured interviews. With its higher

predictive validity, fewer final interviews would be needed: two candidates interviewed by 6

staff each for two hours would require 24 hours of staff time, or around 25% of the time invested

in the traditional hiring process.
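
The staff-hour arithmetic above is easy to reproduce; in the sketch below, every count and duration comes from the hypothetical example in this section.

```python
def staff_hours(steps):
    """Each step is (number of candidates, hours per candidate, staff involved)."""
    return sum(n * hours * staff for n, hours, staff in steps)

traditional = staff_hours([
    (100, 0.1, 1),   # resume review: 6 minutes each
    (30, 0.5, 1),    # phone screens: 30 minutes each
    (15, 0.75, 1),   # phone interviews: 45 minutes each
    (5, 2.0, 6),     # on-site interviews: 2 hours with 6 staff
])
toolkit = staff_hours([
    (2, 2.0, 6),     # structured interviews only: 2 finalists, 6 staff, 2 hours
])
print(traditional, toolkit, round(toolkit / traditional, 2))  # 96.25 24.0 0.25
```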

Likely Users

Next, we address the question of what kinds of organizations would be most interested in

using our toolkit. In principle any organization might use it: certainly, the structured interview

format is available to any organization, and the tests of general cognitive ability and integrity are

not prohibitively expensive or logistically difficult to implement. Some characteristics appear to

recommend some organizations as prime targets, and you might think of these as forming part of

the “mindset” of the organization. We suspect this toolkit will appeal to organizations that are

dissatisfied with the quality of recruits provided by their hiring processes; to organizations that

wish to make their hiring processes more efficient; to organizations willing to experiment to

gather data on new ways of hiring; to organizations, such as start-ups, that have no established

system in place; and, finally, to organizations that value the use of processes validated by

copious research over an intuitive sense of how things ought to be done. What might be as

important as these organizational preconditions are the stances of the staff overseeing

recruitment: HR or recruitment officers new in their positions with a mandate for change are

likely looking for the equivalent of this toolkit and may therefore be open to it.

Experimental Frame

Psychological or emotional preparedness matters to the implementation of our toolkit

because our toolkit effectively eliminates or ignores a variety of established techniques; it will

therefore undoubtedly meet resistance on many levels. First, people who feel their current

methods work well will likely be uninterested in changing those, unless they happen to be

naturally inclined to explore improvements across their areas of responsibility, on

the model of constant “slack search” (March, 1994). On a second level, people may feel insecure

about using such a pared-down process; simplicity itself brings a certain risk: if it does not

work, they will naturally expect to bear the consequences of that failure, all the more visible for

the simplicity of the process that generated it. Finally, the epistemological assumptions of the toolkit

may also raise issues—it assumes that hiring processes should be based on research-proven

techniques, that it is appropriate to use theory to improve practice, and that it is appropriate for

organizations to learn. HR staff unfamiliar with academic research may be intimidated by its

methods, wary of setting a precedent in which theory is given influence over their workflow,

and may dislike the personalities and approach of “know it all” researchers.

For all these reasons, I think the best approach for implementing this toolkit is to think of

its use as a trial or experiment: a small, unthreatening proof-of-concept exercise designed not to

change everything but simply to allow organizations to learn more about a potential new

technique. My recommendation is therefore to invite organizations to use this approach in small

“batches.” Organizations might use the toolkit for one or two searches, for instance, and keep

careful notes about how it works, what kinds of candidates it delivers, and whether or not those

candidates succeed. Doing this suggests the organization may need to build ways to

“observe” and measure the hiring process that often do not exist for its traditional recruitment

processes, but that kind of reflexivity is a good thing. Another way to constrain the process and

make it psychologically safe for the organization is to design it as an out-sourced test: a

consulting firm might be hired to manage the process, or a consortium of businesses might

partner for the purpose. In this way there is some “loose linkage” (March, 1994) between the

radical implications of the process and the organizational disruption they might suggest, which

gives those implications space to be understood in their own right. Finally, a third way to ease

into the use of the toolkit is to use it not to assess new applicants, but rather to assess current

employees. Existing employees are in effect a record of past hiring processes. We can invite

them to participate in our toolkit and compare the ranking it makes of them with our own data on

their performance in our organization to see if our toolkit predicts actual performance differences

in our existing staff.

Diversity and Transformation

One consequence of the hiring toolkit proposed here is that it is likely to deliver non-

traditional candidates. Because it is not using the normal processes to screen candidates—phone

screens, resume searches, traditional interviews—and because it specifically works to eliminate

existing biases in selection, this toolkit ought to take seriously candidates who would not be

ranked highly by traditional searches. It ought to return candidates who have high general

cognitive ability, high integrity, and who compare favorably in structured interviews, without

regard for their job experience, age, personal grooming, beauty, person-organization fit,

extroversion, or any of the normal conscious and unconscious criteria used in established hiring

processes. This is to be seen as an advantage, in that the toolkit should help to reduce inevitable

and invisible built-in biases about the kinds of people who “belong” in a given organization that

might be held by the hiring officers. Indeed, it should deliver on the important but long-dormant

promise hiring has always held for supporting innovation in a company by bringing people in

with slightly different perspectives (March, 1994). It should be noted that another consequence

of this may be organizational disruption. These competent, engaged staff with ideas of their own

may actually begin to innovate and challenge the organization to improve in ways it does not

normally expect from new hires.

On a related note, Schmidt (2002) takes up the question of whether there is any point at

which a given person is too intelligent for a job. Can you be such a good learner that you actually

perform less well? His answer is no: that statistically speaking, there is no end to the direct

relationship between GCA and job performance. As GCA goes up, so does performance.

Schmidt (2002) sees two reasons for this, and both are interesting: first, a person high on GCA

will not select a job they are not interested in (they are capable of making sophisticated

assessments of what they can do and where they belong); and second, they will tend to remake

the job in their image, regardless of how it is conceived originally (Schmidt, 2002).

Automation, Gamification, and Pre-selection

One interesting characteristic shared by the three selection methods used in our toolkit is

that all three can be used in a simple automated, computer-based process. Intelligence and

integrity tests draw on fixed questions that ask for machine-scorable answers, and even

structured interviews could be “computerized,” precisely because they are structured—the same

questions are asked of each candidate. As a result, it is easy to conceive of an automated hiring

toolkit that might exist online or in an app on a mobile platform. Interview questions could be

programmed in advance, and candidates could then be directed to the automated kit without any

of the normal preliminaries. Hiring professionals might not even need to engage with any

applicants until they had been preselected by the computerized process. It is assumed that most

companies would like to meet potential new team members in person before hiring them, but

given the predictive validity of our hiring techniques, a case could be made for automatically

inviting the best candidates to simply report for work.

Another interesting possibility resulting from this ease of computerization is to embed the

hiring techniques in automated engagement systems that use a non-traditional interface or

design. The hiring process might not need to seem like a hiring process at all: it might take the

form of a game, or of conversations with an AI-programmed “bot,” etc. This possibility holds out

the potential to make the process more engaging or enjoyable for the applicant. Seseri (2018)

describes a few start-up companies trying to do something similar.

Finally, it is not a far leap to advance from the idea of automating selection for one

potential employer to thinking about automating selection for multiple employers

simultaneously. Because GCA and integrity predict success in any job, an applicant could be, in

theory, “pre-approved” for a variety of jobs by one centralized, computerized process, just as

mortgage applicants are now for home loans. Indeed, applicants might go through the screening

process and then receive several immediate job offers from participating companies. It is outside

of the scope of this paper to speculate further on the possibility of automated hiring or pre-

selection processes except to note that the potential here seems very real.

Conclusion

This paper attempts to review relevant aspects of the literature on hiring processes, with a

view to understanding which aspects of recruitment are the most predictive of job performance.

The research is demonstrably, even astonishingly, unambiguous, prioritizing three key processes.

It is a logical next step to imagine a toolkit built of those processes. This paper imagines such a

toolkit and discusses the ramifications of using it, including dipping into the hidden depths of the

hiring professional’s mind. It is hoped that this paper will find its way into the hands of hiring

managers interested in processes that are more effective, cost less, and take less time, and that it

might serve as a kind of road map as they think about ways to improve what they do. It is also

hoped that organizations using this process will find themselves in the delightful discomfort of

being transformed from within by an influx of hyper-qualified, diverse staff.

Future Research

The most compelling area of future research relates to practical applications of this

toolkit. Real-world trials of hiring processes built around intelligence, integrity, and structured

interviews are needed, as well as shared data about their costs, affordances, and outcomes.

Longitudinal research tracking employees hired through these and traditional processes would be

especially helpful, to determine whether performance is indeed correlated with our criteria. It would be

ideal if this research were undertaken in the relatively transparent context of professional and

academic research. There is perhaps some reason to fear it may already be serving to inform new

products and services from proprietary vendors of hiring services, which are sadly inscrutable

from the point of view of research.

A second area of future research is more provocative yet: and that is to determine

whether there are still more effective ways to assess those qualities of mind that predict job

performance than tests of general cognitive ability and integrity. One particularly interesting area

emerges from the tradition of cognitive developmental theory. This line of research describes

thinking in terms of stages of hierarchical complexity (Commons, Miller, Ramakrishnan, & Giri,

2018); assessment by these stages has been shown in early studies to be predictive of job success

in diverse categories, and the assessment appears to have minimal adverse impact (Commons et

al., 2018). Featherston et al. (2016) seems to be among the first of what ought to be a trend of

studies researching the relationship of measurements of the developmental complexity of a test-

taker’s thinking to the traditional construct of intelligence.



References

Barrick, M. R., & Mount, M. K. (2009). Select on conscientiousness and emotional

stability. In E. A. Locke (Ed.), Handbook of principles of organizational behavior

(pp. 19-40). West Sussex, UK: John Wiley & Sons.

Berry, C. M., Sackett, P. R., & Wiemann, S. (2007). A review of recent developments in

integrity test research. Personnel Psychology, 60(2), 271-301.

Blatter, M., Muehlemann, S., & Schenker, S. (2012). The costs of hiring skilled workers.

European Economic Review, 56(1), 20-35.

Bohnet, I. (2016). How to take the bias out of interviews. Harvard Business Review Digital

Articles, 2–5. Retrieved from https://hbr.org/2016/04/how-to-take-the-bias-out-of-

interviews

Byington, E., & Felps, W. (2010). Why do IQ scores predict job performance? An alternative,

sociological explanation. Research in Organizational Behavior, 30, 175–202.

Campion, M. A., Pursell, E. D., & Brown, B. K. (1988). Structured interviewing: Raising the

psychometric properties of the employment interview. Personnel Psychology, 41(1), 25-

42.

Choose your weapon: Intelligence tests. (2006, August). Inc. Magazine, 96.

Commons, M. L., Miller, P. M., Ramakrishnan, S., & Giri, S. (2018). Employee management

using behavioral developmental theory. Behavioral Development, 23(1), 22.

Dodrill, C. B. (1981). An economical method for the evaluation of general intelligence in

adults. Journal of Consulting and Clinical Psychology, 49(5), 668–673.

Dodrill, C. B. (1983). Long-term reliability of the Wonderlic Personnel Test. Journal of

Consulting and Clinical Psychology, 51(2), 316–317.



Dodrill, C. B., & Warner, M. H. (1988). Further studies of the Wonderlic Personnel Test as a

brief measure of intelligence. Journal of Consulting and Clinical Psychology, 56(1), 145–

147.

Fatemi, F. (2016, September 28). The true cost of a bad hire – it’s more than you think. Forbes.

Retrieved from https://www.forbes.com/sites/falonfatemi/2016/09/28/the-true-cost-of-a-

bad-hire-its-more-than-you-think/#5f8df2634aa4

Featherston, K. G., Chen, S. J., Toth-Gauthier, M., Day, J. M., Herman, P., Laverdeur, R., ... &

Tyberghein, M. (2016). An analysis of the Verbal Comprehension Index of the Wechsler

Adult Intelligence Scale—Fourth Edition (WAIS–IV) using the model of Hierarchical

Complexity (MHC): Why might stage be a better measure of “smarts” than verbal

IQ? Behavioral Development Bulletin, 21(1), 50.

Frost, A. G., & Rafilson, F. M. (1989). Overt integrity tests versus personality-based measures of

delinquency: An empirical comparison. Journal of Business and Psychology, 3(3), 269-

277.

Grimsley, D. L. (1987). Test-retest reliability of the Stanton Survey honesty test. Psychological

Reports, 61(2), 675–678.

Hiring policy and procedures. (n.d.). Society for Human Resource Management. Retrieved from

https://www.shrm.org/resourcesandtools/tools-and-

samples/policies/pages/cms_001677.aspx

Harris, W. G. (1985). [Test review of the Stanton Survey]. In J. V. Mitchell, Jr. (Ed.), The ninth

mental measurements yearbook. Lincoln, NE: Buros Center for Testing.



Hovland, C. I., & Wonderlic, E. F. (1939). Prediction of industrial success from a standardized

interview. Journal of Applied Psychology, 23(5), 537–546.

Kegan, R. (1994). In over our heads: The mental demands of modern life. Cambridge, MA:

Harvard University Press.

Hausdorf, P. A., LeBlanc, M. M., & Chawla, A. (2003). Cognitive ability testing and employment

selection: Does test content relate to adverse impact? Applied HRM Research, 7(2), 41-48.

Maeda, J. (2009). Academia vs. industry: the difference is in the punctuation marks. Harvard

Business Review. Retrieved from https://hbr.org/2009/03/academia-vs-industry-the-diffe

March, J. G., & Heath, C. (1994). A primer on decision making: How decisions happen. New

York, NY: Free Press.

McGrew, K. S., & Flanagan, D. P. (1998). The intelligence test desk reference (ITDR): Gf-Gc

cross-battery assessment. Boston: Allyn and Bacon.

Moore, D. A. (2017). How to improve the accuracy and reduce the cost of personnel

selection. California Management Review, 60(1), 8-17.

Nunnally, J. C., & Bernstein, I. H. (1994). Psychometric theory (McGraw-Hill series in

psychology) (Vol. 3). New York: McGraw-Hill.

The Organization Development Institute. (n.d.). International organizational development code of

ethics. Retrieved from http://www.theodinstitute.org/od-library/code_of_ethics.htm

Ones, D. S., Viswesvaran, C., & Schmidt, F. L. (1993). Comprehensive meta-analysis of

integrity test validities: Findings and implications for personnel selection and theories of

job performance. Journal of Applied Psychology, 78(4), 679–703.



Pitts, J. D., & Evans, B. (2018). Evidence on the importance of cognitive ability tests for NFL

quarterbacks: what are the relationships among Wonderlic scores, draft positions and

NFL performance outcomes? Applied Economics, 50(27), 2957-2966.

Reynolds, C. R. (1998). Fundamentals of measurement and assessment in psychology. In A. S.

Bellack & M. Hersen (Eds.), Comprehensive clinical psychology (Vol. 4, pp. 3-55).

Retrieved from https://doi.org/10.1016/B0080-4270(73)00002-X

Sackett, P. R., Burris, L. R., & Callahan, C. (1989). Integrity testing for personnel selection: An

update. Personnel Psychology, 42(3), 491-529.

Schmidt, F. L. (1985). [Test review of Wonderlic Personnel Test]. In J. V. Mitchell, Jr. (Ed.), The

ninth mental measurements yearbook. Lincoln, NE: Buros Center for Testing.

Schmidt, F. L. (2002). The role of general cognitive ability and job performance: Why there

cannot be a debate. Human Performance, 15(1-2), 187-210.

Schmidt, F. L., & Hunter, J. E. (1998). The validity and utility of selection methods in personnel

psychology: Practical and theoretical implications of 85 years of research

findings. Psychological Bulletin, 124(2), 262–274.

Schmidt, F. L., & Hunter, J. E. (2009). Select on intelligence. In E. A. Locke (Ed.), Handbook of

principles of organizational behavior (pp. 3-18). West Sussex, UK: John Wiley & Sons.

Schmidt, F. L., Oh, I. S., & Shaffer, J. A. (2016). The validity and utility of selection methods in

personnel psychology: Practical and theoretical implications of 100 years of research

findings. Unpublished manuscript. Retrieved from

https://www.researchgate.net/publication/309203898

Seseri, R. (2018, January 29). How AI is changing the game for recruiting. Forbes. Retrieved

from https://www.forbes.com/sites/valleyvoices/2018/01/29/how-ai-is-changing-the-

game-for-recruiting/#6a9639c71aa2

Stanton Survey 11.0. (n.d.). Retrieved from https://www.stantonsurvey.com/

Plotkin Group. (n.d.). Stanton survey: A trusted & proven integrity screening tool. Retrieved from

https://www.plotkingroup.com/services/hiring-assessment/integrity-survey/

Stevens, C. K. (2009). Structure interviews to recruit and hire the best people. In E. A. Locke

(Ed.), Handbook of principles of organizational behavior (pp. 41-56). West Sussex, UK:

John Wiley & Sons.

US Congress, Office of Technology Assessment. (1990). The use of integrity tests for pre‐

employment screening (Report No. OTA-SET-442). Washington, DC: US Congress,

Office of Technology Assessment. Retrieved from

https://digital.library.unt.edu/ark:/67531/metadc39672/m2/1/high_res_d/9042.pdf

Van Iddekinge, C. H., Roth, P. L., Raymark, P. H., & Odle-Dusseau, H. N. (2012). The criterion-

related validity of integrity tests: An updated meta-analysis. Journal of Applied

Psychology, 97(3), 499–530.

Viswesvaran, C., & Ones, D. S. (1997). Review of the Stanton survey. Security Journal, 8(1-2),

167-169.

Wanek, J. E., Sackett, P. R., & Ones, D. S. (2003). Towards an understanding of integrity test

similarities and differences: An item-level analysis of seven tests. Personnel

Psychology, 56(4), 873–894.

Wonderlic. (n.d.). Retrieved from https://www.wonderlic.com/

Mitchell, J. V., Jr. (Ed.). (1985). The ninth mental measurements yearbook. Lincoln, NE: Buros Center for Testing.



Appendix A

Hiring Toolkit Timeline

1. HR Preparation

a. HR staff negotiate contracts with assessment providers, connect their applicant

management systems to the providers, and confirm the tests work and the results

are returned effectively.

b. HR staff instruct hiring managers in the new process.

c. HR staff design instructions for applicants.

2. Individual Search Preparation

a. For each search, HR staff and hiring managers or other stakeholders review the

structured interview process, design questions (or select them from previous

hiring processes), consider the kind of answers they will return, and collectively

settle on the ways they will rank those answers.

b. HR and hiring managers consider any particular demographic they want to

prioritize in their hiring for diversity and inclusion or other purposes.

3. Hiring Process Begun: Testing

a. After job posting, applicants are automatically informed about the process, given

background on the assessments and interview technique to be used, and invited to

take the online tests of general cognitive ability and integrity.

b. General cognitive ability and integrity test results are received and sorted;

candidates at the top of the list, considered in conjunction with demographic and

inclusivity criteria, are passed onward for interviews.

4. Structured Interviews

a. Structured interviews are organized and held. Hiring committees review their

findings and prioritize the candidates they have interviewed. The finalist is

offered the job.
