
PERSONNEL PSYCHOLOGY
2005, 58, 673–702

DEVELOPING A NOMOLOGICAL NETWORK FOR INTERVIEW STRUCTURE: ANTECEDENTS AND CONSEQUENCES OF THE STRUCTURED SELECTION INTERVIEW
DEREK S. CHAPMAN
Department of Psychology
University of Calgary
DAVID I. ZWEIG
Department of Management
University of Toronto

A review by Campion, Palmer, and Campion (1997) identified 15 elements of interview structure and made predictions regarding how applicants and interviewers might react to these elements. In this two-sample field survey of 812 interviewees and 592 interviewers from over 500 organizations, interview structure was best described by 4 dimensions: (a) Questioning Consistency, (b) Evaluation Standardization, (c) Question Sophistication, and (d) Rapport Building. Interviewers with formal training and those with a selection rather than recruiting focus employed higher levels of interview structure. In addition, reactions to increased structure were mixed. Both higher structure (Question Sophistication) and lower structure (Rapport Building) were positively related to interviewer reactions. Fewer than 34% of interviewers had any formal interview training. However, interviewers were confident that they could identify the best candidates regardless of the amount of interview structure employed. Applicants reacted negatively to the increased perceived difficulty of structured interviews, but perceptions of procedural justice were not affected by interview structure.

Despite the widespread acceptance of the employment interview in personnel selection practice, we know surprisingly little about how interviews are typically conducted, how interviewers are trained, whether interviewer training influences the way interviews are conducted, or how interviewers and applicants react to the way that interviews are carried out (Campion, Palmer, & Campion, 1997; Palmer, Campion, & Green, 1999). Employment interviews have been variously described based on the location of the interview (e.g., campus interview), the purpose of the interview (recruiting or selection focus), the medium used to conduct the interview (telephone, videoconference), the number of interviewers (e.g., panel interviews), and the amount of structure in the interview (structured, semi-structured, or unstructured). It is this last concept, interview structure, that is the focus of this article.

We would like to thank Bruce Lumsden and staff of the Cooperative Education Department at the University of Waterloo for access to the samples obtained in this research. We would also like to thank Jane Webster for her assistance and support.
Correspondence and requests for reprints should be addressed to Derek S. Chapman, Department of Psychology, University of Calgary, 2500 University Dr. N.W., Calgary, Alberta T2N 1N4, Canada; dchapman@ucalgary.ca.
COPYRIGHT © 2005 BLACKWELL PUBLISHING, INC.
This research responds to calls for an empirical examination of the
nomological network of interview structure (Campion et al., 1997; Harris,
1999; McDaniel, Whetzel, Schmidt, & Maurer, 1994) and to calls for investigating how applicants react to various components of interview structure
(Campion et al., 1997; Gilliland & Steiner, 1999). Although most recent reviews of the interview structure literature contend that interview structure
is best measured as a continuous and multifaceted construct (see Campion
et al., 1997), no standard measure of interview structure as a continuous
and multifaceted construct exists to describe the level of structure employed in interviews. Accordingly, our primary goal was to identify the
factor structure of interview structure practices and to develop a measure
of interview structure that can be used in selection research. In doing so,
we also examined the antecedents of interview structure by investigating the relationships among formal interviewer training, interviewers' use
of interview structure, focus of the interview (recruitment or selection),
and perceived efficacy of interview functions. Finally, we investigated
how interviewers and applicants react to specific elements of interview
structure as suggested by Campion et al. (1997) and Gilliland and Steiner
(1999).
This research was conducted in two stages spanning 3 years. With Sample A, we explored the factor structure of interview structure practices as well as applicant reactions to those factors. Although some studies have investigated the dimensionality of interview structure with students (e.g., Hysong & Dipboye, 1999) and through an analysis of legal cases (e.g., Gollub-Williamson, Campion, Malos, Roehling, & Campion, 1997), this is the first investigation that examines how real interviewers adopt structure when engaging in actual job interviews. Additional data gathered from Sample B expand on the nomological network of interview structure discovered in Sample A by addressing questions about interviewer training and interviewer reactions to structured interviews.
What Is Interview Structure?

Perhaps one of the most pervasive themes in describing interviews has been the concept of interview structure. Huffcutt and Arthur (1994) defined interview structure as "the reduction in procedural variance across applicants, which can translate into the degree of discretion that an interviewer is allowed in conducting the interview" (p. 186). Most researchers agree that increasing interview structure can improve the psychometric properties of interviews (Campion, Pursell, & Brown, 1988) as well as enhance their predictive validity (McDaniel, Whetzel, Schmidt, & Maurer, 1994; Wiesner & Cronshaw, 1988). However, it is unclear what is meant by interview structure (e.g., Hakel, 1989). Until recently, interview researchers have tended to classify interviews dichotomously, as being either structured or unstructured. Kohn and Dipboye (1998) went further to suggest that interviews could also be categorized as semi-structured if they employed some structure elements but not all. Huffcutt and Arthur (1994) made an excellent argument for considering interview structure as both a multifaceted and continuous construct. Although their structure classification system was a vast improvement over the structured/unstructured dichotomy, it continued to be a single variable consisting of four levels of structure. What is evident from Campion et al.'s seminal article, and those on which it was based, is that one could easily replace the term "structured interview" with "good interview" or "valid interview." That is, Campion et al.'s description of structure components consists of a list of interview practices that have been associated with higher predictive validities. Although very helpful for practitioners who wish to improve the predictive validity of their interviews, Campion et al.'s description does not offer interview researchers a measure to assess the level of interview structure.
A review of the literature suggests several approaches to developing a nomological network for the structured interview; however, there is little empirical evidence on which to base a solid prediction. Based on an extensive review of the interview literature, Campion et al. (1997) proposed that their 15 areas of interview structure could be categorized into two factors (interview content and evaluating applicants). Hysong and Dipboye (1998) proposed a very plausible three-factor model of structure consisting of (a) job relatedness of questions, (b) question standardization, and (c) applicant voice, but were unable to test whether these factors were independent. Huffcutt and Arthur (1994) contributed an interesting and detailed coding procedure based on varying levels of two dimensions: the standardization of (a) interview questions and (b) response scoring. Ultimately, they collapsed this framework into a single dimension of interview structure consisting of four possible levels. Huffcutt, Conway, Roth, and Stone (2001) also collapsed their four-level structure framework into two to investigate constructs assessed in employment interviews. Gilliland and Steiner (1999) suggested that there might be eight interview factors relevant to applicant reactions. Again, none of these frameworks has been empirically tested. Hysong and Dipboye (1999) conducted one of the few empirical studies investigating the dimensions of interview structure, in which they found six factors that could be used to describe structured interviews. However, their factor structure was obtained from the perspective of the applicant rather than the interviewer and was based on how positively or negatively students rated the desirability of a comprehensive list of individual structure practices. Unfortunately, we know little about how much structure makes an interview a "structured" interview. If structure is indeed multidimensional and continuous, there are potentially many patterns of structure practices that could have varying effects on applicant and interviewer impressions. It is possible that the pattern of structure is more important than the average or overall level of structure employed. Before this question can be addressed, we first need to understand what factors underlie interview practices.
In the absence of a dominant model for the dimensionality of interview structure practices, it is possible that any of the above proposed factor models (e.g., Campion et al.'s two-factor or 15-facet model, Gilliland and Steiner's (1999) eight-factor model, Huffcutt et al.'s four-level, two-dimensional model, or Hysong and Dipboye's six-factor model) may emerge. As such, we propose the following as an exploratory research question:

Research Question 1: What is the underlying factor structure of employment interview practices?

Antecedents of Interviewers' Use of Interview Structure

The second goal of this investigation was to understand what conditions lead interviewers to employ various interview structure elements.
We examined several possible antecedents of interview structure that we
detail next.
Interviewer training. Most interview practitioners and researchers
agree that formal interviewer training is essential to successful recruiting
and selection practices. Researchers argue that formal training can be
used to improve a variety of interviewer tasks including establishing valid
criteria for job analysis, evaluating candidates more effectively (e.g., Day
& Sulsky, 1995; Dougherty, Ebert, & Callender, 1986), improving rapport
(Gatewood, Lahiff, Deter, & Hargrove, 1989), and improving the recruiting
function of the interview (Chapman & Rowe, 2002; Rynes, 1989).
Despite the widespread endorsement of interviewer training, the extent
to which interviewers are receiving training and whether this training is
effective remains an area that has been neglected in the literature (Palmer et al., 1999). Indeed, the existing research has been mixed regarding the efficacy of training in improving interviewer rating effectiveness (see Palmer et al., 1999; Posthuma, Morgeson, & Campion, 2002, for detailed reviews). Furthermore, little attention has been paid to what elements of interviewer training might lead to increased adherence to interview structure (Palmer et al., 1999; Schmitt, 1999). Training has sometimes been conceptualized as a component of interview structure (e.g., Campion et al., 1997). However, as our approach was to examine the factor structure of interviewer practices, we classified interviewer training as an antecedent to structure rather than an activity carried out as a part of the interview. Most interview researchers agree, however, that interviewer training ought to lead to greater use of interview structure (e.g., Campion et al., 1997) and that formal training is thought to be the most popular way to inform interviewers about the advantages of adopting a more controlled, structured interview (e.g., Dipboye, 1992). As such, we hypothesize that:

Hypothesis 1: Formally trained interviewers are more likely to structure their interviews than untrained interviewers.
Interview focus. Rynes (1989) highlighted the duality of the employment interview as both a selection device and a recruiting tool. Classifying this dual purpose of employment interviews as "interview focus," Rynes also noted that there might be differences in the extent to which interviewers focus on the selection function or the recruiting function. Several
researchers have argued that these roles may conflict in that a greater emphasis on selection could reduce the attractiveness of the organization and
a greater emphasis on recruiting could reduce the validity of selection
decisions (Chapman & Rowe, 2002; Harris, 1999; Hysong & Dipboye,
1998). Barber, Hollenbeck, Tower, and Phillips (1994) manipulated interview focus so that applicants received either a recruitment interview or
one that combined recruitment and selection elements. They found that
student applicants for a part-time research assistant position reacted more
favorably to a combined recruitment/selection interview than those who
received only the recruitment portion of their interview.
Despite the dual function of the employment interview, researchers
have largely concentrated on how to improve the selection side, and as a
result, most suggestions for structured interviews are targeted at improving selection (Campion et al., 1997; Posthuma et al., 2002). It follows that
interviewers who are primarily interested in identifying suitable candidates (selection focus) would benefit most from employing higher levels
of interview structure. Accordingly, it is likely that interview focus will
predict interviewers' use of the structured interview, and we hypothesize
that:
Hypothesis 2: Interviewers with a selection focus will employ more highly
structured interviews than those with a recruitment focus.


Interviewer Reactions to Interview Structure

Not surprisingly, most researchers favor the use of highly structured interviews due to their enhanced psychometric properties (e.g., Campion et al., 1988; Conway, Jako, & Goodman, 1995) and superior predictive validity (e.g., McDaniel et al., 1994; Wiesner & Cronshaw, 1988).
Notwithstanding the widespread endorsement of structured interviews by
academics, many practitioners have resisted adopting structured practices
for a variety of reasons (Chapman & Rowe, 2002; Hysong & Dipboye, 1998; Latham & Finnegan, 1993; Schmitt, 1999). Despite the importance
of this topic, we know little about how interviewers perceive their highly
structured interviews. One exception (Latham & Finnegan, 1993) suggested that interviewers held negative attitudes toward highly structured
interviews. Schmitt (1999) proposed that following detailed guidelines
might lead interviewers to perceive a loss of control over the interview
and selection decision, which, in turn, might lead them to resist employing
higher levels of interview structure. Others have also predicted that interviewers would react negatively only to some elements of interview structure (Campion et al. 1997; Gilliland & Steiner, 1999). Interviewers must
not only have more positive affective reactions to structured interviews
but also believe that their interviews are efficacious for the recruitment
and selection functions if they are to continue using them. It has been suggested that the vast majority of interview structuring practices are designed
to increase predictive validity rather than improve the effectiveness of the
recruiting function of the interview (e.g., Campion et al., 1997; Chapman,
Uggerslev, & Webster, 2003). Thus, trained interviewers should be expected to receive the bulk of their training in practices to enhance selection,
possibly to the detriment of recruiting. Determining whether interviewers
react more positively based on the level of interview structure and whether
interview structure influences the perceived efficacy of the interview for
recruiting and selection would help us understand the potential sources of
resistance to structure. Accordingly, we hypothesize that:
Hypothesis 3a: Interviewers will report more positive affective reactions
to their interviews when they are less structured.
Hypothesis 3b: Interviewers employing higher levels of interview structure
will perceive their interviews to be (i) more efficacious for selection and (ii)
less efficacious for recruiting than those employing lower levels of structure.

Beyond an examination of reactions to interview structure as a whole, it is important to identify whether there are specific structural components that affect interviewers' perceptions of their interviewing process. Therefore, in addition to testing interviewer reactions to overall interview structure, we also provide direct tests of each of Campion et al.'s (1997) predictions. However, for brevity's sake, we refer readers to Campion et al. for the full rationale for their predictions.

Applicant Reactions to Interview Structure

Interest in how applicants react to selection procedures and perceptions of procedural justice in particular has been growing rapidly in I-O
psychology (see Gilliland, 1993; Gilliland & Steiner, 1999). Negative applicant reactions have been predicted to cause premature withdrawal from
selection procedures, negative public relations for organizations, reduced
attractiveness of organizations, and rejection of job offers (e.g., Chapman
et al., 2003; Gilliland, 1993; Smither, Reilly, Millsap, Pearlman, & Stoffey,
1993; Truxillo, Bauer, Campion, & Paronto, 2002). We suggest that the
dual roles of selection and recruitment (Rynes, 1989) make the employment interview particularly susceptible to negative applicant reactions.
Furthermore, the intense interpersonal interaction that occurs in employment interviews and the increased opportunities to make mistakes that might offend applicants make applicant reactions a vital consideration for interviewers.
It has been proposed that applicants might view elements of interview
structure as being more fair (Campion et al., 1997; Gilliland & Steiner,
1999), and there are compelling reasons why this might be so. The consistency of questioning and rating individuals should reassure applicants
that they are being treated equally and fairly (Gilliland & Steiner, 1999).
One might expect that being treated equally would lead to more favorable
impressions of the organization (Gilliland, 1993). However, with the exception of one study in which interview structure did not predict attraction to the firm or positive recruiter perceptions (Turban & Dougherty, 1992), the
empirical evidence collected on applicant reactions to interview structure
suggests that applicants tend to react negatively to highly structured interviews (Chapman & Rowe, 2002; Hysong & Dipboye, 1998; Latham
& Finnegan, 1993). One explanation for these negative reactions is that
applicants are motivated to present themselves favorably to interviewers
and that highly structured interviews reduce their ability to manage their
impressions (Chapman & Rowe, 2002; Posthuma et al., 2002).
Most studies finding negative applicant reactions have measured applicant perceptions of procedural justice only. It is possible that factors other
than procedural justice reactions might be of more importance for organizational attractiveness (e.g., Gilliland & Steiner, 1999; Ryan & Ployhart,
2000). Furthermore, as noted earlier, others have suggested that applicants
might react to individual elements of structured interviews rather than to
overall structure (Campion et al., 1997; Gilliland & Steiner, 1999).


Accordingly, there are two goals related to applicant reactions that are addressed in the present study. The first is to empirically test how applicants react to interview structure factors derived from interviewer behaviors, and the second is to explore applicant reactions using variables other than procedural justice. Again, given the large number of predictions and the limited space available, the reader is referred to the original articles for a more detailed explanation of individual hypotheses. As such, we hypothesize that:
Hypothesis 4: Applicants will react negatively to higher levels of interview
structure. Specifically, applicants will (i) experience more negative perceptions of procedural justice, (ii) perceive greater difficulty with the interview,
and (iii) express lower intentions to accept a job offer.

Method

The data for Sample A of this study were part of a larger study involving 2,300 employers conducting interviews for 4-month cooperative education work terms at a large North American university campus over a 2-year period. Data were ultimately collected from two samples (A and B) as described below.
Participants

Two samples were used to conduct the analyses. Results for Sample A
were drawn from interviews conducted by 1,500 employers with approximately 4,000 applicants over a 3-week period. After removing interviews
conducted by telephone and videoconference, our final sample consisted
of data from 812 applicants (mean age = 20.57 years, SD = 1.88; 55.4%
men) who engaged in face-to-face interviews conducted by 428 interviewers from 338 organizations representing a wide variety of industries (28%
response rate for interviewers). Unfortunately, exact participation rates are
unavailable for the applicants. However, feedback from research assistants
collecting the data indicated a high response rate (approximately 80%). In
exchange for their participation, interviewers were promised a synopsis of
the results when the study was completed. Men conducted 58.7% of the
interviews, women conducted 19.6% of the interviews, and 21.3% of the
interviews were conducted by both men and women.
Participants from Sample B were recruited approximately 1 year later
in the same manner as Sample A participants. Furthermore, Sample B
respondents were instructed not to complete the survey if they had participated in the earlier study. Sample B included 164 interviewers from
a population of 1,000 organizations approached (16.4% response rate)
during their campus recruiting activities. The interviewers represented a wide variety of industries and came from a variety of functions and positions within their companies, including managers (41%), technical personnel (32.3%), HR representatives (11.8%), and others (6.2%). Some interviewers did not complete the position item (8.2%). Men conducted 51% of the interviews, women conducted 24.8% of the interviews, and 7.3% were conducted with both men and women present (15.5% did not respond to the gender item). Company size varied considerably, from small businesses with fewer than 10 employees to large multinational organizations with tens of thousands of employees (M = 6,888 employees, SD = 17,830). The average interview length was reported to be 30.57 minutes (SD = 10.14).
Measures

Questionnaires were completed by both interviewers and their applicants in Sample A immediately following the interview and by interviewers only in Sample B. The specific measures given to each are detailed
below.
Interviewer

(a) Structure: For Sample A, a 16-item measure of interview structure (see Table 1) was created based on Campion et al.'s (1997) description of interview structure elements. Interviewers responded to each item by indicating how frequently they used each structure element in their interviews on a 7-point scale ranging from 1 = never to 7 = always. They were also asked to indicate the length of their interview in minutes. Information on the number of interviewers was obtained from applicant data and matched with interviewer ratings of the degree of structure in their interviews. The 16-item measure from Sample A was expanded to 36 items for Sample B (see Table 2).
(b) Training: Interviewers in both samples were asked to indicate whether they had participated in formal training on how to conduct interviews (coded 1 = yes and 0 = no). Those who responded yes were asked to complete a 15-item questionnaire indicating what was covered in their training (see Table 3). Interviewers responded to whether each element was either included in their training (coded 1) or not included in their training (coded 0). These training components were also based on interview training elements described in the narrative by Campion et al. (1997). Finally, interviewers were also asked about the number of hours of interviewer training they received.

TABLE 1
Sample A: Item Loadings on Three Factors of Interview Structure

[The body of this table was flattened during extraction: the per-item means, SDs, and factor loadings could not be reliably realigned with their items. The table reports the mean, SD, and loadings on three factors (Evaluation Standardization, Question Sophistication, Question Consistency) for each of the following items.]

Items:
I use anchored rating scales to evaluate the candidate's response to each question
Decisions about the applicant are made by combining scores statistically, rather than making a global impression of their attractiveness
Each answer is rated against an ideal response
I rate the applicant on several dimensions independently (e.g., communication skills, thinking skills)
I use behavioral questions designed to get applicants to relate specific accomplishments to the requirements of the job
I use hypothetical or situational questions
I follow up answers with probing questions
I prompt candidates and allow them to elaborate on their answers
I ask the same questions to every candidate
The same interviewer asks the questions to each of the applicants
Questions are linked to a job description

Items not used:
I look at other information such as resumes, grades, or test scores prior to the interview
Number of interviewers (obtained from applicants)
Interview length in minutes
I take detailed notes of specific behaviors and statements made by the applicant during the interview
I discourage the applicant from asking questions during the interview or only allow their questions at the end of the interview

TABLE 2
Sample B: Estimated Item Loadings from CFA on Four Factors of Interview Structure

[The body of this table was flattened during extraction: the per-item means, SDs, and CFA loadings could not be reliably realigned with their items. The table reports the mean, SD, and standardized loadings on four factors (Evaluation Standardization, Question Sophistication, Question Consistency, Rapport Building) for the following items; (R) indicates a reverse-coded item.]

I use a formal rating system that I apply to each candidate
I use anchored rating scales to evaluate the candidate's response to each question
I score the interview numerically
Decisions about the applicant are made by combining scores statistically, rather than making a global impression of their attractiveness
Each answer is rated against an ideal response
I make my decisions based on gut feelings about the candidates (R)
I use hypothetical or situational questions
I ask questions about how a candidate would go about performing a task
I use behavioral questions designed to get applicants to relate specific accomplishments to the requirements of the job
I keep my questions general rather than overly specific (R)
My questions are consistent across candidates
I have a list of questions I ask every candidate
I ask questions in the same order to every candidate
I ask the same questions to every candidate
Questions are tailored to each candidate (R)
Questions are linked to a job description
The same interviewer(s) asks the questions to each of the applicants
I ask questions to get to know the candidate as a person
I ask the candidates personal questions (about hobbies etc.)
I begin the interview with light conversation

TABLE 3
Training Content for Interviewers with Formal Interview Training

                                                          % included
Content area                                          Sample A(a)  Sample B(b)
Job requirements for the position(s) being filled        90.8         86.7
Background and purpose of the interview                  88.7         95.6
Legal issues                                             82.7         82.2
How to write interview questions                         80.6         86.7
How to evaluate answers                                  79.3         82.2
Rapport building                                         79.1         84.4
How to make decisions from interview data                77.9         84.4
How to select questions/probes from a question bank      75.5         66.7
Practice role playing                                    73.2         86.7
How to use questions that were prepared previously       72.1         75.0
Note taking                                              68.1         71.1
Job analysis                                             65.0         67.4
Recruiting the candidate (promoting the organization)    63.6         62.2
How to avoid rating errors                               50.4         55.6
Realistic job previews                                   47.3         57.8
Videotaping role playing with feedback                   29.9         40.0

(a) Only 32% reported any formal training.
(b) Only 28% reported any formal training.

(c) Interview focus: Due to questionnaire space limitations for both samples, a single item ranging from 1 to 7 was used to capture the extent to which interviewers described the purpose of their interview as being either predominantly recruiting/attraction (1) or predominantly screening/selection (7).
(d) Interviewer reactions: A single item was used in Samples A and B to measure interviewers' affective reactions to conducting the interview (rated 1 to 7, with 7 indicating a more positive reaction). Two items measured the interviewers' perceived efficacy of their interviews for (a) the ability to identify the best candidates for the position and (b) the ability to attract applicants to work for their organization (rated from 1 to 7; Sample B only).

Applicant

Two questionnaires were completed by applicants in Sample A, one immediately before their interview (demographic information), and one immediately after the interview assessing the following variables:
(a) Procedural justice: Participants responded to five items based on Gilliland's model of procedural justice in personnel selection (e.g., Gilliland, 1993, 1995). A sample item was, "The interviewer questions were relevant to the job," rated on a 7-point scale where 1 = strongly disagree to 7 = strongly agree (α = .80).
(b) Perceived difficulty: Participants were asked to indicate how difficult the interview was on eight items (e.g., "I had difficulty coming up with good answers to the interviewer's questions") using a 7-point scale ranging from 1 = strongly disagree to 7 = strongly agree (α = .85).
(c) Post-interview intentions: Participants were asked to indicate their likelihood of accepting an offer based on an item from Powell and Goulet (1996) ranging from 0% to 100%.
Procedure

The data were collected from interviews conducted to select applicants for 4-month, full-time positions in a wide variety of industries and job titles. All of the positions required some level of university education, and the majority involved engineering and computer science. For Samples A and B, a cover letter requesting participation and a two-page questionnaire containing the interviewer measures described above were inserted into interviewers' information packages when they arrived on campus. Interviewers were asked to deposit completed questionnaires into a collection box prominently displayed in the lounge provided to interviewers.
Applicants in Sample A were approached immediately preceding their interviews and asked to complete the short pre-interview questionnaire. They were asked to return following their interviews to complete a second, longer questionnaire. Later, the applicant and interviewer data (Sample A) were matched based on the identifying information provided by the university with permission from both applicants and interviewers.
Analyses

Our hypotheses and research questions were tested across two samples.
In order to examine the factor structure of employment interview practices
(R1), a principal axis factor analysis with oblique rotation was conducted
on the 16 items related to interview structure from Sample A. Sample B
data were used to conduct a confirmatory factor analysis (CFA) based on
the exploratory findings from Sample A. Hypothesis 1 was tested by conducting multivariate GLM analyses examining mean differences in structure levels between interviewers who reported formal training and those
who did not for both Samples A and B. Hypothesis 2 was tested in both
samples by examining zero-order correlations between interview focus
(recruiting vs. selection) and the interview structure factors. Hypothesis 3 was tested in Sample A by regressing interviewer affective reactions on the structure factors obtained from Sample A. Hypothesis 3 was also tested similarly in Sample B but with the addition of a fourth structure factor and two additional interviewer reaction variables (selection and recruiting efficacy). Hypothesis 4 was tested in Sample A by regressing each of the three applicant reaction variables on the obtained interview structure factors.
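To make this analytic sequence concrete, the following is a minimal sketch of the exploratory step in Python (not from the original article). It assumes a pandas DataFrame sample_a whose hypothetical columns item_01 through item_16 hold the 16 structure items, and uses the factor_analyzer package; method="principal" requests principal axis extraction, with "minres" as a close alternative.

    import pandas as pd
    from factor_analyzer import FactorAnalyzer

    # Hypothetical column names for the 16 Sample A structure items.
    items = [f"item_{i:02d}" for i in range(1, 17)]
    X = sample_a[items].dropna()

    # Principal axis factoring with an oblique (oblimin) rotation,
    # forcing the three-factor solution retained in Sample A.
    efa = FactorAnalyzer(n_factors=3, method="principal", rotation="oblimin")
    efa.fit(X)

    loadings = pd.DataFrame(
        efa.loadings_, index=items,
        columns=["EvalStandardization", "QSophistication", "QConsistency"],
    )
    print(loadings.round(2))
    print(efa.get_factor_variance())  # variance explained by each factor

An eigenvalue and scree inspection (efa.get_eigenvalues()) would precede the forced three-factor run, mirroring the retention decision described in the Results.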
Results

The results of the principal axis factor analysis for Research Question 1 initially extracted five factors with eigenvalues greater than 1 from the 16 items administered in Sample A. However, a combination of examining the scree test, the percentage of variability explained by the factors, and assessing whether the factors could be interpreted meaningfully suggested that three factors would best describe the amount of structure employed by an interviewer. Accordingly, a principal axis factor analysis with oblique rotation was conducted forcing three factors (see Table 1). This yielded a satisfactory and interpretable set of factors labeled Evaluation Standardization, Question Sophistication, and Question Consistency. This solution explained 43.45% of the variability in the interview structure items. The first factor, Evaluation Standardization, contained four items measuring the extent to which the interviewer uses standardized and numeric scoring procedures (α = .71). The second factor, Question Sophistication, consisted of three items that measure the extent to which the interviewer used question content that corresponded to formats known to be more valid and recommended by researchers, such as job-related behavioral questions and situational questions (α = .67). The third factor, Question Consistency, consisted of three items related to asking the same questions, in the same order, to every candidate (α = .45). Five items that did not load on any of the three factors were discarded. The scales were computed using a unit weighting of the items loading on these factors and were used for the remaining analyses based on Sample A.
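A sketch of this scale-construction step follows, again with hypothetical item-to-factor assignments (the real assignments follow Table 1). Unit weighting means each retained scale is simply the mean of its items, and pingouin's cronbach_alpha reproduces the kind of internal consistency estimates reported above.

    import pingouin as pg

    # Hypothetical item assignments from the forced three-factor solution.
    factors = {
        "evaluation_standardization": ["item_01", "item_02", "item_03", "item_04"],
        "question_sophistication": ["item_05", "item_06", "item_07"],
        "question_consistency": ["item_08", "item_09", "item_10"],
    }

    for scale, cols in factors.items():
        # Unit weighting: the scale score is the simple mean of its items.
        sample_a[scale] = sample_a[cols].mean(axis=1)
        alpha, ci = pg.cronbach_alpha(data=sample_a[cols].dropna())
        print(f"{scale}: alpha = {alpha:.2f}")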
In order to improve the psychometric properties of these scales and to expand the interviewer behaviors to include those designed to establish rapport with the applicant, 20 additional items were added for examination in Sample B. Items were developed for each of the three factors from Sample A, and a set of items was developed to explore a potential fourth factor examining interview content that was not job related, such as asking questions about an applicant's hobbies or making light conversation. This practice is frequently espoused in the popular interviewing literature in order to relax applicants and establish rapport. It also reflects Rynes's (1989) dual nature of the interview by including recruiting-oriented behaviors. However, we expected that this factor would be negatively related to the other factors of interview structure as it contravenes Huffcutt and Arthur's (1994) definition of higher structure in that rapport building opens up the interview to greater variability of topics discussed across applicants. Furthermore, researchers have suggested that engaging in non-job-related conversation could bias interviewers, introduce contamination, and fail to restrict the applicant's ability to control ancillary information in the interview (e.g., Campion et al., 1997).
A CFA was conducted using the three obtained factors from Sample A and a fourth factor that was expected based on the addition of rapport-building items. Using procedures recommended by Kline (1988), we first removed items that exhibited significant cross-loadings and those that did not load cleanly on one of the four expected factors. In order to be consistent with Sample A, which permitted oblique factors, we allowed the four expected factors to correlate, as this is expected with interviewers employing overall higher or lower levels of structure. The results of the CFA demonstrated that four separate factors adequately represented the underlying relationships among the exogenous variables (χ2(165) = 252.23, p < .01; CFI = .91, GFI = .88, TLI = .90, RMSEA = .06) and suggest a good fit for the four-factor model.(1) Items that loaded poorly on their factors were removed to improve the reliability of the final scales. The final scales included the three factors obtained from Sample A: Question Consistency (seven items, α = .84), Evaluation Standardization (six items, α = .81), and Question Sophistication (four items, α = .66), plus Rapport Building as a fourth factor (three items, α = .50). The lower reliabilities obtained for Question Sophistication and Rapport Building are perhaps not surprising. For example, some interviewers may use only one method of questioning (e.g., behavioral questions but not situational questions) or may only practice one method of establishing rapport. Despite the low reliabilities of these two factors, they had interesting relationships with several antecedents and consequences examined in the study. Thus, we chose to retain these factors for additional (albeit weaker) analyses.
The scales were then computed using the average of unit-weighted items for each of the four factors. These four factors were then used in the remaining analyses involving Sample B. Standardized item loadings on the four factors obtained from the CFA are provided in Table 2. As expected, the CFA confirmed that there were significant correlations among the structure factors. Specifically, the strongest relationship was found between Evaluation Standardization and Questioning Consistency (r = .51, p < .01), whereas smaller significant relationships were found between Rapport Building and Evaluation Standardization (r = .24, p < .05), Evaluation Standardization and Question Sophistication (r = .21, p < .05), and Question Consistency and Question Sophistication (r = .16, p < .05).

(1) At the suggestion of an anonymous reviewer, we also compared this solution to a one-factor model and two-factor models based on Campion et al. (1997) and Huffcutt and Arthur (1994). The four-factor model proved to be a better fit to the data than these alternative models based on significant χ2 differences.
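The four-factor CFA could be specified as follows: a sketch using the semopy package with hypothetical indicator names (the real item-factor assignments follow Table 2). Letting the latent factors covary mirrors the oblique Sample A solution, and calc_stats returns the chi-square and approximate fit indices of the kind reported above.

    import semopy

    # Hypothetical indicator names; counts match the final scales
    # (7 consistency, 6 standardization, 4 sophistication, 3 rapport).
    desc = """
    QuestionConsistency =~ qc1 + qc2 + qc3 + qc4 + qc5 + qc6 + qc7
    EvaluationStandardization =~ es1 + es2 + es3 + es4 + es5 + es6
    QuestionSophistication =~ qs1 + qs2 + qs3 + qs4
    RapportBuilding =~ rb1 + rb2 + rb3
    """

    model = semopy.Model(desc)   # latent covariances are estimated by default
    model.fit(sample_b)          # sample_b: DataFrame of the 20 retained items
    print(model.inspect())             # loadings and factor covariances
    print(semopy.calc_stats(model).T)  # chi-square, CFI, TLI, RMSEA, etc.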
Individual item statistics for the interview structure and outcome variables and correlations among these variables assessed in Sample A are provided in Table 4. Descriptive statistics and zero-order correlations among
the variables in Sample B are detailed in Table 5.
Antecedents to Using Structured Interviews

Training. Only 34% of the interviewers in Sample A and 28% of interviewers in Sample B reported having any formal interview training. For
those who completed some formal training, Table 3 reports the percentage of interviewers who received training on specific training elements
for each sample. The average training time for these interviewers was
4.31 hours.
Hypothesis 1 predicted that formally trained interviewers would be more likely to structure their interviews. In order to test this hypothesis in Sample A, a 2 × 3 mixed-model GLM test with Type III sums of squares was conducted to assess whether formal training is related to the overall use of structure or if formal training predicts the use of certain elements of interview structure (two levels of interview training were tested as a between-subjects effect, and the three interview structure elements were tested as within-subject effects for Sample A). The same analysis was replicated in Sample B except that there were four within-subject structure elements examined in a 2 × 4 mixed-model GLM analysis. As predicted in Hypothesis 1, interviewers who received formal training were more likely to use higher levels of structure in their interviews. Interestingly, this was not uniformly true across individual structure factors. For Sample A, the multivariate test of the relationship between interview structure and formal training showed a significant main effect of interviewer training using Pillai's Trace statistic, F(2, 354) = 249.36, p < .001, η2 = .59. However, there was a significant interaction between structure factor and interviewer training, which suggested that training did not have a uniform effect on the structure levels for all structure factors (Pillai's Trace, F(2, 354) = 17.38, p < .001, η2 = .09). Figure 1 shows a significant increase in structure levels for both Evaluation Standardization and Question Sophistication but no significant increase in Questioning Consistency due to formal interviewer training. The results of the multivariate analysis for Sample B were similar to those in Sample A.

TABLE 4
Descriptive Statistics and Correlations Among Variables for Sample A

Variable                             N      Mean     SD
1. Interviewer gender(a)             144     1.53      .74
2. Length of interview (minutes)     273    30.46    10.55
3. Interview focus(b)                337     4.71     1.90
4. Formal training(c)                357      .31      .46
5. Evaluation Standardization        363     3.67     1.21
6. Question Sophistication           363     4.77     1.12
7. Question Consistency              363     5.27     1.11
8. Interviewer affective reaction    357     5.77     1.08
9. Procedural justice                144     5.90      .61
10. Perceived difficulty             143     3.11     1.12
11. Acceptance intentions            143    74.06    22.37

[The zero-order correlation entries of this table, and their significance markers, could not be reliably realigned from the extracted text and are omitted here.]

(a) Coded 1 = men and 2 = women and excludes interviews conducted by multiple interviewers.
(b) 1-7 where 1 = total recruiting focus and 7 = total selection focus.
(c) Coded: 0 = no, 1 = yes.
* p < .05; ** p < .01.

TABLE 5
Descriptive Statistics and Correlations Among Variables for Sample B

Variable                             N      Mean     SD
1. Number of employees               140    6,888   17,830
2. Length of interview (minutes)     160    30.57    10.14
3. Formal training(a)                160      .28      .45
4. Interview focus(b)                158     4.44     2.17
5. Question Consistency              161     4.78     1.27
6. Evaluation Standardization        161     2.94     1.33
7. Question Sophistication           161     4.19     1.18
8. Rapport Building                  161     4.89     1.19
9. Interviewer affective reaction    161     5.63     1.12
10. Selection efficacy               161     5.81      .76
11. Recruiting efficacy              160     5.20     1.18

[The zero-order correlation entries of this table, and their significance markers, could not be reliably realigned from the extracted text and are omitted here.]

(a) Coded: 0 = no, 1 = yes.
(b) 1-7 where 1 = total recruiting focus and 7 = total selection focus.
* p < .05; ** p < .01.

Figure 1: Extent of Interview Structure Use as a Function of Formal Training in Sample A. [Bar chart: degree of structure (0-7) for Evaluation Standardization, Question Sophistication, and Question Consistency, shown separately for untrained and trained interviewers.]

Training had a main effect on the level of structure used (Pillai's Trace, F(3, 156) = 82.60, p < .001, η2 = .61). Furthermore, as in Sample A, there was a significant interaction between structure factor and training on level of structure used (Pillai's Trace, F(3, 156) = 3.16, p < .05, η2 = .06). There was a significant increase in the use of Evaluation Standardization, Question Sophistication, and Questioning Consistency; however, the amount of Rapport Building was not affected by formal training (one would expect lower levels of Rapport Building to be associated with training, as this is consistent with higher structure). Thus, Hypothesis 1 was largely supported across both samples.
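One way to approximate the between-subjects portion of this analysis is a MANOVA treating the structure scales as multiple dependent variables, which yields Pillai's Trace. The sketch below uses statsmodels and the hypothetical column names introduced earlier; the published test was a full 2 × 3 mixed-model GLM, so the within-subject factor and its interaction with training are not reproduced here.

    from statsmodels.multivariate.manova import MANOVA

    # formal_training is 0/1; the three scale scores were built earlier.
    manova = MANOVA.from_formula(
        "evaluation_standardization + question_sophistication"
        " + question_consistency ~ formal_training",
        data=sample_a,
    )
    print(manova.mv_test())  # reports Pillai's trace among other statistics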
Interview focus. An examination of the zero-order correlations between interview focus and interview structure elements shows that interview focus was associated with the level of structure used in the employment interview. Small but significant relationships were found between
interview focus and Evaluation Standardization (r = .17, p < .05) and
Question Consistency (r = .14, p < .05) such that a greater selection focus
was associated with higher levels of these structure factors but Question
Sophistication was unaffected by interview focus. In Sample B, a similar pattern emerged with the size of the correlations being similar for
Evaluation Standardization (r = .17, p < .05) and Question Consistency
(r = .10, ns) whereas neither Question Sophistication nor Rapport Building appeared to be affected by interview focus. Thus, some support was
found for Hypothesis 2.
Interviewer Reactions to Structure Elements

Interviewer affective reactions. To examine Hypothesis 3a, which predicted that interviewers would report more positive reactions to less structured interviews, we regressed interviewer affective reactions on the three interview structure elements entered as a block for Sample A. Of the three factors, only Question Sophistication was a significant predictor of interviewer affective reactions, but in the opposite direction of what was predicted (β = .23, p < .05), such that higher levels of Question Sophistication were associated with more positive interviewer reactions. For Sample B, the three interviewer reaction variables were each regressed separately on the four interview structure factors. Interviewer affective reactions were significantly predicted by interview structure elements (R = .27, p < .05), although this relationship was driven predominantly by Rapport Building (β = .17, p < .05) and, consistent with the findings from Sample A, Question Sophistication (β = .19, p < .05).
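A sketch of these block regressions with statsmodels, using the hypothetical column names from the earlier sketches; z-scoring the variables first makes the coefficients read like the standardized betas reported here.

    import statsmodels.formula.api as smf

    # Standardize so coefficients are comparable to reported betas.
    cols = ["affective_reaction", "evaluation_standardization",
            "question_sophistication", "question_consistency"]
    z = sample_a[cols].dropna().apply(lambda s: (s - s.mean()) / s.std())

    fit = smf.ols(
        "affective_reaction ~ evaluation_standardization"
        " + question_sophistication + question_consistency",
        data=z,
    ).fit()
    print(fit.summary())  # block R, coefficients, and p-values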
Perceived selection and recruiting efficacy. Contrary to Hypothesis 3b(i), perceived selection efficacy was not significantly predicted by the four structure elements (R = .19, ns). Interviewers' perceived recruiting efficacy was significantly predicted by the level of interview structure (R = .30, p < .05), although the results provided mixed support for Hypothesis 3b(ii). A lower level of structure (i.e., higher Rapport Building) was positively related to perceived recruiting efficacy (β = .21, p < .05), and higher structure in the form of Question Sophistication was also perceived to increase recruiting efficacy (β = .17, p < .05). Surprisingly, interviewers who reported having a stronger selection focus actually had lower perceptions of the selection efficacy of their interviews (r = −.17, p < .05). Furthermore, reporting a stronger recruiting focus did not detract from interviewers' perceptions of the efficacy of their interviews for selecting the best applicants.
Interviewer training and perceived efficacy. Interestingly, trained interviewers in Sample B perceived their interviews (on a 7-point scale) as being more efficacious for attracting employees (M = 5.57) than untrained interviewers (M = 5.03; t = 2.56, p < .05). Surprisingly, no differences were found in the perceived efficacy of the selection function based on receiving formal interview training.
Testing Campion et al.'s (1997) predictions for interviewer reactions. Campion et al. made specific predictions regarding reactions to 15 elements of interview structure. Given the scope of the present study and the large number of predictions proposed by Campion et al. and Gilliland and Steiner (1999), we chose to focus primarily on the results arising from the factors identified in this study, which may present a more parsimonious solution than testing each small facet of the structure. However, as we had constructed items that directly measure these facets, we felt it would be useful to supply the reader with the results of these finer-grained predictions made by Campion et al. (see Table 6).

TABLE 6
Results of Predictions of Structure Element Effects for Candidate and Interviewer Reactions from Campion et al. (1997) with Items Developed for This Study (Sample A Only)

[The body of this table was flattened during extraction: the predicted signs and the observed correlations with candidate procedural justice (PJ), candidate post-interview intentions (PI), and interviewer reactions could not be reliably realigned. The table pairs each of Campion et al.'s (1997) 15 structure elements with the following items developed for this study and completed by interviewers.]

Content
1. Job analysis: Questions are linked to a job description
2. Same questions: I ask the same questions to every candidate
3. Limit prompting: (i) I prompt candidates and allow them to elaborate on their answers; (ii) I follow up answers with probing questions
4. Better questions: (i) I use hypothetical or situational questions; (ii) I use behavioral questions designed to get applicants to relate specific accomplishments to the requirements of the job
5. Longer interview: Interview length in minutes
6. Control ancillary info: I look at other information such as resumes, grades, or test scores prior to the interview
7. No questions from candidate: I discourage the applicant from asking questions during the interview or only allow their questions at the end of the interview

Evaluation
8. Rate each answer on multiple scales: Each answer is rated against an ideal response
9. Anchored rating scales: I use anchored rating scales to evaluate the candidate's response to each question
10. Detailed notes: I take detailed notes of specific behaviors and statements made by the applicant during the interview
11. Multiple interviewers: Number of interviewers (obtained from applicants)
12. Same interviewer(s): The same interviewer asks the questions to each of the applicants
13. No discussion between interviewers: I discuss applicants with other interviewers between interviews (leave blank if you interview alone)
14. Training: Have you participated in formal training on how to conduct interviews?
15. Statistical prediction: Decisions about the applicant are made by combining scores statistically, rather than making a global impression of their attractiveness

Note. P = prediction by Campion et al. (1997); np = no prediction made; PJ = procedural justice; PI = post-interview intentions. * p < .05.

These results are largely consistent with those of Gilliland and Steiner and cover the largest number of interview structure elements.
As indicated in Table 6, four of the 13 predicted relationships between structure elements and interviewer reactions were supported. For example, significant relationships were found between the use of better questions and positive interviewer reactions (r = .12, p < .05 and r = .11, p < .05, for situational and behavioral questions, respectively). Furthermore, using anchored rating scales was related to more positive interviewer reactions (r = .15, p < .05). However, contrary to Campion et al.'s prediction, discouraging applicant questions until the end of the interview was positively related to interviewer reactions (r = .11, p < .05).
Applicant Reactions

Three multiple regression analyses were conducted with procedural justice, perceived difficulty, and acceptance intentions each regressed separately on the three interview structure factors determined in Sample A (applicant reaction data were not collected in Sample B). The result for each of these is reported below.
Procedural justice. Contrary to Hypothesis 4(i), there was no evidence of a direct relationship between a linear combination of interview structure elements and applicants' perceived procedural justice (R = .15, p > .05). Individual scale results also showed no relationships between structure items and procedural justice perceptions (see Table 4).
Perceived difficulty. Consistent with Hypothesis 4(ii), more highly structured interviews were perceived to be more difficult by applicants in Sample A (R = .25, p < .05). Individual structural elements were less influential than a linear combination, although an examination of the results indicates that Question Consistency and Question Sophistication represented the substantial beta weights in the equation (β = .15, p < .05 and β = .16, p < .05, respectively).
Acceptance intentions. When asked whether they intended to accept a job offer from the company immediately following their interview, applicants in Sample A did not appear to weigh the three interview structure factors measured (R = .19, p > .05). This was also true when looking at the zero-order relationships between structure scales and applicant intentions (Table 4).
Testing Campion et al.'s (1997) predictions for applicant reactions. Overall, 11 of Campion et al.'s 15 applicant reaction predictions were supported (see Table 6). Of note, applicants did not associate any structure components positively with procedural justice, but they viewed the discouragement of questions (r = −.17, p < .05) and having multiple interviewers (r = −.20, p < .05) as being less fair. Interestingly, post hoc analyses revealed that applicants were less likely to accept an offer from an organization when the interviewer had a selection focus than when a recruiting focus was reported (r = −.19, p < .05) or if the interviewer was trained versus untrained (r = −.10, p < .05).
Discussion

This investigation, comprising two separate samples, contributes to our understanding of the nature, antecedents, and consequences of interview structure. Our analyses suggest four factors of interview structure: Evaluation Standardization, Question Consistency, Question Sophistication, and Rapport Building. Building on this initial investigation, researchers can adopt this four-factor framework to examine a wide range of interview topics, including issues related to predictive validity and psychometric properties of structured interviews, applicant reactions, and recruiting success.
In examining the antecedents of interview structure, this study is the first to establish an empirical link between formal interviewer training and increased use of interview structure elements. Specifically, trained interviewers were more standardized and formalized in their evaluation processes, employed more sophisticated questioning strategies, and, in one sample, were more consistent in their questioning practices. Thus, we now have evidence that providing formal training to interviewers may help them realize any benefits of employing higher levels of structure. Our data also underscore the need for interviewers to receive greater training in recruiting practices. Two of the four least prevalent training practices reported by interviewers were the recruiting-related practices. This study also underscores the disconnect between interview research, interviewer training, and interviewer beliefs about their interviews: Untrained interviewers conducting unstructured interviews are brimming with confidence about their ability to predict future job performance. This does not bode well for improving the situation with respect to interviewer training. We need to do a better job of communicating the benefits of interviewer training and increased interview structure to organizations in general.
This study also clarifies some of the issues surrounding interviewer
acceptance of structured interviews. For example, Rapport Building,
higher levels of which are associated with less interview structure, is
viewed favorably, and Question Sophistication, which is typical of more
highly structured interviews, is also positively related to interviewer reactions. Thus, interviewers who are forced to adopt highly structured
interviews that do not permit Rapport Building may be particularly resistant to using them. Even trained interviewers, who seem to adhere to

CHAPMAN AND ZWEIG

697

researchers recommendations regarding Question Consistency, Evaluation Standardization, and Question Sophistication, simultaneously ignore
recommendations to reduce Rapport Building, which potentially contaminates an otherwise standardized procedure. This may be due to the interviewers awareness of the dual nature of the employment interview
and their strong belief that building rapport is necessary both to attract the best applicants and to encourage them to be more open in providing
information that is more predictive of future performance.
In addition to reacting positively to the interview process, it is important that interviewers perceive their structured processes as efficacious
for selection and recruitment. Surprisingly, none of the four interview
structure factors predicted perceived selection efficacy. Perceived recruiting efficacy was predicted by the extent to which the interviewer engaged in Rapport Building and employed higher levels of Question Sophistication. Although
building rapport has long been considered useful for recruiting purposes,
it is less clear why Question Sophistication might be associated with perceived recruiting efficacy. One possibility is that interviewers believe that
they will be seen as more professional by applicants if they employ more
sophistication in their questioning techniques. In other words, interviewers might believe their questioning techniques are a signal regarding the
professionalism of the organization as a whole (Rynes, Bretz, & Gerhart,
1991). Unfortunately, applicants did not appear to share this belief, as we found that Question Sophistication was not related to applicants' intentions to accept a job offer.
Contrary to previous empirical findings, interviewers did not perceive
that structure practices associated with selection validity detracted from
the recruiting efficacy of the interview, nor did they perceive that Rapport
Building detracted from selection efficacy. Overall, interviewers were very
confident that they could identify the best candidates regardless of the
amount of structure they employed.
Future research should investigate whether any of the individual structure factors contribute more to the predictive validity of interviews than
others and whether current approaches to examining the validity of popular interview structuring techniques (e.g., situational and behavioral interviews) confound the types of questions used with other important
structural factors. For example, Evaluation Standardization may be responsible for differences in validity observed between
behavioral/situational and unstructured interviews. It would also be useful
to establish whether certain combinations of these structure factors enhance the predictive validity of employment interviews. For instance, it is
possible that Question Consistency and Evaluation Standardization may
be sufficient to generate higher interview validities. It is equally plausible that Question Sophistication alone is sufficient to enhance structured interview validity. We also recommend that researchers test whether Rapport Building interacts with Question Sophistication to
enhance the predictive validity of interviews by relaxing candidates and encouraging them to be more honest in their responses, or whether establishing rapport injects non-job-related biasing information that detracts from
predictive validity.
Researchers also need to determine how applicants react to these facets of interview structure. Our findings were mixed with regard to the influence of interview structure on applicant reactions. Contrary
to Gilliland and Steiner (1999), little evidence was found linking higher
use of interview structure to applicant perceptions of procedural justice.
Although it is possible that interviewer explanations of how selection procedures are conducted would moderate the relationship between structure
and applicant reactions, a post hoc analysis using a single item examining applicants' perceived explanations of interview processes did not reveal an interaction between explanations and interview structure in predicting applicant reactions.
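To make this kind of moderation test concrete, a minimal sketch follows; it is an illustrative outline under assumed variable names and simulated data, not the analysis script used in this study.

# Minimal sketch of a moderated regression: does a perceived-explanation
# item moderate the effect of interview structure on applicant reactions?
# Variable names and data are illustrative assumptions, not the study's data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 300
structure = rng.normal(0, 1, n)      # standardized interview structure score
explanation = rng.normal(0, 1, n)    # single-item perceived explanation
reaction = 0.2 * structure + 0.3 * explanation + rng.normal(0, 1, n)

# Main effects plus the product term that carries the interaction
X = sm.add_constant(np.column_stack([structure, explanation,
                                     structure * explanation]))
fit = sm.OLS(reaction, X).fit()

# A nonsignificant coefficient on the product term (x3 in the summary)
# would mirror the null interaction reported above.
print(fit.summary())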
We also found that interview structure factors did not directly influence applicants' intentions to accept a job offer. However, when testing Campion et al.'s (1997) fine-grained predictions, we found that applicants
reacted to salient interview elements (e.g., length of interview; number of
interviewers), which is also consistent with Gilliland and Steiner's (1999)
suggestions. Applicants may view longer interviews as a signal that the
organization is interested in them and that they are simply reciprocating
(Rynes et al., 1991). This effect may help explain why our results related
to the focus of the interview departed from those of Barber et al. (1994), who found that applicants reacted more positively to a condition
that included both selection and recruiting elements. Because their combined condition was longer than the recruiting-only condition, their results
may be partially explained by the differences in interview length (which
we have found here to be positively related to applicant intentions). Another possibility is that the Barber et al. result is based on the recruiting
interview of a single organization, whereas our results include hundreds
of permutations of recruiting techniques. Furthermore, a poorly designed recruiting interview may be detrimental or come across as overselling the position,
whereas better recruiting techniques may be effective at increasing attraction while avoiding the appearance of desperation. Further investigation is
needed to clarify this outcome. Why applicants reacted positively to panel
interviews is not readily apparent. The relationship between the number of interviewers and interview length was not significant, thereby ruling
out the possibility that the number of interviewers increases interview
length and leads to more positive applicant perceptions.
Applicants did react negatively to Question Consistency and Question Sophistication. This is perhaps not surprising as employing more sophisticated questions will make it more challenging for applicants to manage
their impressions in a favorable manner. It is worrisome, however, that
perceived difficulty was significantly and negatively related to perceived
procedural justice and also negatively related to acceptance intentions.
These results suggest that interviewers may need to be concerned about
potential negative indirect effects of interview structure on recruiting. This
negative recruiting consequence may be offset by any increases in the predictive validity of the interview that result from increased structure.

Limitations

The fact that this study was conducted in a field setting is both a strength and a weakness of the design. The use of surveys limits conclusions about causal relationships among the variables, although a partial
replication of the results in a second sample is encouraging. Furthermore,
although this study involved gathering information from multiple sources
(e.g., interviewers and applicants), some of the analyses were conducted
with data obtained from a common source. Accordingly, the potential
exists for common method biases to influence some of the outcomes
(Podsakoff, MacKenzie, & Podsakoff, 2003). However, the study was
designed in such a way as to minimize the potential influence of common
method variance. For example, relationships examined from a common
source (i.e., the interviewer) were between self-reported behaviors and
attitudes rather than attitude–attitude relationships that are more likely to
create problems. In addition, the behavior questions regarding how the
interview was structured were simple, unambiguous, and highly specific,
making them less susceptible to common method bias (Podsakoff et al.,
2003). Furthermore, given that the results are inconsistent with what we
would expect to find if socially desirable responding were affecting our
findings (e.g., trained interviewers also reported engaging in higher levels of Rapport Building), we ruled out socially desirable responding as a
threat to our conclusions.
To gain access to the sample of real interviewers, we faced time and space limitations that necessitated the use of single items to operationalize some of the constructs. Wanous, Reichers, and Hudy (1997)
suggest that single-item measures are adequate if the constructs being
measured are sufficiently narrow or are unambiguous to respondents. We
believe that our single-item measures are sufficiently straightforward to
capture factual assessments of past events and recent experiences. Nevertheless, future research should employ multi-item measures. Furthermore, additional items should be added to enhance the psychometric properties of the Rapport Building and Question Sophistication scales.
Although this study provides the first available data on the structure of interview practices from real interviewers, the interviews examined here were exclusively campus interviews. Therefore, further work is needed
to confirm whether these findings would hold in other contexts or for
permanent positions. For example, the strength of applicant reactions to interview structure components might be attenuated by the fact that applicants were applying for short-term positions. Furthermore, organizations might not be as rigorous in training on-campus versus
off-campus interviewers or in requiring adherence to structure protocols
when conducting on-campus interviews. However, we speculate that differences in interviewer or applicant reactions, if any, would be minimal.
Discussions held with the interviewers, applicants, and administrators at
the university suggest that these positions, although temporary, are often
used as a means to gain full-time employment in these organizations. Interviewers are therefore motivated to find the best possible applicants for
potential long-term employment, and applicants are motivated to obtain placements that offer the most desirable paths to full-time employment following graduation. Finally, interviewers from larger organizations might
have been more likely to receive training and employ structured interviews.
However, to eliminate the possibility that organization size was responsible for any relationships between training and interview structure, we conducted a t-test comparing trained and untrained interviewers on organization size, which revealed no significant difference between the groups.
Furthermore, organization size was uncorrelated with interview structure
use.
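For readers who wish to run a comparable robustness check on their own data, a minimal sketch follows; the variable names and simulated values are illustrative assumptions and do not reproduce the study's dataset.

# Minimal sketch of the organization-size checks described above:
# (a) a t-test comparing trained and untrained interviewers on organization
# size, and (b) a correlation between organization size and structure use.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 200
trained = rng.integers(0, 2, n)        # 1 = formally trained interviewer
org_size = rng.lognormal(6.0, 1.0, n)  # hypothetical employee counts
structure = rng.normal(3.5, 0.8, n)    # composite interview structure score

t, p_t = stats.ttest_ind(org_size[trained == 1], org_size[trained == 0])
r, p_r = stats.pearsonr(org_size, structure)

# Nonsignificant results would parallel the pattern reported above.
print(f"t-test: t = {t:.2f}, p = {p_t:.3f}")
print(f"correlation: r = {r:.2f}, p = {p_r:.3f}")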
We hope that this study will encourage more research on topics related
to interview structure. We identified four factors of interview structure
that offer researchers a starting point for examining important issues
relating to the reliability, validity, and utility of structured interviews. Furthermore, we examined reactions to actual structured interviews from the
perspective of interviewers and applicants. The present findings suggest
that future research should treat interview structure as a multidimensional
and continuous construct rather than a single construct with varying levels.
These findings also provide encouraging evidence that interviewer training can enhance the likelihood of interviewers adopting some elements
of structured interviews. Lastly, identifying how patterns of structure use
and interactions of structure elements influence applicant and interviewer
reactions may extend our understanding of how to enhance the predictive
validity of selection and recruitment activities.


REFERENCES
Barber AE, Hollenbeck JR, Tower SL, Phillips JM. (1994). The effects of interview focus on recruitment effectiveness: A field experiment. Journal of Applied Psychology, 79, 886–896.
Campion MA, Palmer DK, Campion JE. (1997). A review of structure in the selection interview. PERSONNEL PSYCHOLOGY, 50, 655–702.
Campion MA, Purcell ED, Brown BK. (1988). Structured interviewing: Raising the psychometric properties of the employment interview. PERSONNEL PSYCHOLOGY, 41, 25–42.
Chapman DS, Rowe PM. (2002). The influence of videoconference technology and interview structure on the recruiting function of the employment interview: A field experiment. International Journal of Selection and Assessment, 10, 185–197.
Chapman DS, Uggerslev KL, Webster J. (2003). Applicant reactions to face-to-face and technology-mediated interviews: A field investigation. Journal of Applied Psychology, 88, 944–953.
Conway JM, Jako RA, Goodman GF. (1995). A meta-analysis of interrater and internal consistency reliability of selection interviews. Journal of Applied Psychology, 80, 565–579.
Day DV, Sulsky LM. (1995). Effects of frame-of-reference training and information configuration on memory organization and rating accuracy. Journal of Applied Psychology, 80(1), 158–167.
Dipboye RL. (1992). Selection interviews: Process perspectives. Cincinnati, OH: South-Western.
Dougherty TW, Ebert RJ, Callender JC. (1986). Policy capturing in the employment interview. Journal of Applied Psychology, 71, 9–15.
Gatewood R, Lahiff J, Deter R, Hargrove L. (1989). Effects of training on behaviors of the selection interview. Journal of Business Communication, 26, 17–31.
Gilliland SW. (1993). The perceived fairness of selection systems: An organizational justice perspective. Academy of Management Review, 18, 694–734.
Gilliland SW. (1995). Fairness from the applicant's perspective: Reactions to employee selection procedures. International Journal of Selection and Assessment, 3(1), 11–19.
Gilliland SW, Steiner DD. (1999). Applicant reactions. In Eder RW, Harris MM (Eds.), The employment interview handbook (pp. 69–82). Thousand Oaks, CA: Sage.
Gollub-Williamson LR, Campion JE, Malos SB, Roehling MV, Campion MA. (1997). The employment interview on trial: Linking interview structure with litigation outcomes. Journal of Applied Psychology, 82, 900–912.
Hakel M. (1989). The state of employment interview theory and research. In Eder RW, Ferris GR (Eds.), The employment interview: Theory, research, and practice (pp. 285–293). Newbury Park, CA: Sage.
Harris MM. (1999). What is being measured? In Eder RW, Harris MM (Eds.), The employment interview handbook (pp. 143–158). Thousand Oaks, CA: Sage.
Huffcutt AI, Arthur W Jr. (1994). Hunter and Hunter (1984) revisited: Interview validity for entry-level jobs. Journal of Applied Psychology, 79, 184–190.
Huffcutt AI, Conway JM, Roth PL, Stone NJ. (2001). Identification and meta-analytic assessment of psychological constructs measured in employment interviews. Journal of Applied Psychology, 86(5), 897–913.
Hysong SJ, Dipboye RL. (1998, April). The recruiting outcomes of interview structure and post-interview opportunity. Poster session presented at the 13th Annual Conference of the Society for Industrial and Organizational Psychology, Dallas, TX.
Hysong SJ, Dipboye RL. (1999, May). Individual differences in applicants' reactions to employment interview elements. In Dipboye RL (Chair), Symposium conducted at the 14th Annual Conference of the Society for Industrial and Organizational Psychology, Atlanta, GA.
Kline RB. (1998). Principles and practice of structural equation modeling. New York: Guilford.
Kohn LS, Dipboye RL. (1998). The effects of interview structure on recruiting outcomes. Journal of Applied Social Psychology, 28, 821–843.
Latham GP, Finnegan BJ. (1993). Perceived practicality of unstructured, patterned, and situational interviews. In Schuler H, Farr JL, Smith M (Eds.), Personnel selection and assessment: Individual and organizational perspectives (pp. 41–45). Hillsdale, NJ: Erlbaum.
McDaniel MA, Whetzel DL, Schmidt FL, Maurer SD. (1994). The validity of employment interviews: A comprehensive review and meta-analysis. Journal of Applied Psychology, 79, 599–616.
Palmer DK, Campion MA, Green PC. (1999). Interviewing training for both applicant and interviewer. In Eder RW, Harris MM (Eds.), The employment interview handbook (pp. 337–352). Thousand Oaks, CA: Sage.
Podsakoff PM, MacKenzie SB, Podsakoff NP. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88(5), 879–903.
Posthuma RA, Morgeson FP, Campion MA. (2002). Beyond employment interview validity: A comprehensive narrative review of recent research and trends over time. PERSONNEL PSYCHOLOGY, 55(1), 1–81.
Powell GN, Goulet LR. (1996). Recruiters' and applicants' reactions to campus interviews and employment decisions. Academy of Management Journal, 39, 1619–1640.
Ryan AM, Ployhart RE. (2000). Applicants' perceptions of selection procedures and decisions: A critical review and agenda for the future. Journal of Management, 26, 565–606.
Rynes SL. (1989). The employment interview as a recruitment device. In Eder RW, Ferris GR (Eds.), The employment interview: Theory, research, and practice (pp. 127–142). Newbury Park, CA: Sage.
Schmitt N. (1999). The current and future status of research on the employment interview. In Eder RW, Harris MM (Eds.), The employment interview handbook (pp. 355–368). Thousand Oaks, CA: Sage.
Smither JW, Reilly RR, Millsap RE, Pearlman K, Stoffey RW. (1993). Applicant reactions to selection procedures. PERSONNEL PSYCHOLOGY, 46, 49–76.
Truxillo DM, Bauer TN, Campion MA, Paronto ME. (2002). Selection fairness information and applicant reactions: A longitudinal field study. Journal of Applied Psychology, 87, 1020–1031.
Turban DB, Dougherty TW. (1992). Influences of campus recruiting on applicant attraction to firms. Academy of Management Journal, 35(4), 739–765.
Wanous JP, Reichers AE, Hudy MJ. (1997). Overall job satisfaction: How good are single-item measures? Journal of Applied Psychology, 82(2), 247–252.
Wiesner WH, Cronshaw SF. (1988). A meta-analytic investigation of the impact of interview format and degree of structure on the validity of the employment interview. Journal of Occupational Psychology, 61, 275–290.
