Chapman et al. (2005), Personnel Psychology
A review by Campion, Palmer, and Campion (1997) identified 15 elements of interview structure and made predictions regarding how
applicants and interviewers might react to these elements. In this two-sample field survey of 812 interviewees and 592 interviewers from over
500 organizations, interview structure was best described by 4 dimensions: (a) Questioning Consistency, (b) Evaluation Standardization, (c)
Question Sophistication, and (d) Rapport Building. Interviewers with
formal training and those with a selection rather than recruiting focus
employed higher levels of interview structure. In addition, reactions to
increased structure were mixed. Both higher structure (Question Sophistication) and lower structure (Rapport Building) were positively related
to interviewer reactions. Less than 34% of interviewers had any formal
interview training. However, interviewers were confident that they could
identify the best candidates regardless of the amount of interview structure employed. Applicants reacted negatively to the increased perceived
difficulty of structured interviews, but perceptions of procedural justice
were not affected by interview structure.
PERSONNEL PSYCHOLOGY
the location of the interview (e.g., campus interview), the purpose of the
interview (recruiting or selection focus), the medium used to conduct the
interview (telephone, videoconference), the number of interviewers (e.g.,
panel interviews), and by the amount of structure in the interview (structured, semi-structured, or unstructured). It is this last concept, interview
structure, that is the focus of this article.
This research responds to calls for an empirical examination of the
nomological network of interview structure (Campion et al., 1997; Harris,
1999; McDaniel, Whetzel, Schmidt, & Maurer, 1994) and to calls for investigating how applicants react to various components of interview structure
(Campion et al., 1997; Gilliland & Steiner, 1999). Although most recent reviews of the interview structure literature contend that interview structure
is best measured as a continuous and multifaceted construct (see Campion
et al., 1997), no standard measure of interview structure as a continuous
and multifaceted construct exists to describe the level of structure employed in interviews. Accordingly, our primary goal was to identify the
factor structure of interview structure practices and to develop a measure
of interview structure that can be used in selection research. In doing so,
we also examined the antecedents of interview structure by investigating the relationships among formal interviewer training, interviewers' use
of interview structure, focus of the interview (recruitment or selection),
and perceived efficacy of interview functions. Finally, we investigated
how interviewers and applicants react to specific elements of interview
structure as suggested by Campion et al. (1997) and Gilliland and Steiner
(1999).
This research was conducted in two stages spanning 3 years. With
Sample A, we explored the factor structure of interview structure practices and examined applicant reactions to those factors. Although
some studies have investigated the dimensionality of interview structure
with students (e.g., Hysong & Dipboye, 1999) and through an analysis
of legal cases (e.g., Gollub-Williamson, Campion, Malos, Roehling &
Campion, 1997), this is the first investigation that examines how real
interviewers adopt structure when engaging in actual job interviews.
Additional data gathered from Sample B expands on the nomological
network of interview structure discovered in Sample A by addressing
questions about interviewer training and interviewer reactions to structured
interviews.
What Is Interview Structure?
applicants, which can translate into "the degree of discretion that an interviewer is allowed in conducting the interview" (p. 186). Most researchers
agree that increasing interview structure can improve the psychometric
properties of interviews (Campion, Pursell & Brown, 1988) as well as enhance their predictive validity (McDaniel, Whetzel, Schmidt, & Maurer,
1994; Wiesner & Cronshaw, 1988). However, it is unclear what is meant
by interview structure (e.g., Hakel, 1989). Until recently, interview researchers have tended to classify interviews dichotomously, as being either structured or unstructured. Kohn and Dipboye (1998) went further to
suggest that interviews could also be categorized as semi-structured if they
employed some structure elements but not all. Huffcutt and Arthur (1994)
made an excellent argument for considering interview structure as both
a multifaceted and continuous construct. Although their structure classification system was a vast improvement over the structured/unstructured
dichotomy, it continued to be a single variable consisting of four levels
of structure. What is evident from Campion et al.'s seminal article, and
those on which it was based, is that one could easily replace the term
"structured interview" with "good interview" or "valid interview." That is,
Campion et al.'s description of structure components consists of a list of
interview practices that have been associated with higher predictive validities. Although very helpful for practitioners who wish to improve the
predictive validity of their interviews, Campion et al.'s description does
not offer interview researchers a measure to assess the level of interview
structure.
A review of the literature suggests several approaches to developing
a nomological network for the structured interview; however, there is little empirical evidence on which to base a solid prediction. Based on an
extensive review of the interview literature, Campion et al. (1997) proposed that their 15 areas of interview structure could be categorized into
two factors (interview content and evaluating applicants). Hysong and
Dipboye (1998) proposed a very plausible three-factor model of structure
consisting of (a) job relatedness of questions, (b) question standardization,
and (c) applicant voice, but were unable to test whether these factors were
independent. Huffcutt and Arthur (1994) contributed an interesting and
detailed coding procedure based on varying levels of two dimensions: the
standardization of (a) interview questions and (b) response scoring. Ultimately, they collapsed this framework into a single dimension of interview
structure consisting of four possible levels. Huffcutt, Conway, Roth, and
Stone (2001) also collapsed their four-level structure framework into two
to investigate constructs assessed in employment interviews. Gilliland and
Steiner (1999) suggested that there might be eight interview factors relevant to applicant reactions. Again, none of these frameworks have been
empirically tested. Hysong and Dipboye (1999) conducted one of the few
The second goal of this investigation was to understand what conditions lead interviewers to employ various interview structure elements.
We examined several possible antecedents of interview structure that we
detail next.
Interviewer training. Most interview practitioners and researchers
agree that formal interviewer training is essential to successful recruiting
and selection practices. Researchers argue that formal training can be
used to improve a variety of interviewer tasks including establishing valid
criteria for job analysis, evaluating candidates more effectively (e.g., Day
& Sulsky, 1995; Dougherty, Ebert, & Callender, 1986), improving rapport
(Gatewood, Lahiff, Deter, & Hargrove, 1989), and improving the recruiting
function of the interview (Chapman & Rowe, 2002; Rynes, 1989).
Despite the widespread endorsement of interviewer training, the extent
to which interviewers are receiving training and whether this training is
effective remains an area that has been neglected in the literature (Palmer
et al., 1999). Indeed, the existing research has been mixed regarding the efficacy of training in improving interviewing rater effectiveness (see Palmer et al., 1999; Posthuma, Morgeson, & Campion, 2002, for detailed reviews).
Furthermore, little attention has been paid to what elements of interviewer
training might lead to increased adherence to interview structure (Palmer
et al., 1999; Schmitt, 1999). Training has sometimes been conceptualized
as a component of interview structure (e.g., Campion et al., 1997). However, as our approach was to examine the factor structure of interviewer
practices, we classified interviewer training as an antecedent to structure
rather than an activity carried out as a part of the interview. Most interview researchers agree, however, that interviewer training ought to lead to
greater use of interview structure (e.g., Campion et al., 1997) and that formal training is thought to be the most popular way to inform interviewers
about the advantages of adopting a more controlled structured interview
(e.g., Dipboye, 1992). As such, we hypothesize that:
Hypothesis 1: Formally trained interviewers are more likely to structure
their interviews than untrained interviewers.
Interview focus. Rynes (1989) highlighted the duality of the employment interview as both a selection device and a recruiting tool. Classifying this dual purpose of employment interviews as interview focus, Rynes
also noted that there might be differences in the extent to which interviewers focus on the selection function or the recruiting function. Several
researchers have argued that these roles may conflict in that a greater emphasis on selection could reduce the attractiveness of the organization and
a greater emphasis on recruiting could reduce the validity of selection
decisions (Chapman & Rowe, 2002; Harris, 1999; Hysong & Dipboye,
1998). Barber, Hollenbeck, Tower, and Phillips (1994) manipulated interview focus so that applicants received either a recruitment interview or
one that combined recruitment and selection elements. They found that
student applicants for a part-time research assistant position reacted more
favorably to a combined recruitment/selection interview than those who
received only the recruitment portion of their interview.
Despite the dual function of the employment interview, researchers
have largely concentrated on how to improve the selection side, and as a
result, most suggestions for structured interviews are targeted at improving selection (Campion et al., 1997; Posthuma et al., 2002). It follows that
interviewers who are primarily interested in identifying suitable candidates (selection focus) would benefit most from employing higher levels
of interview structure. Accordingly, it is likely that interview focus will
predict interviewers' use of the structured interview, and we hypothesize
that:
Hypothesis 2: Interviewers with a selection focus will employ more highly
structured interviews than those with a recruitment focus.
Interest in how applicants react to selection procedures and perceptions of procedural justice in particular has been growing rapidly in I-O
psychology (see Gilliland, 1993; Gilliland & Steiner, 1999). Negative applicant reactions have been predicted to cause premature withdrawal from
selection procedures, negative public relations for organizations, reduced
attractiveness of organizations, and rejection of job offers (e.g., Chapman
et al., 2003; Gilliland, 1993; Smither, Reilly, Millsap, Pearlman, & Stoffey,
1993; Truxillo, Bauer, Campion, & Paronto, 2002). We suggest that the
dual roles of selection and recruitment (Rynes, 1989) make the employment interview particularly susceptible to negative applicant reactions.
Furthermore, the intense interpersonal interaction that occurs in employment interviews and the increased opportunities to make mistakes that
might offend applicants makes applicant reactions a vital consideration
for interviewers.
It has been proposed that applicants might view elements of interview
structure as being more fair (Campion et al., 1997; Gilliland & Steiner,
1999), and there are compelling reasons why this might be so. The consistency of questioning and rating individuals should reassure applicants
that they are being treated equally and fairly (Gilliland & Steiner, 1999).
One might expect that being treated equally would lead to more favorable
impressions of the organization (Gilliland, 1993). However, with the exception of one study in which interview structure did not predict attraction
to the firm or positive recruiter perceptions (Turban & Doherty, 1992), the
empirical evidence collected on applicant reactions to interview structure
suggests that applicants tend to react negatively to highly structured interviews (Chapman & Rowe, 2002; Hysong & Dipboye, 1998; Latham
& Finnegan, 1993). One explanation for these negative reactions is that
applicants are motivated to present themselves favorably to interviewers
and that highly structured interviews reduce their ability to manage their
impressions (Chapman & Rowe, 2002; Posthuma et al., 2002).
Most studies finding negative applicant reactions have measured applicant perceptions of procedural justice only. It is possible that factors other
than procedural justice reactions might be of more importance for organizational attractiveness (e.g., Gilliland & Steiner, 1999; Ryan & Ployhart,
2000). Furthermore, as noted earlier, others have suggested that applicants
might react to individual elements of structured interviews rather than to
overall structure (Campion et al., 1997; Gilliland & Steiner, 1999).
Method
The data for Sample A of this study were part of a larger study involving
2300 employers conducting interviews for 4-month cooperative education
work terms at a large North American university campus over a 2-year
period. Data were ultimately collected from two samples (A and B) as
described below.
Participants
Two samples were used to conduct the analyses. Results for Sample A
were drawn from interviews conducted by 1,500 employers with approximately 4,000 applicants over a 3-week period. After removing interviews
conducted by telephone and videoconference, our final sample consisted
of data from 812 applicants (mean age = 20.57 years, SD = 1.88; 55.4%
men) who engaged in face-to-face interviews conducted by 428 interviewers from 338 organizations representing a wide variety of industries (28%
response rate for interviewers). Unfortunately, exact participation rates are
unavailable for the applicants. However, feedback from research assistants
collecting the data indicated a high response rate (approximately 80%). In
exchange for their participation, interviewers were promised a synopsis of
the results when the study was completed. Men conducted 58.7% of the
interviews, women conducted 19.6% of the interviews, and 21.3% of the
interviews were conducted by both men and women.
Participants from Sample B were recruited approximately 1 year later
in the same manner as Sample A participants. Furthermore, Sample B
respondents were instructed not to complete the survey if they had participated in the earlier study. Sample B included 164 interviewers from
a population of 1,000 organizations approached (16.4% response rate)
during their campus recruiting activities. The interviewers represented
Questionnaires were completed by both interviewers and their applicants in Sample A immediately following the interview and by interviewers only in Sample B. The specific measures given to each are detailed
below.
Interviewer
TABLE 1
Sample A: Item Loadings on Three Factors of Interview Structure

Items (loading on Evaluation Standardization, Question Sophistication, and Question Consistency):
I use anchored rating scales to evaluate the candidate's response to each question
Decisions about the applicant are made by combining scores statistically, rather than making a global impression of their attractiveness
Each answer is rated against an ideal response
I rate the applicant on several dimensions independently (e.g., communication skills, thinking skills)
I use behavioral questions designed to get applicants to relate specific accomplishments to the requirements of the job
I use hypothetical or situational questions
I follow up answers with probing questions
I prompt candidates and allow them to elaborate on their answers
I ask the same questions to every candidate
The same interviewer asks the questions to each of the applicants
Questions are linked to a job description

[Item means, SDs, and factor loadings were garbled in extraction and are omitted.]
TABLE 2
Sample B: Estimated Item Loadings from CFA on Four Factors of Interview Structure

Factors: Question Consistency, Evaluation Standardization, Question Sophistication, and Rapport Building.
[Item means, SDs, and loadings were garbled in extraction and are omitted.]
TABLE 3
Training Content for Interviewers with Formal Interview Training

                                                           % included
Content area                                            Sample A   Sample B
Job requirements for the position(s) being filled         90.8       86.7
Background and purpose of the interview                   88.7       95.6
Legal issues                                              82.7       82.2
How to write interview questions                          80.6       86.7
How to evaluate answers                                   79.3       82.2
Rapport building                                          79.1       84.4
How to make decisions from interview data                 77.9       84.4
How to select questions/probes from a question bank       75.5       66.7
Practice role playing                                     73.2       86.7
How to use questions that were prepared previously        72.1       75.0
Note taking                                               68.1       71.1
Job analysis                                              65.0       67.4
Recruiting the candidate (promoting the organization)     63.6       62.2
How to avoid rating errors                                50.4       55.6
Realistic job previews                                    47.3       57.8
Videotaping role playing with feedback                    29.9       40.0
(c) Interview focus: Due to questionnaire space limitations for both samples, a single item ranging from 1 to 7 was used to capture the extent to
which interviewers described the purpose of their interview as being
either predominantly recruiting/attraction (1) or predominantly screening/selection (7).
(d) Interviewer reactions: A single item was used in Samples A and B
to measure interviewers' affective reactions to conducting the interview (rated 1 to 7, with 7 indicating a more positive reaction). Two
items measured the interviewers' perceived efficacy of their interviews for (a) the ability to identify the best candidates for the position
and (b) the ability to attract applicants to work for their organization
(rated from 1 to 7; Sample B only).
Applicant
Gilliland, 1993, 1995). A sample item was, "The interviewer's questions were relevant to the job," and was rated on a 7-point scale where
1 = strongly disagree to 7 = strongly agree (α = .80).
(b) Perceived difficulty: Participants were asked to indicate how difficult the interview was on eight items (e.g., "I had difficulty coming
up with good answers to the interviewer's questions") using a 7-point scale ranging from 1 = strongly disagree to 7 = strongly agree
(α = .85).
(c) Post-interview intentions: Participants were asked to indicate their
likelihood of accepting an offer based on an item from Powell and
Goulet (1996) ranging from 0% to 100%.
Procedure
Our hypotheses and research questions were tested across two samples.
In order to examine the factor structure of employment interview practices
(R1), a principal axis factor analysis with oblique rotation was conducted
on the 16 items related to interview structure from Sample A. Sample B
data were used to conduct a confirmatory factor analysis (CFA) based on
the exploratory findings from Sample A. Hypothesis 1 was tested by conducting multivariate GLM analyses examining mean differences in structure levels between interviewers who reported formal training and those
who did not for both Samples A and B. Hypothesis 2 was tested in both
samples by examining zero-order correlations between interview focus
(recruiting vs. selection) and the interview structure factors. Hypothesis 3
The results of the principal axis factor analysis for R1 initially extracted
five factors with eigenvalues greater than 1 from the 16 items administered
in Sample A. However, a combination of examining the scree test, the
percentage of variability explained by the factor, and assessing whether
the factors could be interpreted meaningfully suggested that three factors
would best describe the amount of structure employed by an interviewer.
Accordingly, a principal axis factor analysis with oblique rotation was
conducted forcing three factors (see Table 1). This yielded a satisfactory
and interpretable set of factors labeled: Evaluation Standardization, Question Sophistication, and Question Consistency. This solution explained
43.45% of the variability in the interview structure items. The first factor,
Evaluation Standardization, contained four items measuring the extent to
which the interviewer uses standardized and numeric scoring procedures
(α = .71). The second factor, Question Sophistication, consisted of three
items that measure the extent to which the interviewer used question
content that corresponded to formats known to be more valid and recommended by researchers, such as job-related behavioral questions and
situational questions (α = .67). The third factor, Question Consistency,
consisted of three items related to asking the same questions, in the same
order, to every candidate (α = .45). Five items that did not load on any
of the three factors were discarded. The scales were computed using a
unit weighting of the items loading on these factors and were used for the
remaining analyses based on Sample A.
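The unit-weighting and internal-consistency computations described above can be sketched briefly. The following is a minimal illustration, not the study's own code: the item responses are simulated 7-point data, and the sample size and loadings are invented for demonstration only.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def unit_weighted_scale(items):
    """Unit weighting: the scale score is the unweighted mean of the
    items that loaded on the factor."""
    return np.asarray(items, dtype=float).mean(axis=1)

# Simulated 7-point responses to four correlated items (hypothetical data,
# loosely mimicking a four-item scale such as Evaluation Standardization).
rng = np.random.default_rng(1)
trait = rng.normal(4.0, 1.0, size=500)                        # latent factor
items = np.clip(trait[:, None] + rng.normal(0.0, 1.0, (500, 4)), 1, 7)

alpha = cronbach_alpha(items)
scores = unit_weighted_scale(items)
```

Unit weighting is a deliberately simple choice: it avoids capitalizing on sample-specific factor loadings, which is why the authors could reuse the same scales across both samples.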
In order to improve the psychometric properties of these scales and to
expand the interviewer behaviors to include those designed to establish
rapport with the applicant, 20 additional items were added for examination in Sample B. Items were developed for each of the three factors from
Sample A, and a set of items was developed to explore a potential fourth
factor examining interview content that was not job related, such as asking
questions about an applicant's hobbies or making light conversation. This
practice is frequently espoused in the popular interviewing literature in order to relax applicants and establish rapport. It also reflects Rynes' (1989)
dual nature of the interview by including recruiting-oriented behaviors.
Training. Only 34% of the interviewers in Sample A and 28% of interviewers in Sample B reported having any formal interview training. For
those who completed some formal training, Table 3 reports the percentage of interviewers who received training on specific training elements
for each sample. The average training time for these interviewers was
4.31 hours.
Hypothesis 1 predicted that formally trained interviewers would be
more likely to structure their interviews. In order to test this hypothesis in Sample A, a 2 × 3 mixed model GLM test with type III sums of
squares was conducted to assess whether formal training is related to the
overall use of structure or if formal training predicts the use of certain
elements of interview structure (2 levels of interview training were tested
as a between-subjects effect, and the three interview structure elements
were tested as within-subject effects for Sample A). The same analysis was
replicated in Sample B except that there were four within-subject structure
elements examined in a 2 × 4 mixed model GLM analysis. As predicted in
Hypothesis 1, interviewers who received formal training were more likely
to use higher levels of structure in their interviews. Interestingly, this was
not uniformly true across individual structure factors. For Sample A, the
multivariate test of the relationship between interview structure and formal training showed a significant main effect of interviewer training using
Pillai's Trace statistic, F(2, 354) = 249.36, p < .001, η² = .59. However,
there was a significant interaction between structure factor and interviewer
training, which suggested that training did not have a uniform effect on the
structure levels for all structure factors (Pillai's Trace, F(2, 354) = 17.38,
p < .001, η² = .09). Figure 1 shows a significant increase in structure levels
for both Evaluation Standardization and Question Sophistication but no
significant increase in Questioning Consistency due to formal interviewer
training. The results of the multivariate analysis for Sample B were similar
to those in Sample A. Training had a main effect on the level of structure
TABLE 4
Descriptive Statistics and Correlations Among Variables for Sample A

Variables: 1. Interviewer gender (coded 1 = men, 2 = women; excludes interviews conducted by multiple interviewers); 2. Length of interview (minutes); 3. Interview focus (rated 1-7, where 1 = total recruiting focus and 7 = total selection focus); 4. Formal training (coded 0 = no, 1 = yes); 5. Evaluation Standardization; 6. Question Sophistication; 7. Question Consistency; 8. Interviewer affective reaction; 9. Procedural justice; 10. Perceived difficulty; 11. Acceptance intentions.
[Means, SDs, Ns, and correlations were garbled in extraction and are omitted.]
TABLE 5
Descriptive Statistics and Correlations Among Variables for Sample B

Variables: 1. Number of employees; 2. Length of interview (minutes); 3. Formal training; 4. Interview focus; 5. Question Consistency; 6. Evaluation Standardization; 7. Question Sophistication; 8. Rapport Building; 9. Interviewer affective reaction; 10. Selection efficacy; 11. Recruiting efficacy.
[Means, SDs, Ns, and correlations were garbled in extraction and are omitted.]
Figure 1. Degree of structure (0-7) on each structure factor (Evaluation Standardization, Question Sophistication, Question Consistency) for trained versus untrained interviewers, Sample A. [Plotted values not recoverable from extraction.]
used (Pillai's Trace, F(3, 156) = 82.60, p < .001, η² = .61). Furthermore,
as in Sample A, there was a significant interaction between structure factor and training on level of structure used (Pillai's Trace, F(3, 156) = 3.16,
p < .05, η² = .06). There was a significant increase in the use of Evaluation
Standardization, Question Sophistication, and Questioning Consistency;
however, the amount of Rapport Building was not affected by formal training (one would expect lower levels of Rapport Building to be associated
with training as this is consistent with higher structure). Thus, Hypothesis 1
was largely supported across both samples.
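The multivariate statistic behind these tests can be illustrated with a short, hand-rolled sketch. This is not the study's code or data: the group sizes, means, and scores below are invented, and the F approximation and significance test reported in the article are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated scale scores (illustrative only): three structure factors for
# 120 untrained and 80 trained interviewers, with trained interviewers
# given higher means on two of the three factors.
untrained = rng.normal(loc=[3.0, 4.0, 5.0], scale=1.0, size=(120, 3))
trained = rng.normal(loc=[4.0, 5.0, 5.1], scale=1.0, size=(80, 3))

def pillais_trace(groups):
    """One-way MANOVA Pillai's trace, V = tr(H (H + E)^-1), where H is the
    between-groups SSCP matrix and E is the within-groups SSCP matrix."""
    grand_mean = np.vstack(groups).mean(axis=0)
    p = grand_mean.size
    H = np.zeros((p, p))
    E = np.zeros((p, p))
    for g in groups:
        d = (g.mean(axis=0) - grand_mean)[:, None]
        H += len(g) * (d @ d.T)        # between-groups deviation
        c = g - g.mean(axis=0)
        E += c.T @ c                   # within-groups deviation
    return float(np.trace(H @ np.linalg.inv(H + E)))

V = pillais_trace([untrained, trained])
```

With two groups, V is bounded between 0 (identical group centroids) and 1, and larger values indicate a bigger multivariate separation between trained and untrained interviewers.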
Interview focus. An examination of the zero-order correlations between interview focus and interview structure elements shows that interview focus was associated with the level of structure used in the employment interview. Small but significant relationships were found between
interview focus and Evaluation Standardization (r = .17, p < .05) and
Question Consistency (r = .14, p < .05) such that a greater selection focus
was associated with higher levels of these structure factors but Question
Sophistication was unaffected by interview focus. In Sample B, a similar pattern emerged with the size of the correlations being similar for
Evaluation Standardization (r = .17, p < .05) and Question Consistency
(r = .10, ns) whereas neither Question Sophistication nor Rapport Building appeared to be affected by interview focus. Thus, some support was
found for Hypothesis 2.
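The zero-order correlations used to test Hypothesis 2 are straightforward to compute; the sketch below uses hypothetical data (a simulated 1-7 focus rating and structure score with a weak positive relation, not the study's data) and adds the t statistic conventionally used to test r against zero.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 300

# Hypothetical data: a 1-7 interview-focus rating (1 = recruiting,
# 7 = selection) and a structure-scale score with a weak positive relation.
focus = rng.integers(1, 8, size=n).astype(float)
structure = 0.1 * focus + rng.normal(0.0, 1.0, size=n)

r = float(np.corrcoef(focus, structure)[0, 1])   # zero-order correlation
t = r * np.sqrt((n - 2) / (1 - r ** 2))          # t statistic, df = n - 2
```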
Interviewer Reactions to Structure Elements
TABLE 6
Results of Predictions of Structure Element Effects for Candidate and Interviewer Reactions from Campion et al. (1997) with Items Developed for This Study (Sample A Only)

Structure elements span Content (1. Job analysis; 2. Same questions; 3. Limit prompting; 4. Better questions; 5. Longer interview; 6. Control ancillary info) and Evaluation (8. Rate each answer on multiple scales; 9. Anchored rating scales). Columns report predicted and observed effects for interviewer reactions and for candidate reactions (PJ = procedural justice; PI = post-interview intentions). p < .05.
[Predicted signs and observed correlations were garbled in extraction and are omitted.]
those of Gilliland and Steiner and cover the largest number of interview
structure elements.
As indicated in Table 6, four of the 13 predicted relationships between
structure elements and interviewer reactions were supported. For example,
significant relationships were found between the use of better questions
and positive interviewer reactions (r = .12, p < .05, r = .11, p < .05,
for situational and behavioral questions, respectively). Furthermore, using
anchored rating scales was related to more positive interviewer reactions
(r = .15, p < .05). However, contrary to Campion et al.'s prediction,
discouraging applicant questions until the end of the interview was found
to be positively related to more positive interviewer reactions (r = .11,
p < .05).
Applicant Reactions
terviewers (r = .20, p < .05) as being less fair. Interestingly, post hoc
analyses revealed that applicants were less likely to accept an offer from
an organization when the interviewer had a selection focus than when a
recruiting focus was reported (r = .19, p < .05) or if the interviewer was
trained versus untrained (r = .10, p < .05).
Discussion
researchers' recommendations regarding Question Consistency, Evaluation Standardization, and Question Sophistication, simultaneously ignore
recommendations to reduce Rapport Building, which potentially contaminates an otherwise standardized procedure. This may be due to the interviewers' awareness of the dual nature of the employment interview
and having strong beliefs that building rapport is necessary to both attract
the best applicants and to encourage them to be more open in providing
information that is more predictive of future performance.
In addition to reacting positively to the interview process, it is important
that interviewers perceive their structured processes as being efficacious
for selection and recruitment. Surprisingly, none of the four interview
structure factors predicted perceived selection efficacy. Perceived recruiting efficacy was predicted by the extent to which the interviewer engaged in
Rapport Building and higher levels of Question Sophistication. Although
building rapport has long been considered useful for recruiting purposes,
it is less clear why Question Sophistication might be associated with perceived recruiting efficacy. One possibility is that interviewers believe that
they will be seen as more professional by applicants if they employ more
sophistication in their questioning techniques. In other words, interviewers might believe their questioning techniques are a signal regarding the
professionalism of the organization as a whole (Rynes, Bretz, & Gerhart,
1991). Unfortunately, applicants did not share this belief: we found that Question Sophistication was unrelated to applicants' intentions to accept a job offer.
Contrary to previous empirical findings, interviewers did not perceive
that structure practices associated with selection validity detracted from
the recruiting efficacy of the interview, nor did they perceive that Rapport
Building detracted from selection efficacy. Overall, interviewers were very
confident that they could identify the best candidates regardless of the
amount of structure they employed.
Future research should investigate whether any of the individual structure factors contribute more to the predictive validity of interviews than
others and whether current approaches to examining the validity of popular interview structuring techniques (e.g., situational and behavioral interviews) confound the types of questions used with other important
structural factors. For example, it is possible that Evaluation Standardization may be responsible for differences in validity observed between
behavioral/situational and unstructured interviews. It would also be useful
to establish whether certain combinations of these structure factors enhance the predictive validity of employment interviews. For instance, it is
possible that Question Consistency and Evaluation Standardization may
be sufficient to generate higher interview validities. It is equally plausible to suggest that Question Sophistication alone is sufficient to enhance
Limitations
The fact that this study was conducted in a field setting is both a strength and a weakness of the design. The use of surveys limits conclusions about causal relationships among the variables, although a partial
replication of the results in a second sample is encouraging. Furthermore,
although this study involved gathering information from multiple sources
(e.g., interviewers and applicants), some of the analyses were conducted
with data obtained from a common source. Accordingly, the potential
exists for common method biases to influence some of the outcomes
(Podsakoff, MacKenzie, & Podsakoff, 2003). However, the study was
designed in such a way as to minimize the potential influence of common
method variance. For example, relationships examined from a common
source (i.e., the interviewer) were between self-reported behaviors and
attitudes rather than attitude-attitude relationships that are more likely to
create problems. In addition, the behavior questions regarding how the
interview was structured were simple, unambiguous, and highly specific,
making them less susceptible to common method bias (Podsakoff et al.,
2003). Furthermore, given that the results are inconsistent with what we
would expect to find if socially desirable responding were affecting our
findings (e.g., trained interviewers also reported engaging in higher levels of Rapport Building), we ruled out socially desirable responding as a
threat to our conclusions.
In order to gain access to the sample of real interviewers, time and
space limitations for the survey necessitated the use of single items to
operationalize some of the constructs. Wanous, Reichers, and Hudy (1997) suggest that single-item measures are adequate if the constructs being measured are sufficiently narrow or unambiguous to respondents. We believe that our single-item measures are sufficiently straightforward to capture factual assessments of past events and recent experiences. Nevertheless, future research should employ multi-item measures.
REFERENCES
Barber AE, Hollenbeck JR, Tower SL, Phillips JM. (1994). The effects of interview focus
on recruitment effectiveness: A field experiment. Journal of Applied Psychology, 79,
886–896.
Campion MA, Palmer DK, Campion JE. (1997). A review of structure in the selection
interview. PERSONNEL PSYCHOLOGY, 50, 655–702.
Campion MA, Pursell ED, Brown BK. (1988). Structured interviewing: Raising the psychometric properties of the employment interview. PERSONNEL PSYCHOLOGY, 41, 25–42.
Chapman DS, Rowe PM. (2002). The influence of videoconference technology and interview structure on the recruiting function of the employment interview: A field
experiment. International Journal of Selection and Assessment, 10, 185–197.
Chapman DS, Uggerslev KL, Webster J. (2003). Applicant reactions to face-to-face and
technology-mediated interviews: A field investigation. Journal of Applied Psychology, 88, 944–953.
Conway JM, Jako RA, Goodman DF. (1995). A meta-analysis of interrater and internal consistency reliability of selection interviews. Journal of Applied Psychology, 80, 565–579.
Day DV, Sulsky LM. (1995). Effects of frame-of-reference training and information configuration on memory organization and rating accuracy. Journal of Applied Psychology,
80(1), 158–167.
Dipboye RL. (1992). Selection interviews: Process perspectives. Cincinnati, OH: South-Western.
Dougherty TW, Ebert RJ, Callender JC. (1986). Policy capturing in the employment interview. Journal of Applied Psychology, 71, 9–15.
Gatewood R, Lahiff J, Deter R, Hargrove L. (1989). Effects of training on behaviors of the
selection interview. Journal of Business Communication, 26, 17–31.
Gilliland SW. (1993). The perceived fairness of selection systems: An organizational justice
perspective. Academy of Management Review, 18, 694–734.
Gilliland SW. (1995). Fairness from the applicant's perspective: Reactions to employee selection procedures. International Journal of Selection and Assessment, 3(1), 11–19.
Gilliland SW, Steiner DD. (1999). Applicant reactions. In Eder RW, Harris MM (Eds.),
The employment interview handbook (pp. 69–82). Thousand Oaks, CA: Sage.
Gollub-Williamson LR, Campion JE, Malos SB, Roehling MV, Campion MA. (1997). The
employment interview on trial: Linking interview structure with litigation outcomes.
Journal of Applied Psychology, 82, 900–912.
Hakel M. (1989). The state of employment interview theory and research. In Eder
RW, Ferris GR (Eds.), The employment interview: Theory, research, and practice
(pp. 285–293). Newbury Park, CA: Sage.
Harris MM. (1999). What is being measured? In Eder RW, Harris MM (Eds.), The employment interview handbook (pp. 143–158). Thousand Oaks, CA: Sage.
Huffcutt AI, Arthur W Jr. (1994). Hunter and Hunter (1984) revisited: Interview validity
for entry-level jobs. Journal of Applied Psychology, 79, 184–190.
Huffcutt AI, Conway JM, Roth PL, Stone NJ. (2001). Identification and meta-analytic
assessment of psychological constructs measured in employment interviews. Journal
of Applied Psychology, 86(5), 897–913.
Hysong SJ, Dipboye RL. (1998, April). The recruiting outcomes of interview structure and
post-interview opportunity. Poster session presented at the 13th Annual Conference
of the Society for Industrial and Organizational Psychology, Dallas, TX.
Hysong SJ, Dipboye RL. (1999, May). Individual differences in applicants' reactions to
employment interview elements. In Dipboye RL (Chair), Symposium conducted
at the 14th Annual Conference of the Society for Industrial and Organizational
Psychology, Atlanta, GA.
Kline RB. (1998). Principles and practice of structural equation modeling. New York:
Guilford.
Kohn LS, Dipboye RL. (1998). The effects of interview structure on recruiting outcomes.
Journal of Applied Social Psychology, 28, 821–843.
Latham GP, Finnegan BJ. (1993). Perceived practicality of unstructured, patterned, and
situational interviews. In Schuler H, Farr JL, Smith M (Eds.), Personnel selection
and assessment: Individual and organizational perspectives (pp. 41–45). Hillsdale,
NJ: Erlbaum.
McDaniel MA, Whetzel DL, Schmidt FL, Maurer SD. (1994). The validity of employment interviews: A comprehensive review and meta-analysis. Journal of Applied
Psychology, 79, 599–616.
Palmer DK, Campion MA, Green PC. (1999). Interviewing training for both applicant and
interviewer. In Eder RW, Harris MM (Eds.), The employment interview handbook
(pp. 337–352). Thousand Oaks, CA: Sage.
Podsakoff PM, MacKenzie SB, Podsakoff NP. (2003). Common method biases in behavioral
research: A critical review of the literature and recommended remedies. Journal of
Applied Psychology, 88(5), 879–903.
Posthuma RA, Morgeson FP, Campion MA. (2002). Beyond employment interview validity: A comprehensive narrative review of recent research and trends over time.
PERSONNEL PSYCHOLOGY, 55(1), 1–81.
Powell GN, Goulet LR. (1996). Recruiters' and applicants' reactions to campus interviews and employment decisions. Academy of Management Journal, 39, 1619–1640.
Ryan AM, Ployhart RE. (2000). Applicants' perceptions of selection procedures and decisions: A critical review and agenda for the future. Journal of Management, 26, 565–606.
Rynes SL. (1989). The employment interview as a recruitment device. In Eder R, Ferris G (Eds.), The employment interview: Theory, research, and practice (pp. 127–142). Newbury Park, CA: Sage.
Rynes SL, Bretz RD Jr, Gerhart B. (1991). The importance of recruitment in job choice: A different way of looking. PERSONNEL PSYCHOLOGY, 44, 487–521.
Schmitt N. (1999). The current and future status of research on the employment interview.
In Eder RW, Harris MM (Eds.), The employment interview handbook (pp. 355–368).
Thousand Oaks, CA: Sage.
Smither JW, Reilly RR, Millsap RE, Pearlman K, Stoffey RW. (1993). Applicant reactions
to selection procedures. PERSONNEL PSYCHOLOGY, 46, 49–76.
Truxillo DM, Bauer TN, Campion MA, Paronto ME. (2002). Selection fairness information and applicant reactions: A longitudinal field study. Journal of Applied Psychology, 87, 1020–1031.
Turban DB, Dougherty TW. (1992). Influences of campus recruiting on applicant attraction to firms. Academy of Management Journal, 35(4), 739–765.
Wanous JP, Reichers AE, Hudy MJ. (1997). Overall job satisfaction: How good are single-item measures? Journal of Applied Psychology, 82(2), 247–252.
Wiesner WH, Cronshaw SF. (1988). A meta-analytic investigation of the impact of interview
format and degree of structure on the validity of the employment interview. Journal
of Occupational Psychology, 61, 275–290.