
Exploring the Concept of Acceptability as a Criterion for Evaluating Performance Measures

JERRY W. HEDGE
Personnel Decisions Research Institutes, Inc.
MARK S. TEACHOUT
USAA

Although the value of an appraisal system traditionally has been judged according to reliability
and validity indexes, more recent discussion suggests that user acceptance may be critical to a
system’s successful implementation and continued use. This study examined the notion of using
acceptability as a criterion for evaluating performance appraisal techniques. Data analyses indi-
cated that motivation to rate, trust in others, and situational constraints were predictive of accept-
ability for both supervisors and job incumbents. In addition, differences in acceptability were
found across rating sources and rating forms, with supervisors’ perceptions more favorable than
job incumbents’ and a global rating form significantly less acceptable to all raters. Results are
discussed in terms of usefulness of an acceptability criterion in applied research.

AUTHORS’ NOTE: The authors thank Christopher Earley and three anonymous reviewers for
their helpful suggestions. All statements expressed in this article are those of the authors and do
not represent official opinions of the U.S. Air Force. Correspondence concerning this article
should be addressed to Jerry W. Hedge, Personnel Decisions Research Institutes, Inc., 43 Main
Street, S.E., Suite 405, Minneapolis, MN 55414.

Group & Organization Management, Vol. 25 No. 1, March 2000 22-44
© 2000 Sage Publications, Inc.

Criterion measurement is essential for almost any personnel research appli-
cation. Still, choosing an adequate criterion or set of criteria remains a rela-
tively casual process. As a number of researchers (e.g., Kavanagh, 1982)
have noted, there are multiple criteria that can and should be used for judging
the quality of measurement instruments, procedures, and systems.
Over the years, researchers have discussed a variety of ways to assess the
quality of criterion measures. Bellows (1954) suggested that criterion mea-
sures should be reliable, realistic, representative, related to other criteria,
acceptable to the job analyst, acceptable to management, consistent from one
situation to another, and predictable. Blum and Naylor (1968) proposed that

The authors thank Christopher Earley and three anonymous reviewers for their helpful sugges-
tions. All statements expressed in this article are those of the authors and do not represent offi-
cial opinions of the U.S. Air Force. Correspondence concerning this article should be addressed
to Jerry W. Hedge, Personnel Decisions Research Institutes, Inc., 43 Main Street, S.E., Suite
405, Minneapolis, MN 55414.
Group & Organization Management, Vol. 25 No. 1, March 2000 22-44
© 2000 Sage Publications, Inc.
22

Downloaded from gom.sagepub.com at UCSF LIBRARY & CKM on December 11, 2014
Hedge, Teachout / ACCEPTABILITY AND PERFORMANCE MEASURES 23

criterion measures also should be inexpensive, understandable, measurable,


relevant, uncontaminated and bias-free, and discriminating. Bernardin and
Beatty (1984) compiled a large list of variables and clustered these variables
into three primary categories of criteria: quantitative (e.g., reliability, valid-
ity, discriminability), utilization (e.g., feedback, merit pay, adverse impact),
and qualitative (e.g., amount of documentation, user acceptability, mainte-
nance costs).
Although researchers periodically have called attention to the availability
of multiple criteria to judge performance measures, operational definitions of
these variables are not frequently supplied. If some form of empirical evalua-
tion of these measures is included, it typically is dominated by reliability and
validity considerations. Several researchers have attempted to apply a more
systematic process to the use of multiple criteria to judge performance
measures.
McAfee and Green (1977) described a set of 16 criteria they applied to aid
the selection of a performance appraisal method for use in a large Midwestern
hospital. Ten different appraisal methods were rated on criteria such as use-
fulness for counseling and employee development, expense to develop, reli-
ability, and freedom from psychometric errors. McAfee and Green then rated
each method on each criterion and used a weighted sum to identify the recom-
mended method for the job and organization under consideration.
Drawing on the work of McAfee and Green, Kavanagh (1980, 1982) pro-
posed a list of 19 criteria against which to judge the value of performance
appraisal systems. Each of these criteria was operationally defined and
included, for example, psychometric quality, developmental costs, user
acceptance, periodic review and feedback, meets Equal Employment Oppor-
tunity Commission (EEOC) guidelines, and susceptibility to inflation of rat-
ings. User acceptance or acceptability was seen as critical to the appraisal
system’s effect on employee motivation and management control. In the
remainder of this article, we examine more fully the concept of acceptability
and demonstrate how it may be used as a criterion to evaluate the worth of a
performance appraisal system or technique.

ATTITUDES ABOUT PERFORMANCE APPRAISAL

Recent reviews of performance appraisal have emphasized a broader
focus on criteria. Murphy and Cleveland (1991) noted that the dominance of
psychometric and accuracy criteria has diverted researchers’ attention away
from three classes of criteria—reaction, practicality, decision process—that
might be critical in determining the success of an appraisal system. They
argued that reaction criteria (such as perceptions of fairness and accuracy of
appraisal systems) may place a ceiling on the possible effectiveness of the
system because acceptance by raters and ratees may be necessary but not suf-
ficient for the appraisal system to be effective. A second set of criteria,
grouped under the heading of practicality, includes time commitment, cost,
political acceptability, and ease of installation. Third, decision process crite-
ria should be considered both in terms of the degree to which appraisal deci-
sions are accepted by members of the organization and the degree to which
decisions are facilitated by the performance appraisal system.
Similarly, Dickinson (1992) reviewed the literature on attitudes about per-
formance appraisal and suggested that if negative attitudes about appraisal
prevail among organizational members, performance appraisal will be unac-
ceptable to many members. Dickinson concluded that, if such attitudes exist,
use of the appraisal system may hinder rather than help the achievement of
desired organizational outcomes.

PERFORMANCE APPRAISAL ACCEPTABILITY

In spite of the common sense logic that acceptance of a personnel proce-
dure is crucial to its effective use, it was not until 1967 that Lawler noted that
attitudes toward performance ratings could affect their validity. Lawler
(1967) proposed a model of the factors that affect the construct validity of rat-
ings. Central to the model was the belief that attitudes toward the equity and
acceptability of a rating system are a function of organizational and individ-
ual characteristics as well as of the rating format.
Landy, Barnes, and Murphy (1978) were among the first researchers to
examine attitudinal factors empirically as they relate to job performance
measurement. They identified four significant predictors of perceived fair-
ness and accuracy of performance appraisals: (a) frequency of appraisal, (b)
plans developed with the supervisor for eliminating weaknesses, (c) super-
visor’s knowledge of the ratee’s job duties, and (d) supervisor’s knowledge of
the ratee’s level of performance. In a follow-up study with the same popula-
tion, the level of the performance rating did not affect these relations (Landy,
Barnes-Farrell, & Cleveland, 1980).
Dipboye and de Pontbriand (1981) distinguished between employees’
opinions of their performance appraisal system and employees’ opinions of
the appraisal itself. They found that four factors related to the two dependent
variables: (a) favorability of the appraisal, (b) opportunity for employees to
state their own perspective in the appraisal interview, (c) job relevance of
appraisal factors, and (d) discussion of plans and objectives with the
supervisor.
A series of studies by Kavanagh, Hedge, and colleagues extended the
examination of users’ perceptions of performance appraisal systems (Hedge,
1983; Kavanagh & Hedge, 1983; Kavanagh, Hedge, Ree, Earles, & DeBiasi,
1985). Although users’ attitudes toward the appraisal form and the broader
concept of the appraisal system did not seem to differ, several attitudes
toward the appraisal system were significant predictors of appraisal accept-
ability across studies. These included attitudes about whether (a) the
appraisal system facilitates fair and accurate appraisals, (b) the appraisal sys-
tem allows raters to distinguish between workers’ proficiencies, (c) the
appraisal system provides clear performance standards, (d) ratees receive sat-
isfactory feedback, and (e) ratees receive a satisfactory performance
evaluation.
Whereas the studies by Kavanagh and Hedge (1983) and Kavanagh et al.
(1985) focused exclusively on factors related to acceptability, Hedge (1983)
used an acceptability measure (i.e., “How acceptable do you find your cur-
rent performance appraisal system?”) in conjunction with more traditional
performance appraisal criterion measures, to evaluate the implementation of
a new performance appraisal system at a large hospital. He discovered that
ratees found the new appraisal system more acceptable than the system previ-
ously in use.
Although Kavanagh (1980, 1982), Bernardin and Beatty (1984), and oth-
ers have discussed acceptability as an important criterion, it has rarely been
used. The objective of the present research was to focus on one criterion,
acceptability, that has been relatively underinvestigated, develop an opera-
tional definition for that variable, and follow a systematic procedure for col-
lecting and evaluating data related to user acceptability.
This study had four interrelated purposes. Because of the scarcity of
research focused on performance appraisal reaction criteria, our first purpose
was to extend the development of an acceptability criterion beyond what has
been done to date. Early research focused on single-item measures of per-
ceived fairness and accuracy (e.g., Landy et al., 1978; Landy et al., 1980).
Other researchers (Dobbins, Cardy, & Platz-Vieno, 1990; Giles &
Mossholder, 1990) chose satisfaction with appraisal as their single-item reac-
tion measure, arguing that a satisfaction criterion assesses both fairness cog-
nitions and affect, thus offering a broader indicator of appraisal reactions.
Dipboye and de Pontbriand (1981) used a three-item measure of satisfaction
and understanding of the appraisal process and a four-item measure of
whether the appraisal system facilitates employee evaluation. Kavanagh et
al. (1985) focused on both the system and the form, broadening the concept to
emphasize overall acceptability, but once again used single-item measures.
Following the advice of Kavanagh (1982), Bernardin and Beatty (1984),
and Murphy and Cleveland (1991), we chose to focus on the construct of
acceptability as the measure that would best capture reactions to appraisal.
Building on previous research findings on reactions to appraisal, items were
written to reflect broadly the concept of acceptability, including: (a) facili-
tates identification of performance differences between employees, (b)
facilitates capturing the true picture of job performance, (c) overall accept-
ability of the form, (d) ease of form use and understanding, (e) facilitates con-
fidence in ratings, and (f) facilitates fair evaluation of performers.
A second purpose of the study was to examine the relation between per-
ceptions of appraisal acceptability and variables both internal and external to
the appraisal process. Because of the validation research purpose of our
study, variables prevalent in previous appraisal reaction studies (e.g., setting
performance objectives, devising action plans, counseling employees, dis-
cussing salary issues) were irrelevant and thus not included. However, we
identified two appraisal process factors from the literature that seemed espe-
cially relevant to our study: rater trust and rater motivation.
Rater motivation largely has been ignored by performance appraisal
researchers. Although DeCotiis and Petit (1978) incorporated rater motiva-
tion as an important part of their model of the appraisal process, they cited
only Taft’s (1955) theory of interpersonal judgments as support for the inclu-
sion of this variable in their model. More recently, Bernardin and his col-
leagues (Bernardin & Cardy, 1982; Bernardin, Orban, & Carlyle, 1981)
focused on rater motivation, but only in terms of how it might be affected by
the level of trust a rater has in the appraisal system. Bernardin, Orban, and
Carlyle (1981) developed a measure they labeled “trust in the appraisal
process” and found that both trust and motivation were linked to the percep-
tions of fairness and accuracy of appraisal. Consequently, for the current
study, items were written to tap facets of these factors, including (a) general
rater motivation to rate, (b) motivation to rate accurately, (c) rater trust in the
appraisal process, (d) trust in other raters, and (e) trust in researchers.
We also identified three job variables previous literature had suggested
could affect raters’ ratings: situational constraints, supervisory support, and
job satisfaction. Peters and O’Connor (1980) hypothesized that constraints
on performance may lead to lower effectiveness levels, and some support has
been found for such a notion (e.g., O’Connor, Peters, Rudolf, & Pooyan,
1982; Olson & Borman, 1989). Extending this logic for raters, it was
hypothesized that situational constraints may affect raters’ ability to observe
performance and rate accurately, thereby affecting perception of appraisal
acceptability; consequently, items were written to tap constraints related to
equipment availability and job manual availability and clarity.
Two other sets of items were also developed to examine the influence of
other external variables on performance appraisal acceptability. The two fac-
tors that appeared to have some relevance and that had been used in past
research studies were supervisory support and job satisfaction. Dickinson
(1992) noted that perhaps the single most important determinant of employee
attitudes about performance appraisal is the supervisor. He suggested that
when the supervisor is seen as trustworthy and supportive, then attitudes
about performance appraisal are favorable. In addition, Olson and Borman
(1989) found relations between supervisory support and job performance,
thus suggesting that supervisory support might relate to attitudes about per-
formance appraisal.
Similarly, although only modest relations have been found between job
satisfaction and job performance (e.g., Iaffaldano & Muchinsky, 1985; Pod-
sakoff & Williams, 1986), we felt it would be useful to explore whether atti-
tudes about the job would be related to attitudes about appraisal system
acceptability. For example, Giles and Mossholder (1990) found modest rela-
tions between job satisfaction and satisfaction with the appraisal system.
A third purpose of this study was to examine the link between rating
source and performance appraisal acceptability. Although previous studies
of performance appraisal attitudes have focused almost exclusively on ratee
reactions, the focus of our study was on the reactions of the raters to the forms
they had been asked to use. In addition, because both job incumbents and
supervisors were asked to provide performance ratings and responses to
other attitudinal questions, we were able to examine whether the variables
associated with appraisal acceptability differed by source, and whether levels
of appraisal acceptability differed by source.
A fourth purpose of our research was to examine whether rater acceptabil-
ity differed across rating forms. As noted previously, McAfee and Green
(1977) evaluated 10 appraisal methods against a list of 16 criteria as a way to
select an appraisal method for nurses in a hospital. Relying on their own
knowledge of the different methods, they rated the effectiveness of the meth-
ods on the 16 criteria and arrived at a final decision about which type of
appraisal method to use. Kavanagh (1982) also recommended that such a
procedure be used, but to our knowledge, no published study has gathered
attitudinal information (from the individuals who would be asked to use the
forms) as one component of the measurement method selection process.
Thus, a final aim of our research study was to collect data on rater attitudes
about the acceptability of four separate performance appraisal forms that had
been developed for possible use in a validation project.

TABLE 1
Principal Components Analysis of
the Job Incumbent Questionnaires

Factor Label and Item Loading

Motivation to rate accurately
1. Are you satisfied you made the most accurate ratings you could? .76
2. Did you make an “extra effort” to carefully pay attention to all of the
instructions and examples to make accurate ratings? .74
3. Did you care how accurate your ratings were? .72
4. Did you feel it was important to make accurate ratings? .71
5. Based on your experience, how important is it to you to make any
performance ratings you do as accurate as you can? .68
Job satisfaction
1. I feel that my job is interesting. .90
2. I am satisfied with my job. .85
3. I get a sense of accomplishment from my job. .83
4. I feel that my job is important to the overall mission of the Air Force. .67
5. I am able to use my skills and talents in my job. .65
Acceptability of rating form
1. If someone were to look at the ratings on the form, would they be able
to get a true picture of the performance level of the person being rated? .83
2. Would you be able to tell the difference between good and poor performers
by looking at the ratings they were given? .81
3. Overall, are the rating forms acceptable to you as a way to determine
job proficiency? .81
4. Do the rating forms evaluate your job proficiency fairly? .72
5. Are the rating forms easy to use and understandable as a means of
determining job proficiency? .67
6. Overall, did you feel confident about the ratings you made? .45
Situational constraints
1. The technical manuals and other written materials that I use in my job
are available when I need them. .86
2. The tools and equipment that I use in my job are available when I need them. .80
3. The technical manuals and other written materials that I use in my job
are clear and understandable. .46
Trust in other raters
1. Do you feel other persons involved really tried to follow the rules in
completing their ratings? –.75
2. Do you feel that other persons involved really cared about making accurate
ratings? –.72
3. Do you think other persons involved gave higher ratings than persons
deserved? .52
Supervisor support
1. I feel that my supervisor gives me the support I need to do my job. .96
2. I feel that my supervisor is concerned about my well-being. .95
Trust in appraisal process
1. Do you feel other persons were comfortable giving low ratings to
themselves or others? .73
2. Do you think your supervisor will have access to any information
about you collected from the rating forms? .64
General motivation to rate
1. How motivated were you to complete the rating forms? .73
2. Did you find the performance rating process interesting? .71
Trust in researchers
1. Do you believe that the ratings collected will be used for research
purposes only? .84
2. Do you believe that the true purpose of the ratings was the one
explained to you during the rater orientation? .71

In summary, there is little empirical research concerning perceptions of
appraisal system acceptability. The purpose of this research was to identify
and clarify the construct of appraisal acceptability and examine factors
related to this acceptability construct. We also wanted to examine whether
attitudes about acceptability differ across rating sources and rating forms.

METHOD

STUDY DESIGN

The U.S. Air Force initiated a large-scale project to develop a variety of
job performance measures for use in validation of selection and classification
methodologies and the evaluation of training programs. Data collection
across the seven jobs included in the study occurred over a 3-year period. As
part of this program, performance rating scales were developed at four differ-
ent levels of specificity. To gauge users’ preferences for the four different
forms, data were gathered concerning these raters’ perceptions of the forms.
We defined users as any supervisors or job incumbents who might conduct a
performance evaluation, whether it be a self-evaluation, peer evaluation, or
supervisor evaluation. Rater perspective was categorized according to level
in the organization: either job incumbent (i.e., self or peer) or supervisor. In
addition, a variety of rater background, trust, and situational constraint items
were included on two questionnaires. Development of items for the question-
naires was based on a review of the literature that helped to clarify the vari-
ables potentially descriptive of appraisal acceptability or relevant to that
construct.

PARTICIPANTS

Personnel from seven skilled trade and clerical jobs in the Air Force par-
ticipated in this research. These jobs included air traffic control (n = 332), air-
crew life support (n = 315), radio operator (n = 249), ground equipment
mechanic (n = 447), personnel (n = 358), precision measurement equipment
(n = 260), and avionic communications (n = 181). A total of 1,608 job incum-
bents in their first 4 years of military service and 534 supervisors completed
two attitude questionnaires and evaluated individual job performance using
four different rating scales.

QUESTIONNAIRES

Two attitude questionnaires were developed to gather information from
job incumbents and supervisors. Response options for all job attitude ques-
tionnaire items were based on a 5-point, adjectivally anchored graphic rating
scale, ranging from 1 (strongly disagree) to 5 (strongly agree); the rating atti-
tude questionnaire used a 5-point scale, ranging from 1 (not at all) to 5 (to a
very great extent).
The job attitude questionnaire included 10 items hypothesized to measure
three different constructs. Three items measured situational constraints (e.g.,
“The technical manuals and other written materials that I use in my job are
available when I need them”). Five items measured job satisfaction (e.g., “I
get a sense of accomplishment from my job”). Two items measured supervi-
sory support (e.g., “I feel that my supervisor gives me the support I need to do
my job”).
The rating attitude questionnaire contained 20 items hypothesized to
measure three constructs. Seven items measured the rater’s motivation to rate
(e.g., “How motivated were you to complete the rating forms?”). Following
earlier work by Bernardin, Orban, and Carlyle (1981), seven items measured
rater trust in the appraisal process, in other raters, and in the researchers
conducting the research (e.g., “Will your supervisor have access to any
information about you collected from the rating forms?”). Six items measured
acceptability of the appraisal process. The six acceptability items were designed
to tap perceptions of appraisal form (a) fairness, (b) clarity of instructions, (c) con-
tributions to rating accuracy, (d) contributions to discrimination between
ratees, (e) overall acceptability to raters, and (f) confidence raters had in their
ratings. Tables 1 and 2 contain a complete description of the items written to
tap these constructs.

RATING FORMS

Four rating forms were developed to measure job performance. All rating
forms were constructed using a 5-point, adjectivally anchored rating scale (1 =
never meets acceptable level of proficiency; 2 = occasionally meets acceptable
level of proficiency; 3 = meets acceptable level of proficiency; 4 = frequently
exceeds acceptable level of proficiency; 5 = always exceeds acceptable level
of proficiency).
Task scales were developed to reflect a job-specific content domain. Tasks
identified as critical by a job analysis were included in each form. A set of
dimensional scales encompassed the domain of technical performance for
each job. These scales were developed by holding workshops with job
experts and resulted in behavior-based rating scales covering broad, within-
job competencies. Air Force–wide scales consisted of eight performance
dimensions descriptive of success across all Air Force jobs, including techni-
cal ability, initiative/effort, adherence to regulations, leadership, military
appearance, self-development, and self-control. Finally, a two-item set of
global scales consisted of an overall technical and an overall interpersonal
proficiency rating scale. Once again, a series of workshops with job experts
from each job was used to generate these behavior-based rating scales.

RATER TRAINING

A 1-hour frame-of-reference and rater error training program was devel-
oped to provide all raters with a general understanding of the four different
types of rating scales, and a frame of reference for how the scales were to be
used to rate job performance. Two exercises facilitated understanding and use
of the rating forms. Raters were asked to read scenarios written to describe
the job performance of hypothetical job incumbents (i.e., Airman Martin and
Airman Jones) and then rate performance based on the behaviors described in
the scenarios that they read. Each scenario was constructed using the specific
behaviors that anchored the behavior-based rating scales. For example, Air-
man Jones’s performance on certain tasks was described in a series of short
paragraphs containing “level of performance” phrases extracted from the rat-
ing scale anchors. Thus, the scores a rater should assign to these hypothesized
job incumbents were prearranged, thereby allowing the trainer to provide
straightforward feedback to all raters and demonstrate how these rating
scales should be used to rate performance accurately. Consequently, once
each rating exercise was completed, raters received target score feedback
about their rating accuracy from the trainer.

TABLE 2
Principal Components Analysis of the Supervisor Questionnaires

Factor Label and Item Loading

Motivation to rate accurately
1. Based on your experience, how important is it to you to make any
performance ratings you do as accurate as you can? .88
2. Did you care how accurate your ratings were? .86
3. Did you make an “extra effort” to carefully pay attention to all of the
instructions and examples to make accurate ratings? .79
4. Based on your experience, how important is it to you to make any
performance ratings you do as accurate as you can? .68
5. Are you satisfied you made the most accurate ratings you could? .57
Job satisfaction
1. I get a sense of accomplishment from my job. .82
2. I am able to use my skills and talents in my job. .77
3. I am satisfied with my job. .69
Acceptability of rating form
1. Would you be able to tell the difference between good and poor
performers by looking at the ratings they were given? .86
2. If someone were to look at the ratings on the form, would they be able
to get a true picture of the performance level of the person being rated? .82
3. Overall, are the rating forms acceptable to you as a way to determine
job proficiency? .81
4. Do the rating forms evaluate your job proficiency fairly? .75
5. Are the rating forms easy to use and understandable as a means of
determining job proficiency? .73
6. Overall, did you feel confident about the ratings you made? .57
Supervisor support
1. I feel that my supervisor gives me the support I need to do my job. .92
2. I feel that my supervisor is concerned about my well-being. .89
Trust in other raters
1. Do you feel other persons involved really tried to follow the rules in
completing their ratings? .87
2. Do you feel that other persons involved really cared about making
accurate ratings? .84
Trust in researchers
1. Do you believe that the ratings collected will be used for research
purposes only? –.80
2. Do you believe that the true purpose of the ratings was the one
explained to you during the rater orientation? –.68
Situational constraints
1. The technical manuals and other written materials that I use in my job
are available when I need them. .79
2. The tools and equipment that I use in my job are available when
I need them. .64
3. The technical manuals and other written materials that I use in my job
are clear and understandable. .50
Trust in appraisal process
1. Do you think your supervisor will have access to any information about
you collected from the rating forms? .79
2. Do you feel other persons were comfortable giving low ratings to themselves
or others? .64
3. Do you think other persons involved gave higher ratings than persons
deserved? .50
Esprit de corps
1. I feel that my job is important to the overall mission of the Air Force. –.68
2. I get a sense of pride from being in the Air Force. –.64

In addition, a group exercise highlighted common rating errors, and sug-
gestions were made by the trainer on how to improve rating accuracy. This
written exercise depicted a conversation among four supervisors about their
employees and was read aloud by designated raters playing the roles of the
four supervisors. Once raters had completed reading the exercise, the trainer
pointed out certain errors in judgment scripted into the conversation, pro-
vided suggestions for how to avoid these problems, and reinforced the need to
make accurate ratings.

PROCEDURES

Group data collection sessions were held at 49 Air Force bases both within
the continental United States and abroad. Trained test site managers traveled
to each Air Force base and worked with individuals at each base to identify
available job incumbents, their supervisors, and coworkers. Supervisors and
job incumbents were then asked to attend a 4-hour training and testing ses-
sion at a designated time during the week. Because these sessions were con-
ducted across a 5- to 10-day period at each site, test site managers were able to
ensure virtually 100% participation from individuals selected to participate
in the study.

When participants arrived at a testing session, they were introduced to the
purpose of data collection, participation requirements were explained, and
they were familiarized with each measure used in the project. This orienta-
tion session was followed by approximately 1 hour of frame-of-reference and
rater-error training.
Immediately following this group session, rating booklets were distrib-
uted, and raters were asked to complete all measures. The rating booklets
were organized such that each rater completed the job attitude questionnaire
followed by the global, dimensional, task, and Air Force–wide rating scales,
and then the rating attitude questionnaire. Supervisors were asked to rate up
to three job incumbents under their supervision. Job incumbents were asked
to rate themselves, up to three of their coworkers, or both. Thus, raters
(regardless of whether they were supervisors or job incumbents) were asked
to provide ratings on at least three job incumbents. Also, regardless of the
number of ratees evaluated, each rater completed the job attitude question-
naire and the rating attitude questionnaire only once.

RESULTS

Twenty-seven of the 1,608 job incumbent rating booklets and 12 of the
534 supervisor rating booklets were eliminated from the analyses due to
incomplete data. The remaining questionnaire data were analyzed to address
the four primary research initiatives noted previously, namely: (a) extend the
development of an acceptability criterion beyond what has been done to date,
(b) examine the relationship between perceptions of appraisal acceptability
and variables both internal and external to the appraisal process, (c) examine
the link between rating source and performance appraisal acceptability, and
(d) examine whether rater acceptability differed across rating forms.

FACTOR ANALYSES

The 10 job attitude questionnaire items, 6 acceptability items, 7 motiva-
tion to rate items, and 7 appraisal trust items were factor-analyzed separately
to clarify and refine the hypothesized constructs. Each analysis used the prin-
cipal components extraction technique, with orthogonal rotation of factors
having eigenvalues of 1.0 or greater to a Varimax solution. These factor
analyses were performed separately on supervisor and job incumbent data.
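
Although the article does not include analysis code, the extraction-and-rotation
recipe just described is straightforward to reproduce. The sketch below is a
minimal illustration in Python, assuming the factor_analyzer package and a
hypothetical DataFrame with one column per questionnaire item; it is not the
original analysis code. The same routine, called with rotation="oblimin",
matches the oblique higher order analysis described next.

```python
# Minimal sketch (not the original code): principal components extraction
# retaining components with eigenvalues >= 1.0, followed by rotation.
import pandas as pd
from factor_analyzer import FactorAnalyzer

def extract_and_rotate(items: pd.DataFrame, rotation: str = "varimax") -> pd.DataFrame:
    """Return rotated loadings for `items` (hypothetical item-by-column data)."""
    # First pass, unrotated, to count eigenvalues >= 1.0 (Kaiser criterion).
    probe = FactorAnalyzer(n_factors=items.shape[1], rotation=None, method="principal")
    probe.fit(items)
    eigenvalues, _ = probe.get_eigenvalues()
    n_factors = int((eigenvalues >= 1.0).sum())

    # Second pass: extract that many components and rotate them.
    fa = FactorAnalyzer(n_factors=n_factors, rotation=rotation, method="principal")
    fa.fit(items)
    return pd.DataFrame(fa.loadings_, index=items.columns)

# Run separately for each rating source, as in the study:
# incumbent_loadings = extract_and_rotate(incumbent_items)             # varimax
# higher_order_loadings = extract_and_rotate(factor_scores, "oblimin") # oblique
```
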
Following the separate questionnaire analyses, a higher order factor
analysis was performed to combine the four sets of factors into one general
set of appraisal-related dimensions. Because factors such as acceptability,
trust in the appraisal process, and motivation to rate were hypothesized to be
intercorrelated, the higher order analysis employed a principal components
model with the direct oblimin method of oblique rotation. Once again, data
from supervisors and job incumbents were analyzed separately.

TABLE 3
Descriptive Statistics, Internal Consistencies,
and Intercorrelations for Job Incumbent Variables

Variable M SD 1 2 3 4 5 6 7 8 9

1. Motivation to rate accurately 4.03 0.63 (.85)
2. Job satisfaction 3.73 0.77 .22 (.85)
3. Acceptability of rating form 3.29 0.68 .42 .12 (.84)
4. Situational constraints 3.80 0.62 .19 .26 .17 (.56)
5. Trust in other raters 3.16 0.57 .31 .12 .30 .14 (.63)
6. Supervisor support 3.75 0.95 .17 .36 .11 .28 .13 (.88)
7. Trust in the appraisal process 2.06 0.71 .00 .05 .10 –.04 .10 –.02 (.57)
8. General motivation to rate 2.88 0.80 .52 .23 .40 .13 .27 .13 .04 (.71)
9. Trust in researchers 3.62 0.87 .39 .15 .36 .20 .30 .12 –.06 .35 (.69)

NOTE: Internal consistency reliability estimates appear in parentheses.

Tables 1 and 2 present loadings of variables on factors for job incumbents
and supervisors, respectively. Variables are ordered and grouped by size of
loading to facilitate interpretation. Loadings under .45 (20% of variance)
were excluded. Nine interpretable factors were identified for job incumbents
and supervisors, although the factors were not identical across the two
sources. The interpretable nine-factor solution for the job incumbent data set
included the following factors: (a) motivation to rate accurately, (b) job satis-
faction, (c) acceptability, (d) situational constraints, (e) trust in other raters,
(f) supervisory support, (g) trust in the appraisal process, (h) trust in research-
ers, and (i) general motivation to rate. The nine-factor solution from the
supervisor data set produced eight factors (factors a-h) in common with the
job incumbent set. The two rating sources did differ in two notable respects.
First, job incumbents saw general motivation to rate and motivation to rate
accurately as two separate constructs, whereas supervisors viewed the items
as representing one larger construct that combined both of these variables.
Second, unlike job incumbents, supervisors distinguished between job satis-
faction and esprit de corps, whereas job incumbents saw these items as one
larger job satisfaction variable. High loadings across data sets and factors
suggest relatively well-defined constructs. Tables 3 and 4 provide additional
descriptive, internal consistency, and intercorrelational data for these job
incumbent and supervisor variables.

TABLE 4
Descriptive Statistics, Internal Consistencies,
and Intercorrelations for Supervisor Variables

Variable M SD 1 2 3 4 5 6 7 8 9

1. Motivation to rate accurately 4.34 0.58 (.84)
2. Job satisfaction 3.73 0.84 .09 (.84)
3. Acceptability of rating form 3.44 0.69 .41 .02 (.87)
4. Supervisor support 3.67 0.97 .08 .32 .02 (.86)
5. Trust in other raters 3.49 0.73 .36 .12 .27 .12 (.82)
6. Trust in researchers 3.69 0.90 .37 .19 .35 .22 .32 (.72)
7. Situational constraints 3.74 0.63 .02 .19 .07 .18 .12 .06 (.65)
8. Trust in the appraisal process 2.15 0.66 –.01 –.11 .17 –.16 –.01 –.01 .06 (.69)
9. Esprit de corps 4.11 0.61 .28 .48 .23 .25 .11 .20 .20 –.04 (.65)

NOTE: Internal consistency reliability estimates appear in parentheses.

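The internal consistency estimates reported in parentheses in Tables 3 and 4
can be computed with the standard coefficient alpha formula; the short sketch
below assumes (the article does not say) that these are Cronbach’s alpha values
and that the items for one scale sit in a hypothetical DataFrame.

```python
# Minimal sketch of Cronbach's alpha, assuming `scale_items` holds the
# responses for one scale (one column per item, one row per respondent).
import pandas as pd

def cronbach_alpha(scale_items: pd.DataFrame) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    k = scale_items.shape[1]
    item_variances = scale_items.var(axis=0, ddof=1).sum()
    total_variance = scale_items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)
```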

REGRESSION ANALYSES

In an effort to identify factors predictive of acceptability, multiple regres-
sion analyses were conducted separately for supervisors and job incumbents.
Based on the previously derived factor solutions, an overall acceptability
dependent measure was formed by combining the six items loading on the
acceptability factor. Independent measures were formed in a like manner,
resulting in eight supervisor and eight job incumbent independent variables.
In addition, to assess the contribution of Air Force job type to variance in
the dependent measure, jobs were dummy-coded as an independent variable.
Thus, membership in a particular job type was specified by creating a cate-
gorical variable for the seven job types. The job variable was then forced into
the regression equation first, followed by the remainder of the independent
variables entering in a forward inclusion manner. The results of the multiple
regression analysis are presented in Table 5.

TABLE 5
User Acceptability Regression Analysis
by Incumbent and Supervisor

Factor b Cumulative Multiple R Adjusted Cumulative R2

Job Incumbent
Job .018 .055 .002
Motivation to rate accurately .200 .429 .184
Trust in researchers .221 .502 .251
General motivation to rate .193 .534 .283
Trust in other raters .111 .548 .300
Trust in the appraisal process .106 .557 .308
Situational constraints .088 .564 .314
Supervisor
Job .039 .077 .004
Motivation to rate accurately .236 .387 .146
Trust in researchers .173 .438 .187
Trust in other raters .160 .467 .210
Trust in the appraisal process .137 .487 .228
Esprit de corps .112 .503 .242
Situational constraints .091 .510 .248

NOTE: All independent variables (with the exception of “job”) listed in this table contributed
significantly to user acceptability.

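The entry scheme described above (job dummy-coded and forced in first, the
remaining composites entering by forward inclusion) can be sketched as
follows. The column names, the acceptability composite as the dependent
variable, and the adjusted R-squared stopping rule are illustrative assumptions,
not the authors’ code; the article does not state its exact inclusion criterion.

```python
# Minimal sketch: force the dummy-coded job variable in first, then add the
# remaining independent variables by forward inclusion.
import pandas as pd
import statsmodels.api as sm

def forced_then_forward(df: pd.DataFrame, dv: str, candidates: list[str]):
    # Step 1: dummy-code the seven job types and force them into the equation.
    X = pd.get_dummies(df["job"], prefix="job", drop_first=True).astype(float)
    fitted = sm.OLS(df[dv], sm.add_constant(X)).fit()
    selected, remaining = [], list(candidates)

    # Step 2: at each step, add the candidate that most improves adjusted
    # R-squared; stop when no candidate improves the equation (assumed rule).
    while remaining:
        trials = {v: sm.OLS(df[dv], sm.add_constant(X.join(df[selected + [v]]))).fit()
                  for v in remaining}
        best = max(remaining, key=lambda v: trials[v].rsquared_adj)
        if trials[best].rsquared_adj <= fitted.rsquared_adj:
            break
        selected.append(best)
        remaining.remove(best)
        fitted = trials[best]
    return fitted, selected
```
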
For job incumbents, six factors were identified as predictors of acceptabil-
ity. These included motivation to rate accurately, trust in researchers, general
motivation to rate, trust in other raters, trust in the appraisal process, and
situational constraints. These six measures accounted for 31% of the variance
in acceptability. For supervisors, six factors were identified as predictors of
acceptability: motivation to rate accurately, trust in researchers, trust in other
raters, trust in the appraisal process, esprit de corps, and situational con-
straints, which accounted for 25% of the variance in the dependent measure.
In general, rater motivation, rater trust, and situational constraints on work
performance were significantly related to acceptability in both rater groups.
Supervisor support and job satisfaction variables did not account for appre-
ciable variance in acceptability, although supervisors did believe that feel-
ings of esprit de corps could influence attitudes about appraisal acceptability.

Finally, the Air Force job was not an important factor in the variance of the
dependent measure (accounting for less than 1% of job incumbents’ and
supervisors’ acceptability).

TABLE 6
Rating Source × Rating Form Analysis of Variance

Factor df MS F ω2

Between subjects
Rating source (S) 1 492.96 9.22* .01
Subjects within group 2,101 53.46 —
Within subjects
Rating form (F) 3 286.26 51.99* .02
S×F 3 9.98 1.81
Subjects within groups 6,303 5.51 —

*p < .01.


ANALYSIS OF VARIANCE

To investigate differences in acceptability across rating sources and
forms, a Rating Source × Rating Form analysis of variance (ANOVA) was
computed, using the six-item acceptability variable as the dependent mea-
sure. Table 6 displays the results of this analysis.
The Rating Source main effect was found to be significant, F(1, 2102) =
9.22, p < .01, ω2 = .01, as was the Rating Form main effect, F(3, 6303) =
51.99, p < .01, ω2 = .02. Although the ANOVA identified the presence of a
significant effect, additional analyses were required to clarify the source of
those differences. Consequently, the Scheffé test for all possible differences
among means was used to identify mean differences in acceptability due to
rating source and type of rating form. For the source effect, we compared the
mean value of the job incumbent acceptability variable to the mean value of
the supervisor acceptability variable and found that supervisors were more
accepting of the measurement system than were job incumbents.
Similarly, we compared the mean acceptability value for each rating form
against all other rating forms using the Scheffé procedure to test for signifi-
cant differences between means and found that the global rating form was
considered less acceptable by all raters. The means for the task, dimensional,
and Air Force–wide rating forms, however, did not differ significantly among
themselves.
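
For readers who want to reproduce this kind of mixed-design analysis, the
sketch below uses the pingouin and scikit-posthocs packages as modern
stand-ins (the original analysis predates both). Long-format data and the
column names are hypothetical: one acceptability score per rater per form,
with source varying between raters and form within raters.

```python
# Minimal sketch of the Rating Source x Rating Form ANOVA and the Scheffe
# follow-up test; data layout and column names are assumptions.
import pandas as pd
import pingouin as pg
import scikit_posthocs as sp

def acceptability_anova(long_df: pd.DataFrame) -> pd.DataFrame:
    # 'source' (incumbent vs. supervisor) is the between-subjects factor;
    # 'form' (task, dimensional, Air Force-wide, global) is within subjects.
    return pg.mixed_anova(data=long_df, dv="acceptability", between="source",
                          within="form", subject="rater_id")

def form_scheffe(long_df: pd.DataFrame) -> pd.DataFrame:
    # Scheffe test for all pairwise differences among the four form means.
    return sp.posthoc_scheffe(long_df, val_col="acceptability", group_col="form")
```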


DISCUSSION

This study examined the concept of rater acceptability as a criterion for
evaluating performance measures. We examined the construct of user accept-
ability, rater individual differences and contextual variables that may be
related to user acceptability, and differences in rater acceptance of four differ-
ent types of performance appraisal forms.
Factor analysis of the job attitude and rating attitude questionnaires identi-
fied a number of interpretable factors: rater acceptability, rater motivation,
job satisfaction, supervisory support, situational constraints, and rater trust.
Relatively comparable types of factors suggest that job incumbents and
supervisors structure their perceptions of these concepts similarly. Also, high
factor loadings across these factors indicate relatively stable, well-defined
constructs.
It is especially interesting to note the high factor loadings for all items
under the acceptability factor. To move beyond the more simplistic single-
item view of acceptability most prevalent in the literature, we generated items
that we believed conceptually represented a broad, multifaceted construct
involving perceptions of appraisal fairness, clarity of instruction, accuracy,
discriminability, and confidence. Consequently, it was promising to see these
six items collectively represent a strong acceptability factor; thus, our con-
ception of this acceptability factor appears to closely mirror the perceptions
of job incumbents and supervisors.
A number of years ago, Maier (1963) and Vroom and Yetton (1973), while
studying problem solving, suggested that acceptance or commitment on the
part of supervisors and subordinates is critical to effective implementation of
the decision. Although researching a different domain, these authors arrived
at similar conclusions about acceptance as did Lawler (1967) more than 30
years ago, when he emphasized the importance of attitudes toward appraisal
system equity and acceptability. More recently, Landy and his colleagues
(Landy et al., 1978; Landy et al., 1980) and Dipboye and de Pontbriand
(1981) operationalized Lawler’s notion, focusing on perceived fairness and
accuracy of the appraisal system and ratee attitudes about the usefulness of
the system and the process. We strongly encourage continued work in this
domain.
Regression analyses indicate that the same basic information influences
job incumbent and supervisor acceptability of the appraisal process: rater
motivation, rater trust, and situational constraints. Rater motivation and trust
are variables internal to the appraisal process. As Bernardin and his col-
leagues (Bernardin & Cardy, 1982; Bernardin, Orban, & Carlyle, 1981)
noted, individual rater motivation and trust in the appraisal process may be
strongly linked to perceived accuracy and fairness in appraisal. Our results
empirically support the link between these appraisal process variables and
acceptability and suggest that organizations should strive to create conditions
for motivation and trust in the appraisal process.
External work impediments also seem to affect perceptions of system
acceptability. Evidently, raters believe that problems with equipment and
technical manual availability, as well as technical manual clarity, may inter-
fere with ratee proficiency or at least raters’ judgments of that performance.
Previously, Peters and O’Connor (1980) suggested that situational con-
straints affect job performance. Our findings suggest that constraints also
may interfere with rater ability to judge job proficiency fairly, accurately, and
confidently.
The ANOVA and subsequent comparisons of means identified differences
in levels of acceptability, with the global ratings significantly less acceptable
to all raters. These results raise questions about the relative usefulness of such
broad, overall ratings and warrant further investigation. Evidently, raters dis-
like rating individuals on a few broad categories of performance. Perhaps
raters feel that their input is restricted when they are limited to only two rating
categories, or possibly they feel that a reduced set of categories limits their
ability to characterize accurately ratee performance.
Mean differences were also found between rating sources, with supervisor
perceptions of the appraisal system more favorable than incumbent percep-
tions. Why was the appraisal system more acceptable to supervisors than it
was to job incumbents? Perhaps supervisor familiarity with the rating
process might affect acceptability. Because of the nature of their jobs, super-
visors have much more experience rating performance than do job incum-
bents and, therefore, might be more likely to understand and accept the
process. Another possibility could be that because job incumbents have spent
their careers being the target of ratings, they are more skeptical of the process,
and their perceptions of rating acceptability are lower, in general.

PRACTICAL IMPLICATIONS

Research on the measurement of job performance has been prominent in
the industrial/organizational psychology literature for many years. Most of
this work has used validity, reliability, and rating error measures. As various
authors have noted, however (e.g., Bernardin & Beatty, 1984; Hedge & Bor-
man, 1995; Kavanagh, 1982), there are multiple criteria that can be used for
judging the quality of measurement instruments, procedures, and systems.
Acceptability is a relatively uninvestigated criterion. This research has
attempted to clarify the concept of acceptability and to identify factors
related to acceptability. We believe that a rater acceptability criterion can be
used to contribute valuable information about the worth of a particular mea-
surement instrument or an appraisal system and should be used in conjunc-
tion with other, more frequently used appraisal criteria.
Much debate has occurred over the past several decades, with advocates of
different types of appraisal systems making claims about the psychometric
superiority of their preferred system. We believe, however, that factors such
as motivation and trust may limit, to a large degree, overall appraisal system
effectiveness. It may seem overly simplistic to suggest that a system is only as
good as the people who use it. Still, systems work best when users are moti-
vated and trusted. As Bernardin and Beatty (1984) suggested, before invest-
ing a great deal of time and money in the latest tool or technique, maybe the
practitioner should first examine the organizational climate surrounding per-
formance appraisal.
Jacobs, Kafry, and Zedeck (1980), in an examination of the behaviorally
anchored rating scale (BARS) literature, noted their own disappointing expe-
riences with organizations abandoning recently developed appraisal sys-
tems. They suggested that many organizations revert to evaluation sys-
tems in use prior to intervention because of organization policy and the
excessive personnel time and energy requirements associated with BARS.
These frustrations, in a very applied way, speak to the issue of acceptabil-
ity and suggest the importance of including this variable as a criterion when
evaluating worth of an appraisal system. After all, if a psychometrically
sound system is developed but is unacceptable to its users, it might be used
improperly, or it may never be used.
Banks and Murphy (1985) noted that raters must not only be capable, but
they must also be willing to provide accurate ratings. Thus, motivation to rate
accurately (or lack thereof) can have a major impact on the usefulness of a
new system. In turn, one factor associated with rater motivation may be the
trust individual raters have in an appraisal process. Consequently, trust in the
appraisal process should be examined early on and, if found to be low, should
be one of the first problems confronted by the organization (see Bernardin &
Beatty, 1984). Providing eventual users with knowledge of, and involvement
in, the planned system prior to implementation may be a good first step to
building trust, motivation, and acceptance.
By way of broad, prescriptive advice, time and effort on the part of man-
agement and nonmanagement personnel are required to ensure that the
organization’s culture is conducive to implementation of new systems. Cer-
tainly, there is no substitute for making the necessary, and necessarily long-
term, investments of building trust and motivating people. Organizations
should strive to foster conditions for motivation and trust and, more relevant
to the current discussion on performance appraisal, user orientation and
training strategies should be helpful in this regard.
It is important to note that the empirical research presented here was con-
ducted within a single organizational environment, that of the military, and it
is conceivable that the results do not generalize to all settings. From our per-
spective, it would seem likely that the results may be even more dramatic in a
less regimented culture. Still, the framework presented in this study repre-
sents an initial integrative effort aimed not only at increasing understanding
of user acceptability but also at highlighting the potential importance of user
perceptions and reactions to organizational systems, tools, and techniques.
Our perspective is that the researcher and practitioner should view psy-
chometric considerations as necessary but not sufficient criteria to evaluate
the quality of performance appraisal. Consequently, although psychometrics
should not be dismissed in the development and evaluation of a performance
appraisal system, other criteria also must be considered. It is the basic prem-
ise of this article that, in terms of successful use of a performance appraisal
system, user acceptance of the system may be the critical criterion.

REFERENCES

Banks, C. G., & Murphy, K. R. (1985). Toward narrowing the research-practice gap in perfor-
mance appraisal. Personnel Psychology, 38, 335-345.
Bellows, R. M. (1954). Psychology of personnel in business and industry (2nd ed.). Englewood
Cliffs, NJ: Prentice Hall.
Bernardin, H. J., & Beatty, R. W. (1984). Performance appraisal: Assessing human behavior at
work. Boston: Kent.
Bernardin, H. J., & Cardy, R. L. (1982). Appraisal accuracy: The ability and motivation to
remember the past. Public Personnel Management Journal, 11, 352-357.
Bernardin, H. J., Orban, J. A., & Carlyle, J. J. (1981). Performance ratings as a function of trust in
appraisal, purpose for appraisal, and rater individual differences. Proceedings of the Acad-
emy of Management, 311-315.
Blum, M. L., & Naylor, J. C. (1968). Industrial psychology. New York: Harper & Row.
DeCotiis, T., & Petit, A. (1978). The performance appraisal process: A model and some testable
propositions. Academy of Management Review, 3, 635-646.
Dickinson, T. L. (1992). Attitudes about performance appraisal. In H. Schuler, J. L. Farr, &
M. Smith (Eds.), Personnel selection and assessment: Individual and organizational per-
spectives (pp. 141-161). Hillsdale, NJ: Lawrence Erlbaum.
Dipboye, R., & de Pontbriand, R. (1981). Correlates of employee reactions to performance
appraisals and appraisal systems. Journal of Applied Psychology, 66, 248-251.
Dobbins, G. H., Cardy, R. L., & Platz-Vieno, S. J. (1990). A contingency approach to appraisal
satisfaction: An initial investigation of the joint effects of organizational variables and
appraisal characteristics. Journal of Management, 16, 619-632.
Giles, W. F., & Mossholder, K. W. (1990). Employee reactions to contextual and session compo-
nents of performance appraisal. Journal of Applied Psychology, 75, 371-377.
Hedge, J. W. (1983, August). A focus on global measures of appraisal system success/failure.
Paper presented at the annual meeting of the Academy of Management, Dallas, TX.
Hedge, J. W., & Borman, W. C. (1995). Changing conceptions and practices in performance
appraisal. In A. Howard (Ed.), The changing nature of work (pp. 451-481). San Francisco:
Jossey-Bass.
Iaffaldano, M. R., & Muchinsky, P. M. (1985). Job satisfaction and job performance: A meta-
analysis. Psychological Bulletin, 97, 251-273.
Jacobs, R., Kafry, D., & Zedeck, S. (1980). Expectations of behaviorally anchored rating scales.
Personnel Psychology, 33, 595-640.
Kavanagh, M. J. (1980, April). Criteria for the evaluation of performance measurement tech-
niques and performance systems. Paper presented at the First Annual Scientist-Practitioner
Conference in Industrial/Organizational Psychology, Virginia Beach, VA.
Kavanagh, M. J. (1982). Evaluating performance. In K. M. Rowland & G. R. Ferris (Eds.), Per-
sonnel management (pp. 187-226). Boston: Allyn & Bacon.
Kavanagh, M. J., & Hedge, J. W. (1983, May). A closer look at correlates of performance
appraisal system acceptability. Paper presented at the annual meeting of the Eastern Acad-
emy of Management, Pittsburgh, PA.
Kavanagh, M. J., Hedge, J. W., Ree, M., Earles, J., & DeBiasi, G. L. (1985, May). Clarification
of some issues in regard to employee acceptability of performance appraisal: Results from
five samples. Paper presented at the annual meeting of the Eastern Academy of Management,
Albany, NY.
Landy, F. J., Barnes, J. R., & Murphy, K. R. (1978). Correlates of perceived fairness and accuracy
of performance evaluation. Journal of Applied Psychology, 63, 751-754.
Landy, F. J., Barnes-Farrell, J. R., & Cleveland, J. N. (1980). Perceived fairness and accuracy of
performance evaluation: A follow-up. Journal of Applied Psychology, 65, 355-356.
Lawler, E. E. (1967). The multitrait-multirater approach to measuring managerial job perfor-
mance. Journal of Applied Psychology, 51, 369-381.
Maier, N.R.F. (1963). Problem-solving discussions and conferences: Leadership methods and
skills. New York: McGraw-Hill.
McAfee, B., & Green, B. (1977). Selecting a performance appraisal method. The Personnel
Administrator, 22, 61-64.
Murphy, K. R., & Cleveland, J. N. (1991). Performance appraisal: An organizational perspective.
Boston: Allyn & Bacon.
O’Connor, E. J., Peters, L. H., Rudolf, C. J., & Pooyan, A. (1982). Situational constraints and
employee affective reactions: A field replication. Group & Organization Studies, 7,
418-428.
Olson, D. M., & Borman, W. C. (1989). More evidence on relationships between the work envi-
ronment and job performance. Human Performance, 2, 113-130.
Peters, L. H., & O’Connor, E. J. (1980). Situational constraints and work outcomes: The influ-
ence of a frequently overlooked construct. Academy of Management Review, 5, 391-397.
Podsakoff, P. M., & Williams, L. J. (1986). The relationship between job performance and job
satisfaction. In E. A. Locke (Ed.), Generalizing from laboratory to field settings (pp. 207-253).
Lexington, MA: Lexington Books.
Taft, R. (1955). The ability to judge people. Psychological Bulletin, 52, 1-23.
Vroom, V. H., & Yetton, P. W. (1973). Leadership and decision-making. Pittsburgh, PA: Univer-
sity of Pittsburgh Press.


Jerry W. Hedge is President and Chief Operating Officer of Personnel Decisions
Research Institutes, Inc., Minneapolis, MN, where his responsibilities include coordinat-
ing business development and research efforts and overseeing the day-to-day operation
of the organization. He also remains active in conducting individual research projects.
He received his doctorate in industrial/organizational psychology from Old Dominion
University in 1982.

Mark S. Teachout is the Executive Director of Learning & Performance Technology at
USAA in San Antonio, TX. He received his Ph.D. in industrial/organizational psychology
from Old Dominion University in 1990 and has published articles on performance
appraisal, training, and training evaluation.
