
Journal of Occupational and Organizational Psychology (2013), 86, 85–99


© 2012 The British Psychological Society
www.wileyonlinelibrary.com

Developing a measure of work uncertainty


Desmond Leach1*, Gareth Hagger-Johnson2, Nadin Doerner3,
Toby Wall1, Nick Turner4, Jeremy Dawson5 and Gudela Grote6
1 Leeds University Business School, UK
2 University College London, UK
3 University of St. Gallen, Switzerland
4 University of Manitoba, Canada
5 University of Sheffield, UK
6 Swiss Federal Institute of Technology, Switzerland

Uncertainty is a key contingency in the relationship between work characteristics and outcomes such as employee performance and well-being. In this paper, we specify and test
a self-report measure of work uncertainty for use in any setting to facilitate research and
decision making regarding the design of work. Using data collected from three diverse
samples, analyses found support for a multi-dimensional model that corresponds to
resource, task, and input/output sources of uncertainty. The scales showed discriminant
validity, and task uncertainty was found to moderate the relationship between job control
and intrinsic job satisfaction in a form consistent with theoretical predictions.

Practitioner Points
• The self-report measure of work uncertainty may be used to evaluate existing work design and facilitate its redesign.
• As research demonstrates, it is critical that the level of job control afforded to employees is congruent with the level of uncertainty they experience. Failure to consider the role of uncertainty in linking job control to outcomes (e.g., performance, well-being) can undermine work redesign investment.

Interest in the concept of uncertainty is long-standing and spans several domains of inquiry, including organizational theory (Burns & Stalker, 1961; Lawrence & Lorsch,
1967), human factors (Clegg, Ravden, Corbett, & Johnson, 1989), total quality
management (Douglas & Judge, 2001; Sitkin, Sutcliffe, & Schroeder, 1994), decision
making (Atuahene-Gima & Li, 2004; Hult, Craighead, & Ketchen, 2010), human resource management (Datta, Guthrie, & Wright, 2005; Wright & Snell, 1998), and occupational health and safety (Grote, 2007; Jackson, 1989). This article concerns the development of a widely applicable measure to assess uncertainty in relation to job characteristics and the broader work context, which we term work uncertainty. In the following sections, we define work uncertainty, discuss its importance to the design of work, and consider measurement issues.

*Correspondence should be addressed to Desmond Leach, Leeds University Business School, Maurice Keyworth Building, Leeds LS2 9JT, UK (e-mail: djl@lubs.leeds.ac.uk).

DOI: 10.1111/joop.12000

Uncertainty and work design


A guiding perspective on uncertainty in organizational theory argues that under stable or
predictable conditions, mechanistic structures (e.g., routinized tasks, centralized decision
making) are appropriate, but that in unpredictable or uncertain environments, organic
structures (e.g., greater flexibility, decentralized decision making) are necessary
(Burns & Stalker, 1961). Particular sources of uncertainty that job incumbents might
experience include those that originate from technology, materials, or customer demands
(Cummings & Blumberg, 1987; Perrow, 1967). Whatever the source, uncertainty or a ‘lack
of predictability in work tasks and requirements’ (Wall, Cordery, & Clegg, 2002, p. 151)
represents a key contingency or moderator in work design theory (Parker, Wall, &
Cordery, 2001). In more detail, Wall et al. (2002) argue that at higher levels of uncertainty,
increasing job control (i.e., an increase in decision-making authority over primary task
execution) is appropriate because it provides an opportunity for employees to learn about
tasks and requirements, leading to improved job performance. Conversely, when
uncertainty is low, increasing control is likely to have little effect; there are, by definition,
fewer problems that require attention and therefore little scope for learning, or for the
exercise of discretion in ways that would enhance job performance. This reasoning suggests that the long-held assumption that enhanced job control or empowerment can be 'advocated as a near-universal recipe for organizational success ... is incorrect' (Wall et al., 2002, p. 148).
The contextual significance of uncertainty is also clearly stated within Griffin, Neal,
and Parker's (2007) model of work-role performance. Their analysis of uncertainty is akin to Wall et al.'s (2002) in that they propose it determines or shapes the degree of
work-role formalization. When uncertainty is high, Griffin et al. (2007) argue that job
descriptions that specify tasks or procedures are not appropriate due to difficulty in
anticipating the nature of work demands or the number of exceptional events. Under such
conditions, they maintain that ‘work roles must emerge dynamically in response to
changing conditions and demands’ (p. 329). Adaptability and proactivity, they reason, are
work behaviours of prime importance to this emergent process and, in turn, to work
performance. When the work environment is more stable and predictable, however,
Griffin et al. (2007) assert that more defined, formal work roles will help to ensure goal
accomplishment: work performance is maintained by compliance with formal require-
ments of the work role.
Field studies provide some support for such theoretical propositions. For example,
Wall, Corbett, Martin, Clegg, and Jackson (1990) investigated the effect of enhanced job
control (i.e., basic duties of loading/unloading and monitoring machines were extended
to include adjustment of mechanisms, preventative maintenance, etc.) on the perfor-
mance of computer-controlled assembly machines. Following the change, a significant
improvement in performance from reduced downtime was recorded for high-variance
machines, namely machines that were prone to operational problems caused by ‘the
delicacy of components … variability in component specifications … and the [unreliable]
nature of mechanisms’ (p. 693), but not for low-variance ones (for complementary
findings see Cordery, Morrison, Wright, & Wall, 2010). Furthermore, Wright and
Cordery (1999) found a positive relationship between job control and job attitudes
(intrinsic job satisfaction and motivation) at higher levels of uncertainty and a negative
relationship at lower levels. For those employees given high control but under low
uncertainty conditions, the researchers concluded: ‘Employees … were given clear and
strong expectations by management that they would be called on to exercise a high level
of problem solving responsibilities with respect to day-to-day production. … Employees
expecting to make considerable use of their skills and abilities with respect to the
job found few opportunities to do so, leading to resentment and demoralization’
(p. 461).

Uncertainty and measurement issues


Within work design research, the means adopted to examine uncertainty are often
setting-specific, such as in the Wall et al. (1990) study discussed earlier. Of relevance to the present study, though, are Mullarkey, Jackson, Wall, Wilson, and Grey-Taylor's (1997) self-report measures of technological uncertainty and technological abstractness.
Although these measures provided some guidance in creating our items, they were
deemed too narrow to warrant inclusion in the analysis because they concern specific
aspects of machine operation, including fine-tuning frequency and the need for
adjustment. It is measurement issues such as these that highlight the need for
standardized measures of uncertainty for use in work design research and practice (cf.
Jackson, Wall, Martin, & Davids, 1993). Measures of this kind would provide a consistent
means to assess uncertainty, enabling the development of normative data for bench-
marking purposes and predictions about the design of work in any particular setting.
Accordingly, our aim is to develop items that relate to common work factors, such as
equipment reliability and task predictability. The uncertainty construct, however, is
broad; the sources of ‘not knowing for sure’ (Grote, 2009, p. 11) at work are diverse, of
which some will be unique to a specific setting and thus require bespoke measurement
(Pollard, 2001; Song & Montoya-Weiss, 2001). It is important, therefore, to note that our
endeavour concerns development of the means to permit examination of theoretical
assumptions beyond established – typically industrial – contexts (cf. Grant & Parker,
2009) rather than to capture the construct in its entirety.
Although uncertainty has been found to moderate relationships between job control and outcomes, it has also been argued to covary with job control, with enhanced responsibility for task management increasing the degree of uncertainty experienced by employees (Slocum & Sims, 1980). Process clarity (Sawyer, 1992), or the extent to which employees
are clear about how to accomplish work tasks, might also capture the experience of
uncertainty. For construct validation purposes, however, we propose that such concepts
are distinct from work uncertainty. Based on Wright and Cordery (1999), it is plausible
that perceptions of job control and clarity are determined by managerial practice and
uncertainty by inherent aspects of the work context (e.g., technological variance). For
instance, an individual might have decision-making authority but still experience
uncertainty, say, in regard to the order in which tasks are performed.
A final point is in order. Given that research into work design typically involves a large
number of variables, as a result of both theoretical advancement and development of
multivariate data analytic techniques (such as structural equation modelling), there is a
practical need for any new measures to be as short as possible.

The present study


Our approach to the development of a measure of work uncertainty involved four stages
(cf. Chen, Gully, & Eden, 2001; Hinkin, 1998). In the first, we generated a pool of items and
examined face and content validity to inform item selection. The second stage involved an
examination of factor structure using exploratory and confirmatory factor analyses, and
we conducted multiple group/sample confirmatory factor analysis (CFA) to evaluate the
generalizability of the factor structure across two contrasting samples. Internal consis-
tency was also examined to determine the reliability of the measure along with its
discriminant validity. In stage three, analysis sought to verify that the factor structure
identified was not due to positive/negative item-wording effects. Finally, stage four
involved an initial test of the predictive validity of the measure.

STAGE 1
Within the work design literature, uncertainty typically relates to factors that affect the
completion of work tasks. The discussion of empirical studies above illustrates the role of
technological (reliability) and operational (complexity) factors in this regard. In the
literature more generally, there is much discussion of these particular factors or sources of
uncertainty (Argote, 1982; Cherns, 1976; Cummings & Blumberg, 1987; Parker & Wall,
1998; Perrow, 1967, 1970; Slocum & Sims, 1980), although the terminology is not
consistent. Additionally, the supply of information (cf. Lawrence & Lorsch, 1967) and the
supply of materials (Cherns, 1976) represent key forms of uncertainty with much
potential to affect task execution: an unreliable supply of both will inevitably affect work
outcomes. Collectively, these origins underpin Parker et al.’s (2001) account of
uncertainty, forming a core contingency variable in their theoretical explication of work
design and therefore were used to guide our efforts in item generation (Hinkin, 1998).
Working individually, four of the authors initially created a pool of items that
corresponded to technology reliability, task complexity, and consistency of information
and materials supply. Via consensus, items that were judged to lack generalizability or to be difficult to understand (i.e., poorly worded, ambiguous) were excluded (Bennett &
Robinson, 2000; Jackson et al., 1993). Furthermore, items that we considered
would invoke response bias due to potential ‘face saving’ reactions (e.g., ‘Do you lack
knowledge about …?’) were also excluded. This selection process produced 20 items. To
capture perceived levels of uncertainty, a 5-point response scale was used: 'Rarely or never' (1), 'Occasionally' (2), 'Often' (3), 'Very often' (4), and 'Constantly' (5).

Content validity
Of the 20 items, we deleted four because of their focus on supervisor/colleague support
rather than on the uncertainty construct. Following Chen et al.’s (2001) approach, we
also asked 12 organizational psychologists to assess the content validity of the items. They
were provided with a definition of work uncertainty and were asked to indicate whether
or not the items encapsulate the uncertainty construct or some other construct and to
assess the overall cohesiveness of the items. Seven items were judged to lack suitability by
the majority of the assessors (at least 9/12). Four of these items were judged to capture job
control rather than the uncertainty construct (e.g., ‘Do you take on unexpected tasks?’,
‘Do you have to overcome equipment problems?’), one item was deemed to be on the
periphery of what most employees could be expected to answer (‘Are the requirements of
your external suppliers consistent?'), and the remaining two items were judged to be difficult to
comprehend and answer accurately (e.g., ‘Does the equipment you use almost always
work properly?’). Nine items, therefore, were selected for further analysis.

STAGE 2
In this stage, exploratory factor analysis (EFA, using principal component analysis) was
initially used to establish the number of factors required to explain the correlations among
the set of nine items (Tabachnick & Fidell, 2007), which involved sample 1 data (see
below). CFA was then conducted on sample 2 to provide a more rigorous examination of
the data (Hinkin, 1998), testing the goodness of fit between our specified model and the
data (Brown, 2006) using a structural equation modelling approach. We then re-
analysed sample 1 using the same CFA to permit tests of equivalence in factor structure
across the samples (i.e., multiple sample analysis).
Finally, we tested whether the uncertainty measure is distinct from process clarity
(Sawyer, 1992) and job control ( Jackson et al., 1993) using sample 2. The former refers
to the extent to which employees are clear about how to accomplish work tasks and
the latter concerns decision-making responsibility.

Method
Samples and procedure
The first sample comprised shop floor employees who worked at a moulding
manufacturing plant in Ontario, Canada. A questionnaire survey was completed during
normal work hours. Employees were informed that all responses would remain
confidential to the research team. The sample consisted of 203 full-time employees
(65% response rate) of whom 93% were male. The mean age was 35.87 years (SD = 9.91),
with a mean organizational tenure of 8.19 years (SD = 7.96). Participants were asked to
state their highest level of educational qualification, which was rank-ordered using a 10-
point scale: 1–5 (Grade 8 and under to Grade 12), 6 (Grade 13/high school), 7 (college), 8
(bachelor’s degree), 9 (master’s degree), and 10 (doctoral degree). The mean qualification
level was 5.02 (SD = 1.36). The second sample consisted of 147 US and Canadian
employees of whom 61.4% were female. In contrast to sample 1, these participants
completed a Web-based questionnaire survey (via a commercially purchased e-mail
service, www.studyresponse.com) and represented a range of occupations across a range
of employment sectors. The mean age of this sample was 43.53 years (SD = 10.37), and
the mean tenure was 8.23 years (SD = 7.56).

Measures
Process clarity was measured using three items from Sawyer’s (1992) measure of role
clarity: ‘How to go about getting my job done’, ‘Whether the procedures I use to do my job
are correct and proper’, and ‘How to divide my time among the tasks I do in my job’
(α = .77). The response options were 'Very unclear' (1) to 'Very clear' (6).
Job control was assessed using four items based on Jackson et al. (1993): 'Do you decide on the order in which you do things?', 'Do you decide when to start the next job?', 'Do you decide when to finish a piece of work?', and 'Do you plan your own work?' (α = .86). Responses were recorded on a 5-point response scale from 'Not at all' (1) to 'A great deal' (5).
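For readers wishing to reproduce reliability estimates of this kind, the following is a minimal sketch of Cronbach's alpha in Python, assuming the item responses are held in a pandas DataFrame with one column per item (the file and column names are hypothetical, not those used in the study):

import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of scale items (rows = respondents, columns = items)."""
    items = items.dropna()                          # listwise deletion of missing responses
    k = items.shape[1]                              # number of items
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Example: alpha for the four job-control items (hypothetical column names).
df = pd.read_csv("survey.csv")
print(round(cronbach_alpha(df[["jc1", "jc2", "jc3", "jc4"]]), 2))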

Results
Exploratory analysis
Principal component analysis with varimax rotation was performed to identify the
number of components that would explain the covariance among the nine items
(sample 1 data). Three components were identified, using the scree criterion, and
accounted for 72.73% of the cumulative variance. The first component refers to the
means to execute tasks effectively (e.g., equipment reliability, availability of informa-
tion), which we term resource uncertainty, and the second relates to complexity or task
uncertainty (e.g., variability of tasks, unexpected problems). The third component concerns the people who supply the individual with information, materials, and so on, and those to whom the individual, in turn, supplies, say, a service, and whose demands may affect the individual's work. This positions the respondent/employee as both a receiver and a provider of information, materials, or a service, which we label input/output uncertainty. The criteria for retaining items were a moderate or high loading on the appropriate component, with loadings at .30 or lower on the other two components (Hair, Anderson, Tatham, & Black, 1998). If an item loaded higher than .30 on two components, it was retained if the loading on the appropriate component was at least twice the size of the cross-loading (Podsakoff, Ahearne, & MacKenzie, 1997). Component factor loadings are shown in Table 1. Scores created for each component had acceptable internal consistency: α = .76, .79, and .87, respectively.
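As an illustration of this exploratory step, the following sketch runs a principal component extraction with varimax rotation using the Python factor_analyzer package (a tooling assumption; the authors do not report their software, and the file and column names u1-u9 are hypothetical):

import pandas as pd
from factor_analyzer import FactorAnalyzer

# Nine uncertainty items from sample 1 (hypothetical file and column names).
items = pd.read_csv("sample1_items.csv")[[f"u{i}" for i in range(1, 10)]].dropna()

fa = FactorAnalyzer(n_factors=3, rotation="varimax", method="principal")
fa.fit(items)

# Rotated loadings (cf. Table 1) and eigenvalues for inspecting the scree criterion.
loadings = pd.DataFrame(fa.loadings_, index=items.columns,
                        columns=["Resource", "Task", "Input/output"])
eigenvalues, _ = fa.get_eigenvalues()
print(loadings.round(2))
print(eigenvalues.round(2))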

Confirmatory analysis
The three-factor solution identified in the EFA of sample 1, with three items per factor, was an excellent fit to the data by several criteria in sample 2 (χ² = 45.94, df = 24, p = .01; CFI = 0.97, TLI = 0.95, RMSEA = 0.09) and also in sample 1 (χ² = 46.42, df = 24, p = .004; CFI = 0.98, TLI = 0.96, RMSEA = 0.06); it is shown in Figure 1, which includes the items and factor loadings. By comparing a model in which all parameters were free to vary across both samples (χ² = 92.36, df = 48, RMSEA = 0.07) with one in which the factor loadings were held equal (χ² = 99.81, df = 54, RMSEA = 0.07), the invariance of the factor loadings across samples was demonstrated.
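A sketch of how a comparable CFA and the loading-invariance comparison could be run is given below, using the Python semopy package for the measurement model and scipy for the chi-square difference test (both are assumptions about tooling; item names u1-u9 are hypothetical, and the chi-square values are those reported above):

import pandas as pd
import semopy
from scipy.stats import chi2

# Three-factor measurement model in lavaan-style syntax.
MODEL_DESC = """
Resource =~ u1 + u2 + u3
Task =~ u4 + u5 + u6
InputOutput =~ u7 + u8 + u9
"""

sample2 = pd.read_csv("sample2_items.csv").dropna()
cfa = semopy.Model(MODEL_DESC)
cfa.fit(sample2)
print(semopy.calc_stats(cfa))  # chi-square, CFI, TLI, RMSEA, etc.

# Invariance check: chi-square difference between the freely estimated multi-sample
# model and the model with factor loadings held equal (values reported in the text).
chi2_free, df_free = 92.36, 48
chi2_equal, df_equal = 99.81, 54
delta_chi2 = chi2_equal - chi2_free   # 7.45
delta_df = df_equal - df_free         # 6
p_value = chi2.sf(delta_chi2, delta_df)
print(round(delta_chi2, 2), delta_df, round(p_value, 3))  # non-significant: loadings invariant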

Discriminant validity
Confirmatory factor analysis was conducted to test whether the subscales of uncertainty
are distinct from clarity and job control (Anderson & Gerbing, 1988; Chen et al., 2001).
Specifically, the fit of three different models was compared: (1) a five-factor model in which resource uncertainty, task uncertainty, input/output uncertainty, clarity, and job control were independent of one another; (2) a three-factor model in which the three subscales of uncertainty were set to correlate with one another but were independent of clarity and job control; and (3) a one-factor model in which the correlations among the five variables were fixed at 1. The five-factor solution provided a better fit to the data (χ² = 116.26, df = 90, p = .03; CFI = 0.98, TLI = 0.97, RMSEA = 0.05) than the three-factor solution

Table 1. Principal component analysis for nine items in sample 1 (manufacturing sample). Rotated loadings are shown after each item in the order Resource / Task / Input/output.

1. Does the equipment you use work reliably? (.69 / .05 / .27)
2. Is the supply of materials you need to do your job well consistent? (.83 / .04 / .18)
3. Is the supply of information you need to do your job consistent? (.75 / .13 / .31)
4. Do your tasks vary on a day-to-day basis with no or little(a) warning? (.16 / .80 / .00)
5. Do you come across unexpected problems in your work? (.01 / .86 / .08)
6. Does the order in which you do tasks change with no or little(a) warning? (.05 / .87 / .07)
7. Can you rely on your suppliers (i.e., the people on whom you depend to do your job well) to deliver on time? (.36 / .02 / .80)
8. Can you rely on your suppliers (i.e., the people on whom you depend to do your job well) to deliver exactly what you asked for? (.19 / .01 / .91)
9. Are the requirements of your internal customers (i.e., the people within your company to whom you supply, for instance, information, products, materials or services) consistent? (.28 / .02 / .83)

Notes. N = 174 for sample 1 after excluding cases with missing data.
(a) 'No or little' is intentional wording to emphasize 'no'.

(χ² = 252.69, df = 97, p < .001; CFI = 0.90, TLI = 0.86, RMSEA = 0.11) and the one-factor solution (χ² = 749.11, df = 100, p < .001; CFI = 0.57, TLI = 0.42, RMSEA = 0.21). The chi-square difference (Δχ²) test (Loehlin, 1992) supports the superiority of the five-factor model because the values exceed the recommended levels for p < .001 (Δχ² between the five-factor model and the three-factor model = 136.43, df = 7; Δχ² between the five-factor model and the one-factor model = 632.85, df = 10). This indicates that
the five-factor model fits the data significantly better than the three-factor model and the
one-factor model. To further assess discriminant validity, we followed Fornell and
Larcker’s (1981) suggestion that the average variance extracted value should be higher
than the squared correlation between the dimensions. The results indicate that this
criterion was met (Table 2). Overall, the results provide support for the three uncertainty
subscales being distinct from clarity and job control.
Table 2 also shows that the uncertainty scales are uncorrelated except for resource and input/output, which are moderately and positively correlated (r = .55, p < .01), reflecting the measurement of different aspects of supply-related uncertainty. In addition,
Table 2 shows reasonable relationships between clarity and resource uncertainty
(r = .34, p < .01) and clarity and input/output uncertainty (r = .28, p < .01), indicating
some support for convergent validity.
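The Fornell and Larcker (1981) check can be reproduced with a few lines: compute each factor's average variance extracted (AVE) as the mean of its squared standardized loadings and compare it with the squared correlation between factors. The sketch below uses illustrative placeholder loadings and the resource/input-output latent correlation (r = .55) reported above:

import numpy as np

def average_variance_extracted(std_loadings):
    # AVE = mean of the squared standardized loadings of a factor's items.
    loadings = np.asarray(std_loadings, dtype=float)
    return float(np.mean(loadings ** 2))

ave_resource = average_variance_extracted([0.66, 0.80, 0.79])      # illustrative loadings
ave_input_output = average_variance_extracted([0.83, 0.95, 0.82])  # illustrative loadings
shared_variance = 0.55 ** 2   # squared latent correlation between the two factors

# Fornell-Larcker criterion: each AVE should exceed the shared variance.
print(round(ave_resource, 2), round(ave_input_output, 2), round(shared_variance, 2))
print(ave_resource > shared_variance and ave_input_output > shared_variance)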
Figure 1. Confirmatory factor analysis (CFA) path diagram showing the items and factor loadings for the three latent factors (resource, task, and input/output uncertainty). Upper/lower factor loadings refer to sample 1/sample 2, respectively.

Table 2. Descriptive statistics and bivariate correlations between the uncertainty subscales (resource, task, and input/output uncertainty) and other relevant constructs

Variables         Mean   SD    α     AVE   1      2     3      4     5
1. Resource       3.86   0.89  .73   .51   -      .01   .30    .12   .01
2. Task           2.81   1.21  .70   .66   .08    -     .01    .03   .03
3. Input/output   3.63   1.16  .77   .78   .55**  .08   -      .08   .03
4. Clarity        5.45   0.94  .79   .82   .34**  .16   .28**  -     .03
5. Job control    3.92   1.07  .75   .69   .08    .17*  .18*   .16   -

Notes. AVE = average variance extracted. The lower left triangle elements are zero-order correlations between the latent variables; the upper right triangle elements are squared correlations. *p < .05, **p < .01.

STAGE 3
Although support was found for the three-dimensional measure, a possibility is that item
groupings are a product or artefact of item wording (Greenberger, Chen, Dmitrieva, &
Farruggia, 2003; Jackson et al., 1993): the resource and input/output items are positively
worded, while the task items are negatively worded. This, however, was considered
during the item selection stage. More specifically, it was not our objective to balance
polarity per se but rather to include items that were the easiest to comprehend. This
resulted in a combination of positively and negatively keyed items that we considered
would enhance, rather than reduce, measure accuracy. The aim of the following CFA was
to examine the contribution of item wording to factor structure and to assess the validity
of our wording strategy.

Method
Sample and procedure
The sample comprised 188 UK full-time employees. Peers and colleagues of the UK-based
authors were invited to complete an anonymous online questionnaire, hosted at Bristol
Online Surveys (www.survey.bris.ac.uk). Invitations were sent by e-mail, which outlined
the purpose of the study and provided a link to the questionnaire. Participants were asked
to forward the e-mail to their colleagues to increase the final sample size. Given that the
sole purpose of this sample was to permit examination of item-wording effects, being
employed full time at the time of participation was the only inclusion criterion.

Measures
All nine uncertainty items were administered alongside a parallel set of items, rewritten to
counterbalance positively and negatively worded items (example alternative items:
resource uncertainty ‘Is the equipment you use unreliable?’; task uncertainty ‘Do your
daily tasks vary, with good notice?’; input/output uncertainty ‘Are you unable to rely on
your suppliers … to deliver on time?’). To evaluate the impact of negatively worded task
items on the factor structure, the CFA procedure was repeated on sample 3 data, replacing the three resource and the three input/output uncertainty items with their negatively worded equivalents.

Results
The three-factor model fitted the data equally well on all criteria except χ², regardless of whether all items were worded negatively (reflecting more uncertainty; χ² = 45.83, df = 23, p < .001, CFI = 0.97, TLI = 0.95, RMSEA = 0.07) or whether only the task items were worded negatively (task items only reflecting more uncertainty; χ² = 37.80, p = .03, CFI = 0.98, TLI = 0.96, RMSEA = 0.06). In the all-negative version, Lagrange multiplier
tests suggested that it was necessary to allow the residual variances of two items to
correlate: ‘Does the equipment you use work reliably?’ and ‘Is the supply of materials you
need to do your job well consistent?’ In the example where only task items were
negatively worded, two items also required correlated residuals: ‘Are you unable to rely on
your suppliers…to deliver on time?’ and ‘Are you unable to rely on your suppliers…to
deliver exactly what you asked for?’ Allowing residual errors to correlate is not considered
problematic here because the results from the multiple sample analysis suggested that the
scale has invariant residuals across both samples 1 and 2. Both pairs of correlated errors
occurred within constructs, not between them, further mitigating any concerns that
sample 3 differed from samples 1 and 2. The results illustrate that negatively worded items
do not explain the factor structure identified previously. Therefore, the three-factor
solution is considered robust and not sensitive to item-wording effects. As originally
worded, Cronbach’s alpha for resource, task, and input/output scales were .70, .87, and
.72, respectively.
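For completeness, a residual covariance of the kind indicated by the Lagrange multiplier tests can be added directly to the model description. The sketch below again assumes semopy's lavaan-style syntax and hypothetical item names, with u1 and u2 standing for the equipment-reliability and materials-supply items:

import pandas as pd
import semopy

# Three-factor model with one residual covariance between two resource items.
MODEL_DESC = """
Resource =~ u1 + u2 + u3
Task =~ u4 + u5 + u6
InputOutput =~ u7 + u8 + u9
u1 ~~ u2
"""

sample3 = pd.read_csv("sample3_items.csv").dropna()
cfa = semopy.Model(MODEL_DESC)
cfa.fit(sample3)
print(semopy.calc_stats(cfa))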

STAGE 4
In this final stage, we conducted analysis similar to that of Wright and Cordery (1999). As
previously discussed (see Introduction), these researchers found a positive relationship
between job control (assessed objectively but confirmed by self-reports) and intrinsic
satisfaction at higher levels of uncertainty, findings that support work design theory (Parker et al., 2001). Our aim was to replicate their findings, thereby demonstrating that our self-report measure of uncertainty yields a pattern of results similar to that of an objective measure (predictability of day-to-day operating characteristics). Of the three scales of uncertainty, however, the task-related items most closely resemble Wright and Cordery's measure. We therefore hypothesize that task uncertainty will moderate the relationship between job control and intrinsic satisfaction such that this relationship will be positive at higher levels of uncertainty and negative at lower levels.

Method
Sample
Wright and Cordery’s (1999) sample comprised plant operators. For consistency, this
stage involved the manufacturing employees only (see sample 1, Stage 2).

Measures
Intrinsic job satisfaction was measured using Warr, Cook, and Wall’s (1979) 7-item scale
(as per Wright & Cordery, 1999). These items ask respondents how satisfied they are with,
for instance, the recognition they receive for good work and the amount of responsibility
they are given (α = .81). Responses were recorded on a 7-point scale from 'Extremely
dissatisfied’ (1) to ‘Extremely satisfied’ (7).
Job control was examined using the same items as reported in stage 2.

Statistical analysis
In a moderated regression analysis, job control and task uncertainty were entered as predictors of intrinsic satisfaction, holding age, gender, educational level, and organizational tenure constant. A job control × task uncertainty interaction term was included in the model to capture any modification of the main effect of job control under different levels of uncertainty. All continuous variables were centred prior to analysis to facilitate interpretation of the coefficients.
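A sketch of this moderated regression in Python with statsmodels is shown below (a tooling assumption; the variable names are hypothetical placeholders for the measures described above):

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("sample1_survey.csv").dropna(
    subset=["satisfaction", "job_control", "task_uncertainty",
            "age", "gender", "education", "tenure"])

# Centre the continuous variables so that lower-order terms are interpretable.
for col in ["job_control", "task_uncertainty", "age", "education", "tenure"]:
    df[col + "_c"] = df[col] - df[col].mean()

# Intrinsic satisfaction regressed on the controls, the main effects, and the
# job control x task uncertainty interaction ('*' expands to both main effects
# plus their product term).
model = smf.ols(
    "satisfaction ~ age_c + gender + education_c + tenure_c"
    " + job_control_c * task_uncertainty_c",
    data=df).fit()
print(model.summary())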

Results
Of the control variables, higher age significantly predicted higher intrinsic job
satisfaction (b = .02, p = .04). As predicted, a significant interaction term was observed
between job control and task uncertainty (b = .26, p < .001). Post-hoc probing of this
interaction using regions of significance revealed that there was a significant, positive
relationship between task uncertainty and intrinsic job satisfaction when job control was
above 3.79, and there was a significant, negative relationship when job control was
below 1.90. A plot of this moderated relationship is shown in Figure 2 (as per Wright &
Cordery, 1999).
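The post-hoc probing can be reproduced by computing the simple slope of task uncertainty at a chosen level of job control and checking where that slope differs from zero, a Johnson-Neyman style calculation. The sketch below assumes model is the fitted regression object from the previous sketch and uses its centred variable names:

import numpy as np
from scipy.stats import t

params = model.params
cov = model.cov_params()
b_task = params["task_uncertainty_c"]
b_int = params["job_control_c:task_uncertainty_c"]

def simple_slope(z):
    # Slope of task uncertainty on satisfaction at centred job-control value z,
    # with its standard error derived from the coefficient covariance matrix.
    slope = b_task + b_int * z
    variance = (cov.loc["task_uncertainty_c", "task_uncertainty_c"]
                + z ** 2 * cov.loc["job_control_c:task_uncertainty_c",
                                   "job_control_c:task_uncertainty_c"]
                + 2 * z * cov.loc["task_uncertainty_c",
                                  "job_control_c:task_uncertainty_c"])
    return slope, np.sqrt(variance)

critical_t = t.ppf(0.975, df=model.df_resid)
for z in np.linspace(-2, 2, 41):   # scan centred job-control values
    slope, se = simple_slope(z)
    if abs(slope / se) > critical_t:
        print(f"job control (centred) = {z:+.1f}: simple slope {slope:.2f} is significant")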

GENERAL DISCUSSION
The aim of this study was to develop a short, easily administered, and generally applicable survey measure with which to assess employee perceptions of work uncertainty.
Figure 2. Task uncertainty predicts higher intrinsic job satisfaction at higher levels of job control and lower intrinsic satisfaction at lower levels of control (low and high levels represent one SD below and above the mean).

The findings indicate good internal consistency and construct validity for a multi-
dimensional measure comprising three distinct scales: resource, task, and input/output
uncertainty. We found evidence that the scales are distinct from related constructs,
namely process clarity and job control. Furthermore, as predicted, task uncertainty was
found to moderate relationships between variables in a manner similar to that of an
objective measure of uncertainty, suggesting that self-report and objective measures of
uncertainty yield equivalent results (Ohly, Sonnentag, & Pluntke, 2006). Given that data
were collected from three diverse samples, we contend that the scales provide the means
to examine uncertainty in manufacturing and non-manufacturing settings.
Although the three scales of uncertainty arguably cover core aspects of work, we
would nevertheless encourage identification and examination of other potential sources
of uncertainty (cf. Morgeson, Dierdorff, & Hmurovic, 2010). In particular, we did not
examine the work behaviours of colleagues as a source of uncertainty (cf. De Cremer
et al., 2010). In addition, we focused on individual perceptions of uncertainty. A natural
extension to the present study would be the development of a team-level measure of
uncertainty: ‘Is the supply of information your team needs to do its work consistent?’
‘Does your team come across unexpected problems in its work?’ Given the prevalence of
teamworking and parallel assumptions concerning autonomy outcomes under varying
conditions of uncertainty (Cordery et al., 2010), this would be a worthy endeavour.
In regard to recent conceptual developments within the field of work design, self-
report measures of uncertainty could be used to appraise job crafting (Wrzesniewski &
Dutton, 2001) and dynamic models of job design (Clegg & Spencer, 2007). Job crafting is a
form of proactive behaviour (Grant & Parker, 2009) that treats job adjustments as a
bottom-up (employee driven) rather than a top-down (supervisor/manager driven)
process. A key proposition is that ‘autonomy in the job leads to perceived opportunities
for job crafting …’ (Wrzesniewski & Dutton, 2001, p. 184). We contend, though, that job
control in conjunction with perceptions of uncertainty would be most predictive of job
crafting and associated outcomes (e.g., enhanced work meaningfulness). Within Clegg
and Spencer’s (2007) circular model of job design, both bottom-up and top-down forms of
job change are discussed. Again, we assert that the proposed outcomes of this account of
the job design process (e.g., gains in performance) would be particularly dependent upon
prevailing levels of uncertainty. To examine the role of uncertainty in determining the
outcomes of such models, our measures of uncertainty should be of value in permitting comparisons across different work settings.
Consistent with work design theory (Parker et al., 2001; Wall & Jackson, 1995) and
empirical studies (Cordery et al., 2010; Leach, Wall, & Jackson, 2003), it is important that
the level of control or autonomy and the degree of uncertainty that employees experience
are congruent. For instance, affording employees greater control under conditions of low
uncertainty might not yield expected attitudinal and/or performance gains (Wall et al.,
1990; Wright & Cordery, 1999). As such, the core practical contribution of the present
study concerns use of the uncertainty scales by managers/supervisors/practitioners to
assess the work context prior to job/work redesign.
Although our findings are encouraging, a limitation of the present study concerns the
modest assessment of convergent validity. Our study provides some support for
convergent validity because resource and input/output uncertainty positively and
significantly correlate with the related construct of clarity. However, to better assess
convergent validity, it would be desirable to test whether our scales relate to other similar
constructs, such as problem-solving demands (Wall, Jackson, & Mullarkey, 1995).
Furthermore, a potential limitation concerns the use of a common instrument with which
to collect data or common method bias (Podsakoff, MacKenzie, Lee, & Podsakoff, 2003).
This effect cannot be ruled out, but the significant interaction – consistent with theory –
suggests that it is unlikely that method bias was problematic (Evans, 1985). In addition,
exploratory and confirmatory analyses indicate that a multi-factor model, rather than a
single factor, represents the best fit to the data. To test the measures more robustly than we
were able to do in the present study, however, longitudinal research designs would be
informative. For instance, it would be worthwhile comparing perceptual ratings with
objective records. Data collected on multiple occasions would permit a rigorous
examination of the accuracy of the self-report measures, particularly if an incident were to
occur that resulted in an increase or decrease in the level of uncertainty. Importantly, this
type of research design (incorporating a control group, if practicable) would also allow
examination of the effects of any such incidents/change, that is, depending on initial levels
of uncertainty (self-report and objective), hypothesis testing in connection with employee
well-being, work attitudes, and performance could be undertaken (Wall et al., 2002).
Lastly, although studies conducted over the longer term would be informative, variation in
type and level of uncertainty might nevertheless occur on a daily basis or throughout the
day. Through the use of experience sampling techniques (Daniels, Boocock, Glover,
Hartley, & Holland, 2009), a more nuanced understanding of relationships between
uncertainty, job/work design, and outcomes (e.g., well-being, performance) might be
achievable. These types of study, therefore, would not only provide a finer-grained analysis
of validity than we were able to undertake in the present study, but would also contribute
more generally to theoretical and practical understanding of work design.

References
Anderson, J. C., & Gerbing, D. W. (1988). Structural equation modeling in practice: A review and
recommended two-step approach. Psychological Bulletin, 103, 411–423. doi:10.1037/0033-
2909.103.3.411
Argote, L. (1982). Input uncertainty and organizational coordination in hospital emergency units.
Administrative Science Quarterly, 27, 420–434. doi:10.2307/2392320

Atuahene-Gima, K., & Li, H. (2004). Strategic decision comprehensiveness and new product
development outcomes in new technology ventures. Academy of Management Journal, 47,
583–597. doi:10.2307/20159603
Bennett, R. J., & Robinson, S. L. (2000). Development of a measure of workplace deviance. Journal
of Applied Psychology, 85, 349–360. doi:10.1037/0021-9010.85.3.349
Brown, T. A. (2006). Confirmatory factor analysis for applied research. New York: The Guilford
Press.
Burns, T., & Stalker, G. M. (1961). The management of innovation. London, UK: Tavistock.
Chen, G., Gully, S. M., & Eden, D. (2001). Validation of a new general self-efficacy scale.
Organizational Research Methods, 4, 62–83. doi:10.1177/109442810141004
Cherns, A. A. (1976). The principles of socio-technical systems design. Human Relations, 29, 783–
792. doi:10.1177/001872677602900806
Clegg, C., & Spencer, C. (2007). A circular and dynamic model of the process of job design. Journal
of Occupational and Organizational Psychology, 80, 321–339. doi:10.1348/096317906X113211
Clegg, C. W., Ravden, S. J., Corbett, J. M., & Johnson, G. I. (1989). Allocating function in computer-
aided manufacturing: A review and a new method. Behaviour and Information Technology, 8,
175–190. doi:10.1080/01449298908914550
Cordery, J. L., Morrison, D., Wright, B. M., & Wall, T. D. (2010). The impact of autonomy and task
uncertainty on team performance: A longitudinal field study. Journal of Organizational
Behavior, 31, 240–258. doi:10.1002/job.657
Cummings, T., & Blumberg, M. (1987). Advanced manufacturing technology and work design. In T.
D. Wall, C. W. Clegg & N. J. Kemp (Eds.), The human side of advanced manufacturing
technology (pp. 37–60). New York: Wiley.
Daniels, K., Boocock, G., Glover, J., Hartley, R., & Holland, J. (2009). An experience sampling study
of learning, affect, and the demands control support model. Journal of Applied Psychology, 94,
1003–1017. doi:10.1037/a0015517
Datta, D. K., Guthrie, J. P., & Wright, P. M. (2005). Human resource management and labor
productivity: Does industry matter? Academy of Management Journal, 48, 135–145.
doi:10.5465/AMJ.2005.15993158
De Cremer, D., Brockner, J., Fishman, A., van Dijke, M., van Olffen, W., & Mayer, D. M. (2010).
When do procedural fairness and outcome fairness interact to influence employees’ work
attitudes and behaviors? The moderating effect of uncertainty. Journal of Applied Psychology,
95, 291–304. doi:10.1037/a0017866
Douglas, T. J., & Judge, W. Q. (2001). Total quality management implementation and competitive
advantage: The role of structural control and exploration. Academy of Management Journal,
44, 158–169. doi:10.2307/3069343
Evans, M. G. (1985). A Monte Carlo study of the effects of correlated method variance in moderated
multiple regression analysis. Organizational Behavior and Human Decision Processes, 36,
305–323. doi:10.1016/0749-5978(85)90002-0
Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable
variables and measurement error. Journal of Marketing Research, 18, 39–50. doi:10.2307/
3151312
Grant, A. M., & Parker, S. K. (2009). Redesigning work design theories. The Academy of
Management Annals, 3, 317–375. doi:10.1080/19416520903047327
Greenberger, E., Chen, C., Dmitrieva, J., & Farruggia, S. P. (2003). Item-wording and the
dimensionality of the Rosenberg self-esteem scale: Do they matter? Personality and Individual
Differences, 35, 1241–1254. doi:10.1016/S0191-8869(02)00331-8
Griffin, M. A., Neal, A., & Parker, S. K. (2007). A new model of work role performance: Positive
behavior in uncertain and interdependent contexts. Academy of Management Journal, 50,
327–347. doi:10.5465/AMJ.2007.24634438
Grote, G. (2007). Understanding and assessing safety culture through the lens of organizational
management of uncertainty. Safety Science, 45, 637–652. doi:10.1016/j.ssci.2007.04.002

Grote, G. (2009). Management of uncertainty. London, UK: Springer.


Hair, J. F., Anderson, R. E., Tatham, R. L., & Black, W. C. (1998). Multivariate data analysis.
(5th ed.) Englewood Cliffs, NJ: Prentice Hall.
Hinkin, T. R. (1998). A brief tutorial on the development of measures for use in survey
questionnaires. Organizational Research Methods, 1, 104–121. doi:10.1177/
109442819800100106
Hult, G. T. M., Craighead, C. W., & Ketchen, D. J. (2010). Risk uncertainty and supply chain
decisions: A real options perspective. Decision Sciences, 41, 435–458. doi:10.1111/j.1540-
5915.2010.00276.x
Jackson, P. R., Wall, T. D., Martin, R., & Davids, K. (1993). New measures of job control, cognitive
demands, and production responsibility. Journal of Applied Psychology, 78, 753–762.
doi:10.1037/0021-9010.78.5.753
Jackson, S. E. (1989). Does job control control job stress? In S. L. Sauter, J. J. Hurrell & C. L. Cooper
(Eds.), Job control and worker health (pp. 25–53). New York: Wiley.
Lawrence, P. R., & Lorsch, J. (1967). Organization and the environment. Cambridge, MA: Harvard
University Press.
Leach, D. J., Wall, T. D., & Jackson, P. R. (2003). The effect of empowerment on job knowledge: An
empirical test involving operators of complex technology. Journal of Occupational and
Organizational Psychology, 76, 27–52. doi:10.1348/096317903321208871
Loehlin, J. C. (1992). Latent variable models: An introduction to factor, path, and structural
analysis. Hillsdale, NJ: Erlbaum.
Morgeson, F. P., Dierdorff, E. E., & Hmurovic, J. L. (2010). Work design in situ: Understanding the
role of occupational and organizational context. Journal of Organizational Behavior, 31, 351–
360. doi:10.1002/job.642
Mullarkey, S., Jackson, P. R., Wall, T. D., Wilson, J. R., & Grey-Taylor, S. M. (1997). The impact of
technology characteristics on worker mental health. Journal of Organizational Behavior, 18,
471–489. doi:10.1002/(SICI)1099-1379(199709)18:5<471::AID-JOB810>3.0.CO;2-V
Ohly, S., Sonnentag, S., & Pluntke, F. (2006). Routinization, work characteristics and their
relationship with creative and proactive behaviours. Journal of Organizational Behavior, 27,
257–279. doi:10.1002/job.376
Parker, S. K., & Wall, T. D. (1998). Job and work design. London, UK: Sage.
Parker, S. K., Wall, T. D., & Cordery, J. L. (2001). Future work design research and practice:
Towards an elaborated model of work design. Journal of Occupational and Organizational
Psychology, 74, 413–440. doi:10.1348/096317901167460
Perrow, C. (1967). A framework for the comparative analysis of organizations. American
Sociological Review, 32, 194–208. doi:10.2307/2091811
Perrow, C. (1970). Organizational analysis: A sociological view. Belmont, CA: Wadsworth.
Podsakoff, P. M., Ahearne, M., & MacKenzie, S. B. (1997). Organizational citizenship behavior
and the quantity and quality of work group performance. Journal of Applied Psychology, 82,
262–270. doi:10.1037/0021-9010.82.2.262
Podsakoff, P. M., MacKenzie, S. B., Lee, J., & Podsakoff, N. P. (2003). Common method biases in
behavioral research: A critical review of the literature and recommended remedies. Journal
of Applied Psychology, 88, 879–903. doi:10.1037/0021-9010.88.5.879
Pollard, T. M. (2001). Changes in mental well-being, blood pressure and total cholesterol levels
during workplace reorganization: The impact of uncertainty. Work & Stress, 15, 14–28.
doi:10.1080/02678370110064609
Sawyer, J. E. (1992). Goal and process clarity: Specification of multiple constructs of role ambiguity
and a structural equation model of their antecedents and consequences. Journal of Applied
Psychology, 77, 130–142. doi:10.1037/0021-9010.77.2.130
Sitkin, S. B., Sutcliffe, K. M., & Schroeder, R. G. (1994). Distinguishing control from learning in total
quality management: A contingency perspective. Academy of Management Review, 19, 537–
564. doi:10.5465/AMR.1994.9412271813

Slocum, J. W., & Sims, H. P. (1980). A typology for integrating technology, organization and job
design. Human Relations, 33, 193–212. doi:10.1177/001872678003300304
Song, M., & Montoya-Weiss, M. M. (2001). The effect of perceived technological uncertainty on
Japanese new product development. Academy of Management Journal, 44, 61–80.
doi:10.2307/3069337
Tabachnick, B. G., & Fidell, L. S. (2007). Using multivariate statistics (5th international ed.). Boston, MA: Pearson/A&B.
Wall, T. D., Corbett, M. J., Martin, R., Clegg, C. W., & Jackson, P. R. (1990). Advanced manufacturing
technology, work design, and performance: A change study. Journal of Applied Psychology,
75, 691–697. doi:10.1037/0021-9010.75.6.691
Wall, T. D., Cordery, J. L., & Clegg, C. W. (2002). Empowerment, performance, and operational
uncertainty: A theoretical integration. Applied Psychology: An International Review, 51, 146–
169. doi:10.1111/1464-0597.00083
Wall, T. D., & Jackson, P. R. (1995). New manufacturing initiatives and shopfloor work design. In A.
Howard (Ed.), The changing nature of work (pp. 139–174). San Francisco, CA: Jossey-Bass.
Wall, T. D., Jackson, P. R., & Mullarkey, S. (1995). Further evidence on some new measures of job
control, cognitive demand and production responsibility. Journal of Organizational Behavior,
16, 431–455. doi:10.1002/job.4030160505
Warr, P. B., Cook, J., & Wall, T. D. (1979). Scales for the measurement of some work attitudes and
aspects of psychological well-being. Journal of Occupational Psychology, 52, 129–148.
doi:10.1111/j.2044-8325.1979.tb00448.x
Wright, B. M., & Cordery, J. L. (1999). Production uncertainty as a contextual moderator of employee
reactions to job design. Journal of Applied Psychology, 84, 456–463. doi:10.1037/0021-
9010.84.3.456
Wright, P. M., & Snell, S. A. (1998). Toward a unifying framework for exploring fit and flexibility in
strategic human resource management. Academy of Management Review, 23, 756–772.
doi:10.5465/AMR.1998.1255637
Wrzesniewski, A., & Dutton, J. E. (2001). Crafting a job: Revisioning employees as active crafters of
their work. Academy of Management Review, 26, 179–201. doi:10.5465/AMR.2001.4378011

Received 2 March 2012; revised version received 3 October 2012
