Validating Instruments in MIS Research

You might also like

Download as pdf or txt
Download as pdf or txt
You are on page 1of 24

Validating Instruments in MIS Research

Author(s): Detmar W. Straub


Source: MIS Quarterly , Jun., 1989, Vol. 13, No. 2 (Jun., 1989), pp. 147-169
Published by: Management Information Systems Research Center, University of
Minnesota

Stable URL: https://www.jstor.org/stable/248922

JSTOR is a not-for-profit service that helps scholars, researchers, and students discover, use, and build upon a wide
range of content in a trusted digital archive. We use information technology and tools to increase productivity and
facilitate new forms of scholarship. For more information about JSTOR, please contact support@jstor.org.

Your use of the JSTOR archive indicates your acceptance of the Terms & Conditions of Use, available at
https://about.jstor.org/terms

Management Information Systems Research Center, University of Minnesota is collaborating


with JSTOR to digitize, preserve and extend access to MIS Quarterly

This content downloaded from


136.233.110.66 on Sun, 24 Mar 2024 19:28:01 +00:00
All use subject to https://about.jstor.org/terms
Validating Research Instruments

Validating Keywords: MIS research methodology, empiri-


cal measurement, theory construc-
Instruments in MIS tion and development, content va-
lidity, reliability, construct validity,
Research1 internal validity

ACM Categories: H.O, J.O

By: Detmar W. Straub


Information and Decision Sciences
Introduction
Department
Curtis L. Carlson School of Instrument validation has been inadequately ad-
dressed in MIS research. Only a few research-
Management ers have devoted serious attention to measure-
University of Minnesota ment issues over the last decade (e.g., Bailey
Minneapolis, Minnesota 55455 and Pearson, 1983; Goodhue, 1988; Ives, et al.,
1983; Ricketts and Jenkins, 1985), and while the
desirability of verifying findings through internal
validity checks has been argued by Jarvenpaa,
et al. (1984), the primary and prior value of in-
strument validation has yet to be widely
Abstract recognized.

Calls for new directions in MIS research bring There are undoubtedly a number of reasons for
with them a call for renewed methodological this lack of attention to instrumentation. Because
of rapid changes in technology, MIS research-
rigor. This article offers an operating paradigm
ers often feel that research issues must be han-
for renewal along dimensions previously un-
stressed. The basic contention is that confirma- dled with dispatch (Hamilton and Ives, 1982b).
tory empirical findings will be strengthened Then, too, exploratory, qualitative, and non-
when instrument validation precedes both in- empirical methodologies that have dominated
ternal and statistical conclusion validity and that, MIS research (Farhoomand, 1987; Hamilton and
in many situations, MIS researchers need to vali- Ives, 1982b) may not require the same level of
date their research instruments. This contention validation.2

is supported by a survey of instrumentation as


Exactly why, then, should MIS researchers pay
reported in sample IS journals over the last sev-
closer attention to instrumentation? The ques-
eral years. tion is a fair one and deserves a full answer.
A demonstration exercise of instrument valida- In the first place, concerns about instrumenta-
tion follows as an illustration of some of the tion are intimately connected with concerns
basic principles of validation. The validated in- about rigor in MIS methodology in general (McFar-
strument was designed to gather data on the lan, 1984). McFarlan and other key researchers
impact of computer security administration on in the field (e.g., Hamilton and Ives, 1982a;
the incidence of computer abuse in the U.S.A. 1982b) believe there is a pressing need to define
meaningful MIS research traditions to guide and
shape the research effort, lest the field fragment
and never achieve its own considerable poten-
This work was carried out under the auspices of In- tial. The sense is that MIS researchers need to
ternational DPMA (Data Processing Management As-
concentrate their energies on profitable lines of
sociation). It was supported by grants from IBM,
IRMIS (Institute for Research on the Management
research - streams of opportunity - in both
of Information Systems, Indiana University Graduate
the near and the far term (Jenkins, 1983; 1985;
School of Business) and the Ball Corporation McFarlan, 1984). This should be carried out,
Foundation. whenever possible, in a systematic and program-
An earlier version of this article appeared as matic manner (Hunter, et al., 1983; Jenkins,
"Instrument Validation in the MIS Research Process,"
Proceedings of the Annual Administrative Sciences
Association of Canada Conference, F. Bergeron 2Cf. Campbell (1975), however, Benbasat, et al.
(ed.), Whistler, B.C., June 1986, pp. 103-118. (1987), also deal with this question.

MIS Quarterly/June 1989 147

This content downloaded from


136.233.110.66 on Sun, 24 Mar 2024 19:28:01 +00:00
All use subject to https://about.jstor.org/terms
Validating Research Instruments

1983, 1985; Mackenzie and House, 1979), nomena, as many methodologists have pointed
using, where appropriate (McFarlan, 1986), ad- out (Glaser and Strauss, 1967; Lincoln and
vanced statistical and scientific methods (King, Gupta, 1985). This article focuses on the meth-
1985). odologies and validation techniques most often
found on the confirmatory side of the research
Besides bringing more rigor in general to the cycle, as shown in Figure 1. The key point is
scientific endeavor, greater attention to instru- that confirmatory research calls for rigorous in-
mentation promotes cooperative research efforts strument validation as well as quantitative analy-
(Hunter, et al., 1983) in permitting confirmatory, sis to establish greater confidence in its findings.
follow-up research to utilize a tested instrument.
By allowing other researchers in the stream to
use this tested instrument across heterogenous
settings and times, greater attention to instru- Linking MIS Research and
mentation also supports triangulation of results Validation Processes
(Cook and Campbell, 1979). With validated in-
struments, researchers can measure the same In order to understand precisely what instrument
validation is and how it functions, it is placed
research constructs in the same way, granting
improved measurement of independent and de-
in the context of why researchers - MIS re-
searchers in particular - measure variables in
pendent variables and, in the long run, helping
the first. place and how attention to measure-
to relieve the confounding that plagues many
streams of MIS literature (cf. Ives and Olson, ment issues - the validation process - can
1984). undergird bases of evidence and inference. In
the scientific tradition, social science research-
Attention to instrumentation issues also brings ers attempt to understand real-world phenom-
greater clarity to the formulation and interpreta- ena through expressed relationships between re-
tion of research questions. In the process of vali- search constructs (Blalock, 1969). These
dating an instrument, the researcher is engaged, constructs are not in themselves directly observ-
in a very real sense, in a reality check. He or able, but are believed to be latent in the phe-
she finds out in relatively short order how well nomenon under investigation (Bagozzi, 1979).
conceptualization of problems and solutions In sociology, for example, casual constructs like
matches with actual experience of practitioners. "home environment" are thought to influence out-
And, in the final analysis, this constant compari- come constructs like "success in later life." Nei-
son of theory and practice in the process of vali- ther construct can be observed directly, but be-
dating instruments results in more "theoretically haviorally relevant measures can be operation-
meaningful" variables and variable relationships alized to represent or serve as surrogates for
(Bagozzi, 1980). these constructs (Campbell, 1960).

Finally, lack of validated measures in confirma- In MIS, research constructs in the user involve-
tory research raises the specter that no single ment and system success literature offer a case
finding in the study can be trusted. In many in point. In this research stream, user involve-
cases this uncertainty will prove to be inaccu- ment is believed to be a "necessary condition"
rate, but, in the absence of measurement vali- (Ives and Olson, 1984, p. 586) for system suc-
dation, it lingers. cess. As with other constructs in the behavioral
sciences (Campbell, 1960), system success, or
This call for renewed scientific rigor should not "the extent to which users believe the informa-
be interpreted in any way as a preference of tion system available to them meets their infor-
quantitative to qualitative techniques, or confir- mation requirements" (Ives and Olson, 1984, p.
matory3 to exploratory research. In what has 586), is unobservable and unmeasurable; it is,
been termed the "active realist" view of science
in short, a conceptualization or a mental con-
(Cook and Campbell, 1979), each has a place struct. In order to come to some understanding
in uncovering the underlying meaning of phe- about system effectiveness, therefore, research-
ers in this area have constructed a surrogate
3 Strictly speaking, and from the point of view of Strong
set of behaviorally relevant measures, termed
Inference, no scientific explanation is ever confirmed
(Blalock, 1969; Popper, 1968); incorrect explanations
User Information Satisfaction (UIS), for the
are simply eliminated from consideration. The termi- system success outcome construct (Bailey and
nology is adopted here for the sake of convenience, Pearson, 1983; Ives, et al., 1983; Ricketts and
however. Jenkins, 1985).

148 MIS Quarterly/June 1989

This content downloaded from


136.233.110.66 on Sun, 24 Mar 2024 19:28:01 +00:00
All use subject to https://about.jstor.org/terms
Validating Research Instruments

* Quantitative,
empirical
techniques
* Theory-testing

Exploratory Confirmatory
Research Research

* Qualitative,
non-empirical
techniques ,i Conceptual
Refinements
* Theory-building N

* Grounded theory

Figure 1. The Scientific Research Cycle


(Based on Mackenzie and House, 1979 and McGtath, 1979)

The place of theory in confirmatory able for application to MIS. This point has been
effectively made by Dickson, et al. (1980). A spe-
research
cific need for strong reference discipline theory
Given this relationship between unobserved and in support of the user involvement thesis has
observed variables, what is the role of theory been put forth by Ives and Olson (1984).
in the process? Blalock (1969), Bagozzi (1980),
and others argue that theories are invaluable in
confirmatory research because they pre-specify
the makeup and structure of the constructs and Difficulties in accurately measuring
seed the ground for researchers who wish to constructs
conduct further studies in the theoretical stream. Measurement of research constructs is neither
This groundwork is especially important in sup- simple nor straightforward. Instrumentation that
porting programs of research (Jenkins, 1985). the UIS researcher devises to translate the UIS
By confining constructs and measures to a construct (as perceived by user respondents)
smaller a priori domain (Churchill, 1979; Ha- into data, for instance, may be significantly af-
nushek and Jackson, 1977) and thereby reduc- fected by choice of method itself (as in inter-
ing the threat of misspecification, use of theory views versus paper-and-pencil instruments) and
also greatly strengthens findings. Moreover, se- components of the chosen method (as in item
lection of an initial set of items for a draft instru- selection and item phrasing (Ives, et al., 1983)).
ment from the theoretical and even non-theo- Bias toward outcomes the researcher is expect-
retical literature simplifies instrument develop- ing - in this case a positive relationship be-
ment. When fully validated instruments are tween user involvement and system success -
available, replication of a study in heterogenous can subtly or overtly infuse the instrument. Inac-
curacies in measurement can also be reflected
settings is likewise facilitated (Cook and
Campbell, 1979). in the instrument when items are ambiguously
phrased, length of the instrument taxes respon-
For MIS research, there is much to be gained dents' concentration (Ives, et al., 1983), or moti-
by basing instruments on reference discipline vation for answering carefully is not induced.
theories. Constructs, relationships between con- Knowledge about the process of system devel-
structs, and operations are often already well- opment, therefore, is only bought with assump-
specified in the reference disciplines and avail- tions about the "goodness" of the technique of

MIS Quarterly/June 1989 149

This content downloaded from


136.233.110.66 on Sun, 24 Mar 2024 19:28:01 +00:00
All use subject to https://about.jstor.org/terms
Validating Research Instruments

measurement (Cook and Campbell, 1979; Instrument Validation


Coombs, 1964).
Content Are instrument measures
In a perfectly valid UIS instrument, data meas- Validity drawn from all possible
urements completely and accurately reflect the measures of the properties
unobservable research construct, system suc- under investigation?
cess. Given real-world limitations, however, Construct Do measures show stability
some inaccurate measurements inevitably ob- Validity across methodologies? That is,
are the data a reflection of
trude on the translation process. The primary true scores or artifacts of the
question for MIS researchers is the extent to kind of instrument chosen?
which these translation difficulties affect findings;
Reliability Do measures show stability
in short, we need to have a sense for how good across the units of observation?
our instruments really are. Fortunately, instru- That is, could measurement
mentation techniques are available that allow error be so high as to discredit
MIS researchers to altemately construct and vali- the findings?
date a draft instrument that will ultimately result
in an acceptable research instrument.

Internal Are there untested rival hypo-


Instrument validation
Validity theses for the observed effects?
In the MIS research process, instrument valida-
tion should precede other core empirical validi-
ties (Cook and Campbell, 1979),4 which are set
forth according to the kinds of questions they Statistical Do the variables demonstrate
answer in Figure 2. Researchers and those who Conclusion relationships not explainable by
will utilize confirmatory research findings first Validity chance or some other standard
need to demonstrate that developed instruments of comparison?
are measuring what they are supposed to be
Figure 2. Questions Answered
measuring. Most univariate and multivariate sta-
by the Validities
tistical tests, including those commonly used to
test internal validity and statistical conclusion va-
lidity, are based on the assumption that error be more expressive of the true mean than one
terms between observations are uncorrelated that has drawn idiosyncratic questions from the
(Hair, et al., 1979; Hanushek and Jackson, 1977; set of all possible items. Bias generated by an
Lindman, 1974; Reichardt, 1979). If subjects unrepresentative instrument will carry over into
answer in some way that is more a function of uncertainty of results. A content-valid instrument
the instrument than the true score, this assump- is difficult to create and perhaps even more dif-
tion is violated. Since the applicable statistical ficult to verify because the universe of possible
tests are generally not robust to violations of this content is virtually infinite. Cronbach (1971) sug-
assumption (Lindman, 1974), parameter esti- gests a review process whereby experts in the
mates are likely to be unstable. Findings, in fact, field familiar with the content universe evaluate
will have little credibility at all. versions of the instrument again and again until
a form of consensus is reached.
An instrument can be deemed invalid on
Construct validity is in essence an operational
grounds of the content of the measurement
issue. It asks whether the measures chosen are
items. An instrument valid in content is one that
true constructs describing the event or merely
has drawn representative questions from a uni-
versal pool (Cronbach, 1971; Kerlinger, 1964).artifacts of the methodology itself (Campbell and
Fiske, 1959; Cronbach, 1971). If constructs are
With representative content, the instrument will
valid in this sense, one can expect relatively high
correlations between measures of the same con-
4 External validity, which deals with persons, settings,
struct using different methods and low correla-
and times to which findings can be generalized, does
tions between measures of constructs that are
precede instrument validation in planning a research
project. For the sake of brevity and because this va-expected to differ (Campbell and Fiske, 1959).
The construct validity of an instrument can be
lidity can easily be discussed separately, external va-
lidity is not discussed here. assessed through multitrait-multimethod

150 MIS Quarterly/June 1989

This content downloaded from


136.233.110.66 on Sun, 24 Mar 2024 19:28:01 +00:00
All use subject to https://about.jstor.org/terms
Validating Research Instruments

(MTMM) techniques (Campbell and Fiske, 1959)


Internal validity
or techniques such as confirmatory or principal
components factor analysis (Long, 1983; Nun-
nally, 1967).5 Measures (termed "traits" in this Internal validity raises the question of whether
style of analysis) are said to demonstrate con- the observed effects could have been caused
vergent validity when the correlation of the same by or correlated with a set of unhypothesized
trait and varying methods is significantly differ- and/or unmeasured variables. In short, are there
ent from zero and converges enough to warrant viable, rival explanations for the findings other
"further examination" (Campbell and Fiske, than the single explanation offered by the re-
1959, p. 82). Evidence that it is also higher than searcher's hypothesis (or hypotheses)? For gen-
correlations of that trait and different traits using eral social science research, the subject has
both the same and different methods shows that
been treated at length by psychometricians Cook
the measure has discriminant validity.6 and Campbell (1979) who argue that causation
requires ruling out rival hypotheses as well as
Since this systematic bias, or common method
finding associative variables. In MIS, the critical
variance, is by definition unobservable, as are
importance of internal validity has been argued
the constructs themselves (Nunnally, 1967),
by Jarvenpaa, et al. (1984).
good examples of what can now be termed con-
struct validity are difficult to come by. Neverthe-
less, patterns of correlated errors can occur. For It is crucial to recognize that internal validity in
example, take the case where large numbers no way establishes that the researcher is work-
ing with variables that truly reflect the phenome-
of students unthinkingly check off entire columns
in a course evaluation. If these same students non under investigation. Sample groups, too, can
were probed later for comparable information in be easily misdefined if the instrumentation is in-
a circumstance where thoughtfulness and fair- valid. This point is made clearer through a hy-
ness prevailed, there would likely be little agree-pothetical case.
ment across methods, raising doubt about the
validity of either instrument. Suppose, for the sake of illustration, that a re-
On the other hand, scores can even be distortedsearcher wished to detect the effect of system
when there is no systematic response within theresponse time on user satisfaction in a record-
instrument. A single item can be ambiguous or keeping application. Suppose also that the re-
misunderstood by individuals so that responses searcher did not include in the methodological
on this trait differ among alternative measuresdesign a validation of the instrument by which
of the same trait. The subject has answered in user satisfaction was being measured. Such an
a way that is a function of his or her misunder-instrument could be utilized in field or laboratory
standing rather than a variation of the true score. experimentation, quasi-experimentation, or
Reliability, hence, is essentially an evaluation of survey research. Suppose, moreover, that had
measurement accuracy - for example, the the researcher actually tested the instrument, se-
extent to which the respondent can answer the rious flaws in the representativeness of meas-
same or approximately the same questions the ures (content validity), the meaningfulness of con-
same way each time (Cronbach, 1951). High cor-structs as measured (construct validity), or
relations between alternative measures or large stability of measures (reliability) would have
Cronbach alphas are usually signs that the meas-emerged. At least one of these flaws would
ures are reliable. almost certainly occur, for instance, if the user
satisfaction measure was based entirely on how
the user felt about EDP applications as a whole.7
5 It should be noted that factor analysis has been used In this scenario, extensive experimental and sta-
to test the factorial composition of a data set (Nun- tistical controls placed on these meaningless and
nally, 1967) even when maximally different data col-
lection methods are not used. Most of the validation poorly measured constructs could seemingly rule
of User Information Satisfaction instruments has fol- out all significant rival hypotheses. Internal va-
lowed this procedural line. lidity would thus be assured beyond a reason-
able doubt. Findings, however, would be moot
6 Concurrent and predictive validity (Cronbach and because measurement was suspect.
Meehl, 1955) are generally considered to be sub-
sumed in construct validity (Mitchell, 1985) and, for
this reason, will not be discussed here. 7 Cf. Ricketts and Jenkins (1985) on this point.

MIS Quarterly/June 1989 151

This content downloaded from


136.233.110.66 on Sun, 24 Mar 2024 19:28:01 +00:00
All use subject to https://about.jstor.org/terms
Validating Research Instruments

Statistical conclusion validity Statistical assumptions made b


of choice (e.g., regression, M
Statistical conclusion validity is an assessment
analysis, LISREL) have a beari
of the mathematical relationships between vari-
bility of the analysis, but co
ables and the likelihood that this mathematical
these statistics say nothing a
assessment provides a correct picture of the true
of rival hypotheses per se or
covariation (Cook and Campbell, 1979). Incor-
ness of constructs in the firs
rect conclusions concerning covariation (Type I
firmatory MIS research in th
and Type II error) are violations of statistical con-
only statistical conclusion valid
clusion validity, and factors such as the sample sults, a situation that can oft
size and reliability of measures can affect this
founded results because hig
validity.
cause and effect is only one o
Another factor used in determining the statisti- establishing causality (Cook
cal conclusion validity of a study is statistical 1979).
power. Power is the probability that the null hy-
pothesis has been correctly rejected. Proper re- In summary, it is possible to show the overall
jection is closely associated with sample size so results of violating order or position in the vali-
that tests with larger sample sizes are less likely dation process. Figure 3 highlights the dangers.
to reject the null hypothesis improperly (Baroudi Evaluating statistical conclusion validity alone es-
and Orlikowski, 1989; Cohen, 1969; 1977; tablishes that variables covary or have some
Kraemer and Thiemann, 1987). It is also statis- mathematical association. Without prior valida-
tically related to alpha, the standard error, or re- tion, it is not possible to rule out the possibility
liability of the sample results, and the effect size, that statistical associations are caused by mod-
or degree to which the phenomenon has practi- erator variables (Sharma, et al., 1981) or mis-
cal significance (Cohen, 1969). Non-significant specifications in the causal model (Blalock,
results from tests with low power, i.e., a prob-1969). Preceding statistical conclusion validity
ability of less than 80 percent that the null hy-
with internal validation procedures strengthens
pothesis has been correctly rejected (Cohen, findings by allowing the researcher to control ef-
1969), are inconclusive and do not indicate that fects from moderator variables and rival hypothe-
the effect is truly not present. ses. Even these tests do not establish, however,

Validity Touchstones Outcomes

Mathematical relationships between


Statistical ' the hypothesized study variables do
Conclusion exist; relationships between some
untested variables may also exist;
Validity variables may or may not be pre-
Established, sumed research concepts (con-
structs).

Mathematical relationships are ex-


Internal 1 'Statistical
plained by the hypothesized study
InternVald Conclusion
Validity Validity variables and only these variables;
variables may or may not be pre-
Established Established
Established sumed research concepts (con-
structs).

The hypothesized study variables do


Instrument Internal ' Staical represent the constructs they are
Validity L Validity > Validitys meant to represent; mathematical
relationships are not a function of
Established Established Estali s the instrumentation and are ex-
<~ ~~~~ ~~~~~, ,.L Established,,
plained by the hypothesized vari-
ables and only those variables.

Figure 3. Outcomes From Omitted Validities

152 MIS Quarterly/June 1989

This content downloaded from


136.233.110.66 on Sun, 24 Mar 2024 19:28:01 +00:00
All use subject to https://about.jstor.org/terms
Validating Research Instruments

that measures chosen for the study are directly tion standpoint.8 In the second place problems
tied to the molar mental constructs that answer arise in the case where researchers are adapt-
the research questions (Cook and Campbell, ing instruments that have been previously vali-
1979). To accomplish this final step, the instru- dated. In almost all cases, researchers alter
ment itself must be validated. these instruments in significant ways before ap-
plying them to the IS environment. However well-
validated an instrument may have been in its
The Need for Instrument original form, excising selected items from a vali-
dated instrument does not result in a validated
Validation in MIS derivative instrument. In fact, the more the
format, order, wording, and procedural setting
of the original instrument is changed, the greater
Before elaborating a demonstration exercise ofthe likelihood that the derived instrument will lack
instrument validation, it is important to show, as
validated qualities of the original instrument.
contended, that instruments in the MIS litera-
ture are at present, insufficiently validated. ToThe remainder of the descriptive statistics in
examine this contention, over three years of pub-Table 1 paint a similarly disturbing picture. Reli-
lished MIS empirical research (January 1985- ability is the most frequently assessed validity,
August 1988) have been surveyed. Surveyed jour-but even in this case, some 83 percent of the
nals include MIS Quarterly, Communications of studies do not test this minimal level of valida-
the ACM, and Information & Management. To tion. Reliability is infrequently coupled with tests
qualify for the sample, a study had to employof construct validity (less than 16 percent of the
either: (a) correlational or statistical manipula-studies) and assessment of content validity are
tion of variables, or (b) some form of data analy- almost unheard of (5 percent of the studies) in
sis (even if the data analysis was simply descrip-the literature.
tive statistics). Studies utilizing archival data
(e.g., citation analysis) or unobtrusive measures Although the nature and extent of validation
(e.g., system accounting measures) were omit- varied somewhat from journal to journal, a more
ted from the sample unless it was clear from revealing finding of this background study
the methodological description that key variableshowed that experimental researchers were
relationships being studied could have been sub-much less likely to validate their instruments than
mitted to validation procedures. non-experimental researchers. Laboratory and
field experiments are often pretested and piloted
to ensure that the task manipulates the subjects
as intended (manipulation checks), but the in-
Background study results struments used to gather data before and after
The survey overwhelmingly supports the con-the treatment are seldom validated. Instruments
tention that instrumentation issues are generally developed for case studies are also unlikely to
ignored in MIS research. With 117 studies in thebe validated.
sample, the survey data indicates that 62 per-
cent of the studies lacked even a single form By comparison with reference disciplines like the
of instrument validation. Table 1 summarizes administrative sciences, MIS researchers were
other key findings of the survey. less inclined to validate their instruments, accord-
ing to the study data. These ratios, moreover,
As percentages in Table 1 indicate, MIS research-are generally several orders of magnitude in dif-
ers rely most frequently (17 percent of the stud-ference. More than 70 percent of the research-
ies) on previously utilized instruments as a pri-ers in the administrative sciences report reliabil-
mary means of, presumably, validating their ity statistics, compared with 17 percent in MIS.
instruments. Almost without exception, however,
the employment of previously utilized instru-
ments in MIS is problematic from a
methodological standpoint. In the first place, 8 A weak argument can possibly be made that some
degree of nomological validity can be gained from
many previously utilized instruments were them-
employing previously utilized instruments. This is a
selves apparently never validated. The only con- very weak argument, however, because nomologi-
ceivable gain from this procedure, therefore, is cal validity usually occurs only in a long and well-
to save the time of developing a wholly new in- established stream of research, a situation that does
strument. There is no advantage from a valida- not apply in this case.

MIS Quarterly/June 1989 153

This content downloaded from


136.233.110.66 on Sun, 24 Mar 2024 19:28:01 +00:00
All use subject to https://about.jstor.org/terms
Validating Research Instruments

Table 1. Survey of Instrument Validation Use in MIS Literature*


Instrumentation
Categories Response Percentage
1. Pretest 15 Yes 102 No 13% Yes 87% No
2. Pilot 7 Yes 110 No 6% Yes 94% No
3. Pretest or pilot 22 Yes 95 No 19% Yes 81% No
4. Previously utilized 20 Yes 97 No 17% Yes 83% No
instrument
5. Content validity 5 Yes 112 No 4% Yes 96% No
6. Construct validity 16 Yes 101 No 14% Yes 86% No
7. Reliability 20 Yes 97 No 17% Yes 83% No
* A total of 117 cases were surveyed from three journals (43 from
tions of the ACM; 46 from Information & Management).

termeasures
They also validate constructs twice assuch as strict as
often sanctions against
misuse (these programs managed by a security
MIS researchers (Mitchell, 1985).
staff), or (b) prevent abusers through counter-
measures such as computer security software.
A Demonstration Exercise of The overall project set out to estimate the
damage being sustained by information systems
Instrument Validation in MIS from computer abuse and to ascertain which con-
Conceptual appreciation of the role of instrument trol mechanisms, if any, have been successful
validation in the research process is useful. But in containing losses.
the role of instrument validation may be best un-
derstood by seeing how validation can be ap- As one of the first empirical studies in the field,
plied to an actual MIS research problem - vali- the project developed testable propositions from
dation of an instrument to measure computer the baseline of the criminological theory of Gen-
abuse. This process exemplifies the variety of eral Deterrence (Nagin, 1978). Causal linkages
ways instruments might be validated, a process were postulated between exogenous variables
that is especially appropriate to confirmatory em- such as deterrent and preventive control meas-
pirical research. ures, and endogenous variables such as loss
and severity of impact. Use of theory in this
The research instrument in question was de- manner strengthens instrument development by
signed to measure computer abuse through a permitting the researcher to use prespecified and
polling of abuse victims in industry, government, identified constructs.
and education.9 Computer abuse was operation-
ally defined as misuse of information system Early in the research process, it was determined
assets such as programs, data, hardware, and that the most effective and efficient means of
computer service and was restricted to abuse achieving a statistical sample and gathering data
perpetrated by individuals against organizations was a victimization questionnaire. This type of
(Kling, 1980). There is a growing body of evi- research instrument has been used extensively
dence that the problem of computer abuse is in criminology (e.g., the National Crime Survey)10
serious (ABA, 1984) and that managers are con- and in prior computer abuse studies (AICPA,
cerned about IS security and control (Brancheau1984; Colton, et al., 1982; Kusserow, 1983;
and Wetherbe, 1987; Canning, 1986; Dickson, Parker, 1981) to explore anti-social acts. Para-
et al., 1984; Sprague and McNurlin, 1986). Or- digms for exploring anti-social behaviors are well
ganizations respond to this risk from abuse by established in criminological studies and, be-
attempting to: (a) deter abusers through coun- cause computer abuse appears to be a proto-
typical white collar crime, research techniques
from this reference discipline were appropriate
9 This research was supported by grants from IRMIS in this study.
(Institute for Research on the Management of Infor-
mation Systems) and the Ball Corporation Founda-
tion. It was accomplished under the auspices of the
International DPMA (Data Processing Management
Association). '1 Cf. Skogan (1981).

154 MIS Quarterly/June 1989

This content downloaded from


136.233.110.66 on Sun, 24 Mar 2024 19:28:01 +00:00
All use subject to https://about.jstor.org/terms
Validating Research Instruments

An obvious reference discipline for activities thatIn the literature, observable commitment of an
involve a violation of social codes is criminol- enforcement group, such as the police in pun-
ogy, which provides a ready behavioral expla- ishing offenders, typically serves as a surrogate
nation for why deterrents may be effective con- for perception of risk or certainty of sanction
trols. General Deterrence theory has well- (Gibbs, 1975). This assumes that potential of-
established research constructs and causal re- fenders perceive risk to be in direct proportion
lationships. There is a long-standing tradition of to efforts to monitor and uncover illicit behav-
research in this area and concurrence by panels iors. In other words, people believe that punish-
of experts on the explanatory power of the theory ment will be more certain when enforcement
(Blumstein, et al., 1978; Cook, 1982). Constructs agents explicitly or implicitly "police" or make
and measures have been developed to test the their presence felt to potential offenders. In in-
theory since the early 1960s, and its application formation systems, this is equivalent to security
to the computer security environment, therefore, administrators making their presence felt through
seemed timely. monitoring, enforcing, and disturbing information
about organizational policies regarding system
The thrust of most of the theoretic deterrence usage, or what this article refers to as deterrent
literature has been on "disincentives" or sanc- countermeasures. When punishment is severe,
tions against committing a deviant act. Disincen- it is assumed that offenders, especially less mo-
tives are traditionally divided into two related, tivated potential offenders, are dissuaded from
but independent, conceptual components: (1) cer- anti-social acts (Straub and Widom, 1984). Table
tainty of sanction and (2) severity of sanction 2 presents the pertinent connections between
(Blumstein, et al., 1978). The theory holds that the conceptual terminology used in this article,
under conditions in which risk of being punished the constructs most frequently cited in General
is high and penalties for violation of norms are Deterrence theory, and the actual items as meas-
severe, potential offenders will refrain from illicit ured in the final instrument, which appears in
behaviors. the Appendix.

Table 2. Concepts, Constructs, and Measures


Research Survey
Concepts Construct Item Measure Description
25 - Number of incidents

Abuse Damage 3839 - Actual dollar loss


%Abuse Damage 38 - Opportunity dollar loss
37 - Subjective seriousness index
10 - Full-time security staff
11 - Part-time security staff
12 - Total security hours/week
Disincentives: 14b - Data security hours/week
Certainty 15 - Total security staff salaries
Deterrents 22 - Subjective deterrent effect
13 minus 35 & - Longevity of securit
13 minus 28 minus 36 inception to inci
Disincentives: 18 - Information about proper use
Severity 19 - Most severe penalty for abuse
22 - Subjective deterrent effect
Preventives Preventives 16 - Use of software access control
17 - Use of specialized software
30 - Privileged status of offender
29 - Amount of collusion
Rival Environmental- 32 - Motivation of offender
Explanations Motivational 31 - Employee/non-employee status
Factors 24 - Tightness of security
21 - Visibility of security
28 minus 35 & - Duration of abuse
36

MIS Quarterly/June 1989 155

This content downloaded from


136.233.110.66 on Sun, 24 Mar 2024 19:28:01 +00:00
All use subject to https://about.jstor.org/terms
Validating Research Instruments

Evolving directly from prior instruments, a draft The in-depth interviews offered insight into the
instrument was first constructed to reflect study functioning of the entire spectrum of security con-
constructs. Next, instrument validation took place trols and what respondents themselves indicated
in three well-defined operations. The entire re- were important elements in their deterrent force.
search process was conducted in four phases, Interviews were designed to move progressively
as outlined in Table 3. from an open-ended general discussion format,
to a semi-structured format, and finally to a
highly structured item-by-item examination of the
draft instrument. Information gathered earlier in
Phase I: Pretest
the interview did not, thereby, bias responses
In the pretest, the draft instrument was subjected later when the draft instrument was evaluated.
In the beginning of the interview, participants
to a qualitative testing of all validities. This phase
was designed to facilitate revision, leading towere
an encouraged to be discursive. They spoke
instrument that could be formally validated. on all aspects of their computer security, includ-
ing personnel, guidelines and policies, software
Personal interviews were conducted with 37 par-controls, reports, etc. They were prompted by
ticipants in order to locate and correct weak- a simple request for information. Concepts in-
nesses in the questionnaire instrument. The in-
dependently introduced by more than three re-
terview schedule was structured so that three
spondents were noted as was the precise lan-
full waves of interviews could be conducted.
guage in which these constructs were perceived
Each version of the instrument reflected improve- by the participants (content validity and reliabil-
ments suggested by participants up to that point ity). In the second semistructured segment, ques-
and was continuously re-edited for the next ver- tions from the interviewer directed attention to
sion. The selection of interviewees was designed key matters of security raised by other partici-
to get maximum feedback from various exper- pants but not raised in the current session. Clari-
tises, organizational roles, and geographical re- fication of constructs and the means of opera-
gions. Initial interviews included divergent areas tionalizing selected constructs were also
of expertise: academic experts in methodology undertaken in this segment (construct validity
and related criminological areas, academic ex- and reliability).
perts in information systems, practitioner experts
in information systems and auditing, and law en- To eliminate ambiguities and further test validi-
forcement officials at the state and federal levels. ties, participants in the third segment of the in-
Participants came from a variety of organizations terview were asked to evaluate a version of the
in the public and private sectors including bank- questionnaire item-by-item. Content validity was
ing, insurance, manufacturing, trade, utilities, stressed in this segment by encouraging partici-
transportation, education, and health services. pants to single out pointless questions and sug-
A range of key informants were sought includ- gest new areas for inquiry. Most participants
ing system managers, operations managers, chose to dialogue in a running commentary
data security officers, database administrators, format as they reviewed each question. This fa-
and internal auditors. cilitated preliminary testing of the other validi-

Table 3. Phases of the Computer Abuse Measurement Project


Validation Content Construct
Phase Name Tests Performed Validity Validity Reliability
I Pretest Qualitative X X X

II Technical Cronbach alphas X


Validation MTMM Analysis X X
Ill Pilot Test Cronbach alphas X
Factor Analysis X
Qualitative X
IV Full-Scale
Victimization
Survey

156 MIS Quarterly/June 1989

This content downloaded from


136.233.110.66 on Sun, 24 Mar 2024 19:28:01 +00:00
All use subject to https://about.jstor.org/terms
Validating Research Instruments

ties. Because misunderstanding of questions different," in accordance with the Campbell and
would contribute to measurement error in the Fiske criteria. Triangulation by dissimilar meth-
instrument, for instance, particular attention was ods is designed to isolate common method vari-
paid to possible discrepancies or variations in ance and assure that the Campbell and Fiske
answers (reliability). assumption of independence of methods is not
violated. A personal interview was conducted
By the final version, the draft instrument had un-
with one participant while a pencil-and-paper in-
dergone a dramatic metamorphosis. It had col-
strument (the pretested questionnaire) was given
lected data in every class of the relevant vari-
to an equally knowledgeable participant in the
ables. To provide sufficient data points to assess
same organization. During the interactive inter-
reliability, several measures of each significant
viewing process, the researcher verbally pre-
independent variable as well as each depend-
ent variable were included.
sented the questions and recorded responses
in a 1-2 hour timeframe. A limited amount of
consultation was permitted, but, in general, re-
spondents were encouraged to keep their own
Phase I1: Technical validation counsel. Questionnaire respondents, on the
In keeping with the project plan, technical in- other hand, had no personal contact with the
strument validation occurred during the fall of researcher and responded entirely on their own.
1984 and the winter of 1985 (Phase II). Its pur- This instrument was completed at leisure and
pose was to validate construct validity and reli- returned by mail to the researcher. In all in-
ability. To triangulate on the postulated con- stances, the need for independence of answers
structs (construct validity), extremely dissimilar from the two types of respondents was stressed.
methods (Campbell and Fiske, 1959) were util-
If measures vary little from the pencil-and-paper
ized. The purpose in this case is similar to the
instrument to the personal interview, they can
purpose of triangulation in research in general
be said to be independent of the methods used
(Cook and Campbell, 1979). By bringing very
to gather the data and to demonstrate high con-
different data-gathering methods to bear on a
struct validity. Conversely, high method variance
phenomenon of interest, it is possible, by com- indicates that measures are more a function of
paring results, to determine the extent to which
the instrumentation than of underlying con-
instrumentation affects the findings, i.e., how
structs. As an analytical technique, Campbell
robust the findings truly are.
and Fiske's MTMM allows patterns of selected,
Specifically, personal interviews were conducted individual measures to be compared through a
with 44 key security informants. The target popu- correlation matrix of measures by methods, as
lation was primarily information system manag- shown in Figure 4.11 Analysis of the MTMM
ers, computer security specialists, and internal matrix in Figure 4 shows generally low method
auditors. Their responses were correlated with variance and high convergent/discriminant va-
questionnaire responses made independently by lidity. This can be seen most directly by com-
other members of the organization who had paring the values in the validity diagonal - the
equal access to security information. The instru- homotrait-heteromethod diagonal encircled in the
ment also included equivalent, "maximally simi- lower left matrix partition or block - with values
lar" measures (Campbell, 1960, p. 550) to gauge in the same row or column of each trait. Evi-
the extent of the random error (reliability). dence in favor of what is termed "convergent
validity" is a relatively high, statistically signifi-
In Phases II and III the instrument was quantita- cant correlation in the validity diagonal. Evidence
tively validated along several methodological di- in favor of "discriminant validity" occurs if that
mensions. These dimensions and analytical tech- correlation is higher than other values in the
niques are summarized in Table 3. same row or column, i.e., r(i,i) > r(i,j) and r(i,i)
> r(j,i) where i is not equal to j. The total per-
sonnel hours trait (item 11), for example, has
Technical Validation of Construct Validity a .65 correlation significant at the .05 level and
Tests of construct validity are generally intended
to determine if measures across subjects are The matrix in Figure 4 is only a partial matrix of
similar across methods of measuring those vari- all the correlations evaluated. These matrix ele-
ables. In the Computer Abuse Measurement Pro- ments were chosen simply for purposes of
ject, methods were designed to be "maximally demonstration.

MIS Quarterly/June 1989 157

This content downloaded from


136.233.110.66 on Sun, 24 Mar 2024 19:28:01 +00:00
All use subject to https://about.jstor.org/terms
gm ~~~~~~~~~~Interview Pencil-and-Pa
2 8 9 10 11 14 19 21 27 29 34 2 8 9 10 11

03
C - 86 -'^Legend
C40 '2... Item 2, Years experi
I,,< 9 -.01 -62 93 8... Item 8, EDP budget per ye (c)~~~

9... Item 9, S
11 -.07 -.55 9* 10... Item 10, Staff members working
1**. .1 11-.07 -.55 .69* .2 .8711... Item 11, Total personnel h
f4D * IS^"14... Item 14, Total expenditures per
0o 4 14 -.01-.62 .69* .22 .84 81 19... Item 19, Opinion-Does your computer se
19n -.13 -.05 r .12 -.06 n0 121... Item 21, Opinion-Is your computer
19 - .13d- .05 - .12 - .06 - .09 - .13 -- 27... Item 27, Position of per
21 -.21 .15 -.04 -.10 -.02 -.02 .35s- 29... Item 29, Motivation of perpe
34... Item 34, Sub
27 -.41 -.09 .06 .07 .22 .16 -.07 -.52 -- *Positive significance at .0
29 -.15 -.26 .21 .11 -.05 .20 .22 .67* -.61 --
CD~~~~~3

34 -.30 -.17 .28 .04 .42* .35 -.05 -.26 .73* -.38 --

2 .2 07 .00 -.14 -.04 -.12 .18 -.16 -.24 .00-55 --

8 -.10 .80 .58 -.17 -.45 -.47 .18 .22 .00 -.18 -.23 .19 82

9 -.20 -.54 .70 .29* .60* .60* -.11 -.11 .19 .14 .20 -.04 -.40 94

10 -.20 -.24 .0 52* .18 .08 -.19 -.09 .34 -.07 .34 -.26 -.31 -.01 94

11 -.18 -.69 57* .41 .65* .62* -.03 -.13 .14 .23 .04 .03 -.47 .67* .35 86

14 .05 -.57 .34 .36* .39 .63* 14 -.16 .00 .27 .25 -.11 -.55 .56* .08 .5

19 .14 -.17 .14 -.11 .06 .03 .1 .12 .18 -.35 .39 .01 -.12 -.14 .03 -.02

21 -.15 .02 .00 -.17 -.10 -.13 .08 25 .62 .18 -.36 -.13 -.04 -.23 -.05 -.16

27 -.07 -.14 .45 .19 .46* .30 .17 -.08 74* 4 .41 -.31 -.48 .21 .30 .12

29 -.35 -.10 .33 -.15 -.09 .17 .41 .73* .06 77* .12 .13 -.12 .16 -.23 .06

34 -.47 -.03 .42 -.35 .16 .31 .36 .17 .60* -.05 -.29 -.32 .18 -.04 -.14

2 8 9 10 11 14 19 21 27 29 34 2 8 9 10 11 14

Figure 4. MTMM Matrix

This content downloaded from


136.233.110.66 on Sun, 24 Mar 2024 19:28:01 +00:00
All use subject to https://about.jstor.org/terms
Validating Research Instruments

is greater than other entries in its row and the interview-based monomethod correlation be-
column. tween items 11 and 14 (r=.84) is significantly
elevated over the parallel heteromethod corre-
In the MTMM test, the test instrument failed to lation between items 11 and 14 (r=.39). The
achieve convergent validity on three out of the elevation of .84 over .39 is an index of the
11 items (items 2, 19 and 21) as evidenced by degree of common method variance, which ap-
low, non-significant correlations in the validity di- pears to be substantial. Basically, .39 is the cor-
agonal (r=.25, .21, .25 respectively). Trait 2, as relation between items 11 and 14 with the
a simple indicator of the number of years of ex- common method variance due to interviewing
perience of the respondent, however, should dem- removed. Similarly, the pencil-and-paper-based
onstrate only random associations, overall, with monomethod correlation between items 9 and
both itself and other traits in the homomethod 11, r=.67, should be compared against the cor-
or heteromethod blocks. That is, there is no par- responding heteromethod correlation of .57. Ex-
ticular reason why two participants from the amining other high correlations in the
same organization should have had the same monomethod triangles and comparing them to
years of experience in information systems, an their heteromethod counterparts suggest that
interpretation borne out by the non-significant va- items 9, 11, and 14 do demonstrate common
lidity diagonal value for this trait. By virtue of method variance.
the validation design, traits 19 and 21 may not
be as highly correlated in the validity diagonal Perhaps part of the problem is that subjects are
as the objective measures (traits 9, 10, 11, and confounded with methods in the design. The
14), since these are subjective measures from people who responded to the interview were a
very different sources of information. It is rea- different group than those who filled out the
sonable to expect that two people in the same pencil-and-paper questionnaire. Thus, there is
organization can give very similar objective an- likely a systematic effect because of "person,"
swers about security efforts (e.g., traits 9, 10, apart from the effect of method per se, that ac-
11, and 14); it is also reasonable to expect them counts for this pattern of findings.12 If a person
to vary on opinions about the effectiveness of overestimates the number of staff, it is likely he
these efforts. And the data unveils this pattern. or she will also overestimate total personnel
hours. Thus, one might expect a person's esti-
Even though the instrument, upon inspection, mate of staff to correlate more highly with their
does pass the first Campbell-Fiske desideratum own estimate of personnel hours than with some-
one else's estimate of staff. If this correlation
for convergent validity, there are several viola-
tions of discriminant validity. In the interview ho- is a true accounting of the real world, then
momethod block, the correlation r(14,11), for in- "person" is a source of shared method variance,
stance, is higher at .84 than the validity diagonal i.e., errors that are not statistically independent
of each other that are common within a method
value of .65. Likewise, in the pencil-and-paper
homomethod block, r(11,9) exceeds its corre- but not across two methods. Unfortunately, with
sponding validity diagonal value of .65. Interpre- the existing design, it is not possible to disen-
tations of aberrations such as these can be dif- tangle the effects due to people from the effects
ficult (Farh, et al., 1984; Marsh and Hocevar, due to the instrument per se.
1983). For one thing, it is well-known that high
Even though this design does fulfill the Campbell-
but spurious correlations will occur by chance
Fiske (1959) criterion for "maximally different
alone in large matrices such as the one in Figure
4 (231 elements). Yet, in spite of reasonable or
otherwise ingenious explanations that can be of-
fered for off-diagonal high correlations, it may 12 The effect here might have been mitigated if the
be more straightforward in MTMM analysis to validation sample had been randomly selected from
classify departures from the technical criteria the population of interest. Random selection of sub-
simply as violations. As long as violations do not jects or participants from the population as well as
random assignment to treatments (in the case of
completely overwhelm good fits, the instrument
experimental research) reduces the possibility that
can be said to have acceptable measurement systematic effects of "person" (as in "individual dif-
properties.
ferences") are present in the data. Random selec-
tion affects external validity whereas random assign-
An analysis of common method variance is the ment affects internal validity (Cook and Campbell,
last procedure in evaluating this matrix. Note that 1979).

MIS Quarterly/June 1989 159

This content downloaded from


136.233.110.66 on Sun, 24 Mar 2024 19:28:01 +00:00
All use subject to https://about.jstor.org/terms
Validating Research Instruments

methods," common method variance detracts Construct validity is assessed in a pilot in


from the "relative validity" (p. 84) that is gener- ment by establishing the factorial validity (
ally found throughout the matrix. In this study, and Yen, 1979) of constructs. This techni
if both methods had been administered to each which has been utilized in the UIS (e.g.,
respondent as one solution to evaluating et al., 1983) and IS job satisfaction arenas
common method variance, a new source of con- Ferratt and Short, 1986), has become pop
founding - a test-retest bias (Cook and as a result of the sophisticated statistical
Campbell, 1979) - would have been bilities of factor analysis. Factor analysis a
introduced. the researcher to answer questions about w
measures covary in explaining the highest
centage of the variance in a dataset. Th
Technical Validation of Reliability searcher can approach the question with p
pal components factor analysis or confirm
Essentially, reliability is a statement about the
factor
stability of individual measures across replica- analysis.13 Thus, factorial validity he
confirm that a certain set of measures do or do
tions from the same source of information (within
not reflect latent constructs.
subjects in this case). For example, a respon-
dent to the questionnaire who indicated that his
A principal components factor analysis of the
or her overall organization had sales of $5 bil-
same subset of variables illustrated in the MTMM
lion plus but only 500-749 employees would
analysis shows that measures of the computer
almost certainly have misinterpreted one or both
security deterrent construct (items 9, 10, 11, and
of the questions. If enough respondents were
14) all contribute heavily to or "load" on a single
inconsistent in their answers to these items, for
factor, as shown in Table 4. Once the first factor
this and other reasons, the items would contain
is extracted, the analytical technique attempts
abnormally high measurement error and hence, to find other factors or sets of variables that best
unreliable measures. Contrariwise, if individual
explain variance in the dataset. After 4 such ex-
measures are reHiable for the most part, the
tractions (with eigenvalues of at least 1.0), the
entire instrument can be said to have minimal
selected measures load at a .5 cutoff level; to-
measurement error. Findings based on a reli-
gether the loadings explain 97 percent of the
able instrument are better supported, and pa- variance in the dataset. In brief, results of this
rameter estimates are more efficient.
test support the view that measures of deter-
Coefficients of internal consistency and alterna- rence in the questionnaire are highly interrelated
tive forms were used to test the instrument. Both and do constitute a construct.

are explained in depth in Churchill (1979) and


Bailey (1978). Cronbach alphas reported in the
diagonal of the MTMM matrix pass the .80 rule-
Table 4. Loadings for Factor Analysis
of-thumb test used as a gauge for reliable of Pilot Instrument
measures.
Factor Factor Factor Factor
1 2 3 4
Item 9 .972
Phase Il1: Pilot test of reliability and
Item 10 .943
Item 11 .939
construct validity Item 14 .863
To further validate the instrument, a pilot survey
Item 2 .956
of randomly selected DPMA (Data Processing
Item 19 .910
Management Association) members wasItem
carried
21 .885
out in January of 1985. Judging fromItem
the 170
27 .847
returned questionnaires, the pilot test once again
confirmed that measurement problems inItem the in- 34 .961
strument were not seriously disabling. Item 29 .897

The instrument was first tested for reliability


using Cronbach alphas and Pearson correla-
tions. The variables presented in the MTMM
analysis passed the .80 rule-of-thumb test
13 with
One of the mos
coefficients of .938, .934, .982, and .941.factor analysis to

160 MIS Quarterly/June 1989

This content downloaded from


136.233.110.66 on Sun, 24 Mar 2024 19:28:01 +00:00
All use subject to https://about.jstor.org/terms
Validating Research Instruments

Besides their use in validation, pilot tests are nology-driven field such as MIS, windows of op-
also desirable because they provide a testing portunity for gathering data appear and disap-
ground or dry run for final administration of the pear so rapidly that researchers often feel they
instrument. In this case, for example, some items cannot afford the time required to validate their
were easily identified as problematic by virtue data collection instruments. Researchers engag-
of their lack of variance, or what are frequently ing in initial studies or exploratory studies such
called "floor" or "ceiling" effects. Items 8 and as case studies may feel that validated instru-
9 were originally scaled in such a way that an- ments are not critical. Experimentalists concen-
swers tended to crowd together at the high end trating on legitimate concerns of internal validity
of the scale, i.e., the scaling of these items did (Jarvenpaa, et al., 1984), moreover, may not re-
not allow for discrimination among large alize that their pre- and posttreatment data-
organizations. gathering instruments are, in fact, sine qua non
means of controlling extraneous sources of
Other problems that need to be resolved can variation.
also surface during pilot testing. A good exam-
ple in this validation was an item on "Annual Nevertheless, it is desirable for MIS as a field
cost of computer security insurance." Most re- - experimentalists and case researchers not ex-
spondents in the field interviews assumed that cepted (Campbell, 1975; Fromkin and Streufert,
even though their organization did not have spe- 1976) - to apply more rigorous scientific stan-
cific insurance coverage for violations of com- dards to data collection procedures and instru-
puter security, others did. And they also as- ments (Mitchell, 1985). As an initial step, a set
sumed that such costs could be estimated, of minimal guidelines may be considered as
Given the media attention to security insurancefollows:
matters in the last several years, these assump-
* Researchers should pretest and/or pilot test
tions are quite understandable. The pilot survey
data clearly showed that the question was badly instruments, attempting to assess as many va-
lidities as possible in this process.
misjudged for content; there was virtually no vari-
ance at all on this item. Over 95 percent of the
* MIS journal editors should encourage or re-
respondents left this item blank; some of thosequire researchers to prepare an "Instrument
that did respond indicated through notations inValidation" subsection of the Methodology sec-
the margins that they were not sure what the tion; this subsection can include, at the least,
question meant. reliability tests and factorial validity tests of the
The overall assessment of these validation tests final administered instrument.

was that the instrument had acceptable meas- As instrumentation issues become more inter-
urement properties. It meant, in essence, that nalized in the MIS research process, more strin-
much greater confidence could be placed in later gent standards can be adopted as follows:
use of the instruments.
* Researchers should use previously validated
instruments wherever possible, being careful
Phase IV: Full scale victimization not to make significant alterations in the vali-
survey dated instrument without revalidating instru-
ment content, constructs, and reliability.
The full scale implementation of the survey in-
strument resulted in 1,211 responses and 259 * Researchers should undertake technical vali-
separate incidents of computer abuse. As a dation whenever it is critical to ground central
result of the work undertaken prior to this phase constructs in major MIS research streams.
of the research, it was felt thate stronger infer-
ences could be made about causal relationships
among variables and about theory.
Validating Instruments for
Use in MIS Research
Guidelines for Validating Streams
Instruments in MIS Research There are numerous research streams in MIS
Clearly, numerous contingencies affect the col- that would gain considerable credibility from
lection and evaluation of research data. In a tech- more carefully articulated constructs and meas-

MIS Quarterly/June 1989 161

This content downloaded from


136.233.110.66 on Sun, 24 Mar 2024 19:28:01 +00:00
All use subject to https://about.jstor.org/terms
Validating Research Instruments

ures. System success, as probably the central spite of growing awareness within the MIS field
performance variable in MIS, is a prime candi- that methodologies need to be more rigorous,
date. It is the effect sought in large scale trans? most of the empirical studies continue to use
actional processing systems (Ives and Olson, largely unvalidated instruments. Even though
1984), decision support systems (Poole and De- MIS researchers frequently adopt instruments
Sanctis, 1987), and user-developed applications that have been used in previous studies, a
(Rivard and Huff, 1988). Although instruments methodological approach that can significantly
measuring certain dependent variables such as undergird the research effort, these advantages
UIS have been subjected to validation (e.g., Ives, are lost in most instances either because the
et al., 1983), there have been few, if any, vali- adapted instrument itself has not been validated
dations of instruments measuring other compo- or because the researcher has made major al-
nents of system success (such as system ac- terations in the validated instrument without re-
ceptance or systems usage), or independent testing it.
variables (such as user involvement) (Ives and
Olson, 1984). Varying components of user in-
It is important for MIS researchers to recognize
volvement have been tested, but no validation
that valid statistical conclusions by no means
of the construct of user involvement has yet
ensure that a causal relationship between vari-
been undertaken. Recent studies (Baronas and
ables exists. It is also important to realize that,
Louis, 1988; Tate and Vessey, 1988; Yaver-
in spite of the need to warranty internal validity,
baum, 1988) have employed altered forms of this validation does not test whether the research
previously validated instruments (i.e., the
instrument is measuring what the researcher in-
Baroudi, Ives, and Olson instrument and Job Di-
tended to measure. Measurement problems in
agnostic Survey), but the only tests of the user
MIS can only be resolved through instrument
involvement construct have been reliability validation.
statistics.

Basic macro-level constructs in the field, con-


structs like "information" and "information value," This article argues that instrument validation at
are still in need of validation and further refine- any level can be of considerable help to MIS
researchers in substantiating their findings. Spe-
ment. Epstein and King (1982), Larcker and
cific guidelines for improvements include pre and
Lessig (1980), O'Reilly (1982), Swanson (1974;
pilot testing, formal validation procedures, and,
1986; 1987), and Zmud (1978) all deal with im-
finally, close imitation of previously validated in-
portant questions surrounding data, information,
struments. As demonstrated in the case of the
information attributes, and information value. But
Computer Abuse Measurement Project, instru-
until Goodhue (1988), no technical validation
ment pretests can be useful in qualitatively es-
effort (including MTMM analysis) had been un-
tablishing the reliability, construct validity, and
dertaken to clarify this stream of research.
content validity of measures. Formal validation,
There are, in fact, whole streams of research either MTMM analysis or factor analysis, offers
in the field where primary research instruments statistical evidence that the instrument itself is
remain unverified. Research programs explor- not seriously interfering with the gathering of ac-
ing the value of software engineering techniques curate data. Pilot tests can permit testing of reli-
(Vessey and Weber, 1984), life cycle enhance- ability and construct validity, identify and help
ments such as prototyping (Jenkins and Byrer, correct scaling problems, and serve as dry runs
1987), and factors affecting successful end- for final administration of the instrument.
user computing (Brancheau, 1987) are all cases
in point. MIS research, moreover, suffers from
measurement problems in exploring variable re- Improving empirical MIS research is a two part
lationships among variables such as information process. First, we must recognize that we have
quality, secure systems, user-friendly systems, methodological problems that need to be ad-
MIS sophistication, and decision quality. dressed, and second, and even more important,
we must take action by incorporating instrument
validation into our research efforts. Serious at-
tention to the issues of validation can move the
Conclusion
field toward meaningfully replicated studies and
Instrument validation is a prior and primary proc-
refined concepts and away from intractable con-
ess in confirmatory empirical research. Yet, in
structs and flawed measures.

162 MIS Quarterly/June 1989

This content downloaded from


136.233.110.66 on Sun, 24 Mar 2024 19:28:01 +00:00
All use subject to https://about.jstor.org/terms
Validating Research Instruments

References terly (11:1), March 1987, pp. 23-45.


Campbell, D.T. "Recommendations for APA Test
ABA. "Report on Computer Crime," pamphlet, Standards Regarding Construct, Trait, and Dis-
American Bar Association, Task Force on Com- criminant Validity," American Psychologist
puter Crime, Section on Criminal Justice, 1800 (15), August 1960, pp. 546-553.
M Street, Washington, D.C. 20036, 1984. Campbell, D.T. "Degrees of Freedom and the
AICPA. "Report on the Study of EDP-Related Case Study," Comparative Political Studies
Fraud in the Banking and Insurance Indus- (8:2), July 1975, pp. 178-191.
tries," pamphlet, American Institute of Certi- Campbell, D.T. and Fiske, D.W. "Convergent
fied Public Accountants, Inc., 1211 Avenue of and Discriminant Validation by the Multitrait-
the Americas, New York, NY, 1984. Multimethod Matrix," Psychological Bulletin
Allen, M.J. and Yen, W.M. Introduction to Meas- (56), March 1959, pp. 81-105.
urement Theory, Brooks-Cole, Monterey, CA, Canning, R. "Information Security and Privacy,"
1979.
EDP Analyzer (24:2), February 1986, pp. 1-
Bagozzi, R.P. "The Role of Measurement in 16.
Theory Construction and Hypothesis Testing: Churchill, G.A., Jr. "A Paradigm for Developing
Toward a Holistic Model," in Conceptual and Better Measures of Marketing Constructs," Jour-
Theoretical Developments in Marketing, Amenri- nal of Marketing Research (16), February
can Marketing Association, Chicago, IL, 1979. 1979, pp. 64-73.
Bagozzi, R.P. Causal Modelling in Marketing, Cohen, J. Statistical Power Analysis for the Be-
Wiley & Sons, New York, NY, 1980. havioral Sciences, Academic Press, New
Bailey, K.D. Methods of Social Research, The York, NY, 1969.
Free Press, New York, NY, 1978. Cohen, J. Statistical Power Analysis for the Be-
Bailey, J.E. and Pearson, S.W. "Development havioral Sciences, Revised Edition, Academic
of a Tool for Measuring and Analysing Com- Press, New York, NY, 1977.
puter User Satisfaction," Management Sci- Colton, K.W., Tien, J.M., Tvedt, S., Dunn, B. and
ence (29:5), May 1983, pp. 530-545. Barnett, A.I. "Electronic Funds Transfer Sys-
Baronas, A.K. and Louis, M.R. "Restoring a tems and Crime," interim report in ongoing
Sense of Control During Implementation: How study on "The Nature and Extent of Criminal
User Involvement Leads to System Accep- Activity in Electronic Funds Transfer and Elec-
tance," MIS Quarterly (12:1), March 1988, pp. tronic Mail Systems," supported by Grant 80-
111-126.
BJ-CX-0026, U.S. Bureau of Justice Statistics,
Baroudi, J.J. and Orlikowski, W.J. "The Prob- 1982. Referenced by special permission.
lem of Statistical Power in MIS Research," Cook, P.J. "Research in Criminal Deterrence:
MIS Quarterly (13:1), March 1989, pp. 87- Laying the Groundwork for the Second
106.
Decade," in Crime and Justice: An Annual
Benbasat, I., Goldstein, D.K. and Mead, M. "The Review of Research (2), University of Chicago
Case Research Strategy in Studies of Infor- Press, Chicago, IL, 1982, pp. 211-268.
mation Systems," MIS Quarterly (11:3), Sep- Cook, T.D. and Campbell, D.T. Quasi-Experimen-
tember 1987, pp. 369-388. tation: Design and Analytical Issues for Field
Blalock, H.M., Jr. Theory Construction: From Settings, Rand McNally, Chicago, IL, 1979.
Verbal to Mathematical Formulations, Prentice- Coombs, C.H. A Theory of Data, Wiley, New
Hall, Englewood Cliffs, NJ, 1969. York, NY, 1964.
Blumstein, A., Cohen, J. and Nagin, D. "Intro- Cronbach, L.J. "Coefficient Alpha and the Inter-
duction," in Deterrence and Incapacitation: Es- nal Consistency of Tests," Psychometrika
timating the Effects of Criminal Sanctions on (16), September 1951, pp. 297-334.
Crime Rates, A. Blumstein, J. Cohen and D. Cronbach, L.J. "Test Validation," in Educational
Nagin (eds.), National Academy of Sciences, Measurement, 2nd Edition, R.L. Thorndike
Washington, D.C., 1978. (ed.), American Council on Education, Wash-
Brancheau, J.C. The Diffusion of Information ington, D.C., 1971, pp. 443-507.
Technology: Testing and Extending Innova- Cronbach, L.J. and Meehl, P.E. "Construct Va-
tion Diffusion Theory in the Context of End- lidity in Psychological Tests," Psychological
User Computing, unpublished Ph.D. disserta- Bulletin (52), 1955, pp. 281-302.
tion, University of Minnesota, 1987. Dickson, G.W., Benbasat, I. and King, W.R. "The
Brancheau, J.C. and Wetherbe, J.C. "Key Issues Management Information Systems Area: Prob-
in Information Systems - 1986," MIS Quar- lems, Challenges, and Opportunities," Proceed-

MIS Quarterly/June 1989 163

This content downloaded from


136.233.110.66 on Sun, 24 Mar 2024 19:28:01 +00:00
All use subject to https://about.jstor.org/terms
Validating Research Instruments

ings of the First International Conference on urement of User Information Satisfaction," Com-
Information Systems," Philadelphia, PA, De- munications of the ACM (26:10), October
cember 8-10, 1980, pp. 1-8. 1983, pp. 785-793.
Dickson, G.W., Leitheiser, R.L., Wetherbe, J.C. Jarvenpaa, S.L., Dickson, G.W. and DeSanctis,
and Nechis, M. "Key Information Systems G.L. "Methodological Issues in Experimental
Issues for the 80's," MIS Quarterly (8:3), Sep- IS Research: Experiences and Recommenda-
tember 1984, pp. 135-159. tions," Proceedings of the Fifth International
Epstein, B.J. and King, W.R. "An Experimental Information Systems Conference, Tucson, AZ,
Study of the Value of Information," Omega November 1984, pp. 1-30.
(10:3), 1982, pp. 249-258. Jenkins, A.M. MIS Design Variables and Deci-
sion Making Performance, UMI Research
Farh, J., Hoffman, R.C. and Hegarty, W.H. "As- Press, Ann Arbor, Ml, 1983.
sessing Environmental Scanning at the Sub- Jenkins, A.M. "Research Methodologies and
unit Level: A Multitrait-Multimethod Analysis," MIS Research," in Research Methods in In-
Decision Sciences (15), 1984, pp. 197-220. formation Systems, E. Mumford (ed.), Elsevier
Farhoomand, A. "Scientific Progress of Manage- Science Publishers B.V. (North-Holland), Am-
ment Information Systems," Data Base, sterdam, 1985, pp. 315-320.
Summer 1987, pp. 48-56. Jenkins, A.M. and Byrer, J. "An Annotated Bibli-
Ferratt, T.W. and Short, L.E. "Are Information ography on Prototyping," Working Paper W-
Systems People Different: An Investigation of 706, IRMIS (Institute for Research on the Man-
Motivational Differences," MIS Quarterly agement of Information Systems), Indiana Uni-
(10:4), December 1986, pp. 377-388. versity Graduate School of Business, Blooming-
Fromkin, H.L. and Streufert, S. "Laboratory Ex- ton, IN, 1987.
perimentation," in Handbook of Industrial and Kerlinger, F.N. Foundations of Behavioral Re-
Organizational Psychology, Marvin D. Dun- search, Holt, Rinehart, and Winston, New
nette (ed.), Rand McNally, Chicago, IL, 1976. York, NY, 1964.
Gibbs, J. Crime, Punishment, and Deterrence, King, W.A. "Strategic Planning for Information
Elsevier, New York, NY, 1975. Systems," research presentation, Indiana Uni-
Glaser, B.G. and Strauss, A.L The Discovery versity School of Business, March 1985.
of Grounded Theory: Strategies for Qualita- Kling, R. "Computer Abuse and Computer Crime
tive Research, Aldine, New York, NY, 1967. as Organizational Activities," Computer Law
Goodhue, D.L. Supporting Users of Corporate Journal (2:2), 1980, pp. 186-196.
Data: The Effect of I/S Policy Choices, un- Kraemer, H.C. and Thiemann, S. How Many Sub-
published Ph.D. dissertation, Massachusetts jects? Statistical Power Analysis in Research,
Institute of Technology, 1988. Sage, Newbury Park, CA, 1987.
Hair, J.F., Jr., Anderson, R.E., Tatham, R.L. and Kusserow, R.P. "Computer-Related Fraud and
Grablowsky, B.J. Multivariate Data Analysis, Abuse in Government Agencies," pamphlet,
PPC Books, Tulsa, OK, 1979. U.S. Dept. of Health and Human Services,
Hamilton, J.S. and Ives, B. "MIS Research Strate- Washington, D.C., 1983.
gies," Information and Management (5:6), De- Larcker, D.F. and Lessig, V.P. "Perceived Use-
cember 1982a, pp. 339-347. fulness of Information: A Psychometric Exami-
Hamilton, J.S. and Ives, B. "Knowledge Utiliza- nation," Decision Sciences (11:1), 1980, pp.
tion among MIS Researchers," MIS Quarterly 121-134.
(6:4), December 1982b, pp. 61-77. Lincoln, Y.S. and Gupta, E.G. Naturalistic In-
Hanushek, E.A. and Jackson, J.E. Statistical quiry, Sage, Beverly Hills, CA, 1985.
Methods for Social Scientists, Academic Lindman, H.R. ANOVA in Complex Experimen-
Press, Orlando, FL, 1977. tal Designs, W.H. Freeman, San Francisco,
Hunter J.E., Schmidt, F.L. and Jackson, G.B. CA, 1974.
Meta-Analysis: Cumulating Research Find- Long, J.S. Confirmatory Factory Analysis, Sage,
ings Across Studies, Sage, Beverly Hills, CA, Beverly Hills, CA, 1983.
1983. Mackenzie, K.D. and House, R. "Paradigm De-
Ives, B. and Olson, M.H. "User Involvement and velopment in the Social Sciences," in Re-
MIS Success: A Review of Research," Man- search in Organizations: Issues and Contro-
agement Science (30:5), May 1984, pp. 586- versies, R.T. Mowday and R.M. Steers (eds.),
603. Goodyear Publishing, Santa Monica, CA,
Ives, B., Olson, M.H. and Baroudi, J.J. "The Meas- 1979, pp. 22-38.

164 MIS Quarterly/June 1989

This content downloaded from


136.233.110.66 on Sun, 24 Mar 2024 19:28:01 +00:00
All use subject to https://about.jstor.org/terms
Validating Research Instruments

Marsh, H.W. and Hocevar, D. "Confirmatory Ricketts, J.A. and Jenkins, A.M. "The Develop-
Factor Analysis of Multitrait-Multimethod Ma- ment of an MIS Satisfaction Questionnaire: An
trices," Journal of Educational Measurement Instrument for Evaluating User Satisfaction
(20:3), Fall 1983, pp. 231-248. with Turnkey Decision Support Systems," Dis-
McFarlan, F.W. (ed.). The Information Systems cussion Paper No. 296, Indiana University
Research Challenge, Harvard Business School of Business, Bloomington, IN, 1985.
School Press, Boston, MA, 1984. Rivard, S. and Huff, S.L. "Factors of Success
McFarlan, F.W. "Editor's Comment," MIS Quar- for End-User Computing," Communications of
terly (10:1), March 1986, pp. i-ii. the ACM (31:5), May 1988, pp. 552-561.
Sharma, S., Durand, R.M. and Gur-Arie, O. "Iden-
McGrath, J. "Toward a 'Theory of Method' for
tification and Analysis of Moderator Variables,"
Research on Organizations," in. Research in
Organizations: Issues and Controversies, R.To Journal of Marketing Research (18), August
1981, pp. 291-300.
Mowday and R.M. Steers (eds.), Goodyear
Skogan, W.G. Issues in the Measurement of Vic-
Publishing, Santa Monica, CA, 1979, pp. 4-
21. timization, NCJ-74682, U.S. Department of Jus-
tice, Bureau of Justice Statistics, Washington,
Mitchell, T.R. "An Evaluation of the Validity of D.C., 1981.
Correlational Research Conducted in Organi-
Sprague, R.H., Jr. and McNurlin, B.C. (eds.), In-
zations," Academy of Management Review
formation Systems Management in Practice,
(10:2), 1985, pp. 192-205.
Prentice-Hall, Englewood Cliffs, NJ, 1986.
Nagin, D. "General Deterrence: A Review of the Straub, D.W. and Widom, C.S. "Deviancy by Bits
Empirical Evidence," in Deterrence and Inca- and Bytes: Computer Abusers and Control
pacitation: Estimating the Effects of Criminal Measures," in Computer Security: A Global
Sanctions on Crime Rates, A. Blumstein, J. Challenge, J.H. Finch and E.G. Dougall (eds.),
Cohen, and D. Nagin (eds.), National Acad- Elsevier Science Publishers B.V. (North-
emy of Sciences, Washington, D.C., 1978. Holland) and IFIP, Amsterdam, 1984, pp. 91-
Nunnally, J.C. Psychometric Theory, McGraw- 102.
Hill, New York, NY, 1967. Swanson, E.B. "Management Information Sys-
O'Reilly, C.A., III. "Variations in Decision Makers' tems: Appreciation and Involvement," Manage-
Use of Information Sources: The Impact of ment Science (21:2), October 1974, pp. 178-
Quality and Accessibility of Information," Acad- 188.
emy of Management Journal (25:4), 1982, pp. Swanson, E.B. "A Note on Information Attrib-
756-771. utes," Joural of MIS (2:3), 1986, pp. 87-91.
Parker, D.B. Computer Security Management, Swanson, E.B. "Information Channel Disposition
Reston Publications, Reston, VA, 1981. and Use," Decision Sciences (18:1), 1987, pp.
Poole, M.S. and DeSanctis, G.L. "Group Deci- 131-145.
sion Making and Group Decision Support Sys- Tait, Po and Vessey, I. "The Effect of User In-
tems: A 3-year Plan for the GDSS Research volvement on System Success," MIS Quar-
Project," working paper, MISRC-WP-88-02 terly (12:1), March 1988, pp. 91-110.
Management Information Systems Research Vessey, I. and Weber, R. "Research on Struc-
Center, University of Minnesota, Minneapolis, tured Programming: An Empiricist's Evalu-
MN, 1987. ation," IEEE Transactions on Software Engi-
Popper, K. The Logic of Scientific Discovery, neering (SE-10:4), July 1984, pp. 397-407.
Harper Torchbooks, New York, NY, 1968. Yaverbaum, G.J. "Critical Factors in the User
(First German Edition, Logik der Forschung, Environment: An Experimental Study of
1935.) Users," MIS Quarterly (12:1), March 1988, pp.
Reichardt, C. "Statistical Analysis of Data from 75-90.
Nonequivalent Group Designs," in Quasi- Zmud, R.W. "An Empirical Investigation of the
Experimentation, D. Cook and D.T. Campbell Dimensionality of the Concept of Information,"
(eds.), Houghton-Mifflin, New York, NY, 1979. Decision Sciences (9:2), 1978, pp. 187-195.

MIS Quarterly/June 1989 165

This content downloaded from


136.233.110.66 on Sun, 24 Mar 2024 19:28:01 +00:00
All use subject to https://about.jstor.org/terms
Validating Research Instruments

About the Author ies in the computer security management arena,


but his research interests also extend into infor-
Detmar W. Straub is assistant professor of man- mation technology forecasting, measurement of
agement information systems at the University key IS concepts, and innovation and diffusion
of Minnesota where he teaches courses in sys- theory testing. His professional associations and
tems and pursues research at the Curtis Lo responsibilities include: associate director, MIS
Carlson School of Management. He joined the Research Center, University of Minnesota; as-
Minnesota faculty in September, 1986 after com- sociate publisher of the MIS Quarterly; editorial
pleting a dissertation at Indiana University. Pro- board memberships; and consulting with the de-
fessor Straub has published a number of stud- fense and transportation industries.

166 MIS Quarterly/June 1989

This content downloaded from


136.233.110.66 on Sun, 24 Mar 2024 19:28:01 +00:00
All use subject to https://about.jstor.org/terms
Validating Research Instruments

APPENDIX
Section I.
Computer Abuse Questionnaire

3. NUMBER OF TOTAL YEARS EXPERIENCE IN/WITH


Personal Information INFORMATION SYSTEMS?

O More than 14 years


1. YOUR POSITION: O 11 to 14 years
O 7 to 10 years
O President/Owner/Director/Chairman/Partner O 3 to 6 years
O Vice President/General Manager O Less than 3 years
D Vice President of EDP O Not sure
O Director/Manager/Head/Chief of EDP/MIS
O Director/Manager of Programming Organizational Information
O Director/Manager of Systems & Procedures
O Director/Manager of Communications 4. Approximate ASSETS and annual REVENUES of your
O Director/Manager of EDP Operations organization:
O Director/Manager of Data Administration
ASSETS REVENUES
O Director/Manager of Personal Computers At all At this At all At this
O Director/Manager of Information Center Locations Location Locations Location
O Data Administrator or Data Base Administrator
O Data/Computer Security Officer O ..... ..Over 5 Billion....... O D
O Senior Systems Analyst O ...... 1 Billion-5 Billion ..... o a
O .....250 Million-1 Billion.... l o
[ Systems/Information Analyst O O ... 100 Million-250 Million. l o
O Chief/Lead/Senior Applications Programmer O O ... 50 Million-100 Million l o
o Applications Programmer O O .... 10 Million-50 Million .... l E
O Chief/Lead/Senior Systems Programmer O O ..... 5 Million-10 Million .... o o
O Systems Programmer O O ..... 2 Million-5 Million ..... o o
O Chief/Lead/Senior Operator O ..... 1 Million-2 Million ..... o E
O Machine or Computer Operator O ...... Under 1 Million ...... l o
O E .........Not sure ......... E o
O Vice President of Finance
O Controller
5. NUMBER OF EMPLOYEES of
D Director/Manager Internal Auditing or EDP Auditing
At all At this
O Director/Manager of Plant/Building Security Locations Location
0 EDPAuditor
O Internal Auditor 10,000 or more .... .............. . D O
O Consultant 5,000-9,999 ....................... 0 0
2,500-4,999 ....................... O
O Educator
1,000-2,499....................... O
O User of EDP 750-999 .......................... O
O Other (please specify): 500-749 .......................... El O
250-499.......................... E O
100-249.......................... lO
6-99 ...................... . D
Fewer than 6 .............. . O a
2. YOUR IMMEDIATE SUPERVISOR'S POSITION: Not sure ......................... E D

E President/Owner/Director/Chairman/Partner
O Vice President/General Manager 6. PRIMARY END PRODUCT OR SERVIC
E Vice President of EDP this location:
O Director/Manager/Head/Chief of EDP/MIS E Manufacturing and Processing
E Director/Manager of Programming E Chemical or Pharmaceutical
E Director/Manager of Systems & Procedures E Government: Federal, State, Municipal including Military
O Director/Manager of Communications E Educational: Colleges, Universities, and other
E Director/Manager of EDP Operations. Educational Institutions
E Director/Manager of Data Administration D Computer and Data Processing Services including
E Director/Manager of Personal Computers Software Services, Service Bureaus, Time-Sharing
E Director/Manager of Information Center and Consultants
E Data/Computer Security Officer E Finance: Banking, Insurance, Real Estate, Securities,
E Senior Systems Analyst and Credit
E Chief/Lead/Senior Applications Programmer E Trade: Wholesale and Retail
E Chief/Lead/Senior Systems Programmer O Medical and Legal Services
E Chief/Lead/Senior Machine or Computer Operator E Petroleum
E Transportation Services: Land, Sea, and Air
E Vice President of Finance E Utilities: Communications, Electric, Gas, and
E Controller
Sanitary Services
O Director/Manager Internal Auditing or EDP Auditing E Construction, Mining, and Agriculture
O Director/Manager of Plant/Building Security O Other (please specify):
E Other (please specify): Are you located at Corporate Headquarters: Yes ] No E

MIS Quarterly/June 1989 167

This content downloaded from


136.233.110.66 on Sun, 24 Mar 2024 19:28:01 +00:00
All use subject to https://about.jstor.org/terms
Validating Research Instruments

7. CITY (at this location)? __ STATE? 17. Other than those security software systems you listed in
question 1E, how many SPECIALIZED SECURITY SOFTWARE
8. TOTAL NUMBER OF EDP (Electronic Data Processing)
SYSTEMS are actively in use? (Examples: ACFII, RACF)
EMPLOYEES at this location (excluding
data input personnel):
(number of specialized security software systems actively in use)
O More than 300 D 50-99
O 250-300 10-49 Of these, how many were purchased from a vendor?
O 200-249 O Fewer than 10 (number purchased from a vendor)
O 150-199 O Not sure
... and how many were developed in-house?
n 100-149 (number developed in-house)

9. Approximate EDP BUDGET per year of your organization 18. Throughwhat INFORMATIONALSOURCESarecom
at this location: users made aware OF THE APPROPRIATE AND INAPPRO-
O Over $20 Million O $2-$4 Million PRIATE USES OF THE COMPUTER SYSTEM?
D $10-$20 Million C $1-$2 Million
O $8-$10 Million O Under $1 Million (Choose as manyasapplicable)
O $6-$8 Million O Notsurer $1 M Distributed EDP Guidelines
O $4-$6 Million Nt s Administrative program to classify information by sensitivity
O Periodic departmental memos and notes
Computer Security Internal Audit O Distributed statements of professional ethics
Computer Security, Internal Audit, O Computer Security Violations Reports
and Abuse Incident Information E Organizational meetings
EO Computer Security Awareness Training sessions
d
A Computer Security function in an organization is any pur- Informal discussions
poseful activity that has the objective of protecting assets such O Other (please specify):
as naraware, programs, data, ana computer service Trom loss or
19. Which types of DISCIPLINARY ACTION do th
misuse. Examples of personnel engaged in computer security
mational sources mention (question 18) as c
functions include: data security and systems assurance officers.
of purposeful computer abuse?
For this questionnaire, computer security and EDP audit func-
tions will be considered separately. (Choose as many as applicable)
Computer EDP O Reprimand
Security Audit O Probation or suspension
10. How many staff members are E Firing
working 20 hours per week or more O Criminal prosecution
in these functions at this location? __(number - (number O Civil prosecution
of persons) of persons)
O Other (please specify):
11. How many staff members are
working 19 hours per week or less in In questions 20-24, please indicate your reactions to the fol-
these functions at this location? _ (number (number lowing statements: .trnlu .. t o
of persons) of persons) btroingly iNvt olUisly
Agree Agree Sure Disagree Disagree
12 What are the total personnel hours 20. The current computer security
per week dedicated to these effort was in reaction in large
part to actual or suspected past
functions? __(total __ (total
incidents of computer abuse at
hours/wk) hours/wk) this location. O E a o E
13. When were these functions 21. The activities of computer
initiated? security administrators are well
(month/yr) (month/yr) known to users at this location. O 0D o D

22. The presence and activities of


computer security administra-
If your answer to the Computer Security part of question 12 was
tors deter anyone who might
zero, please go directly to question 25. Otherwise, continue. abuse the computer system at
this location. o D D O D

14. Of these total computer security personnel hours per 23. Relative
week to our type of industry
computer security is very
(question 12), how many are dedicated to each of the effective at this location. oD D o 0
following? 24. The overall security philosophy
A. Physical security administration, disaster at this location is to provide
recovery, and contingency planning .... (hours/week) very tight security without
hindering productivity. o 0 0 D0
B. Data security administration .......... (hours/week)
C. User and coordinator training ......... (hours/week) 25. How many SE
D. Other .......................... (hours/week) INCIDENTS OF C
(please specify): this location expe
Jan. 1, 1986?
15. EXPENDITURES per year for computer security at this (number of incidents)
location:
(Please fill out a separate "Computer Abuse Incident Report"
Annual computer security personnel salaries ..... $ -
[Blue-colored Section II] for each incident.)
Do you have insurance (separate policy or rider)
specifically for computer security losses? 26. How many incidents do you have reason to suspect other
O Yes O No O Not sure
than those numbered above in this same 3 year period, Jan.
If yes, what is the annual cost of such insurance ... $. 1, 1983-Jan. 1, 1986?
_ (number of suspected incidents)
16. SECURITY SOFTWARE SYSTEMS available and actively in use
on the mainframe(s) [or minicomputer(s)] at this location:27. Please briefly describe the basis (bases) for these
Number of Number of suspicions.
available systems
systems? in use?
Operating system access control facilities...
DBMS security access control facilities .....
Fourth Generation software access
control facilities ....... .. .........

168 MIS Quarterly/June 1989

This content downloaded from


136.233.110.66 on Sun, 24 Mar 2024 19:28:01 +00:00
All use subject to https://about.jstor.org/terms
[]
Validating Research Instruments

Section II.
Computer Abuse Incident Report
(covering the 3 year period, Jan. 1, 1983-Jan. 1, 1986)

Instructions: Please fill out a separate report for each incident of computer abuse that has occurred in the 3 year period,
Jan. 1, 1983-Jan. 1, 1986
28. WHEN WAS THIS INCIDENT DISCOVERED? 36.Iftheincdenthadbengoingonforaperiod ftime
howlongwastha?
Month/year . /_ - years -months
29. HOW MANY PEOPLE WERE INVOLVED in committing the
computer abuse in this incident? 37. In your judgment, how serious a breach of s
this incident?
(number of perpetrators) (Choose one only)
30. POSITION(S) OF OFFENDER(S): Main O Extremely serious
Main Second O Serious
Offender Offender O Of minimal importance
Top executive . ...... ......... . 0 0 O Not sure
Security officer ..................... 0 0
O Of negligible importance
Auditor . ..... ..... ...... ....... 0 0
Controller ........ .......... . 0 0
38. Estimated $ LOSS through LOST OPPORTUNITIES (if
Manager, supervisor ................. 0 0
measurable): (Example: $3,000 in lost business
Systems Programmer ............... 0 0
because of data corruption)
Data entry staff .................. . 0
Applications Programmer . ....... .... 0 $
Systems analyst ..... .............. 0 (estimated $ loss through lost opportunities)
Machine or computer operator ......... C 0
Other EDP staff . ................... 0 0 39. Estimated $ LOSS through THEFT and/or RECOVERY
Accountant ........ .... o.... .. 0 0 COSTS from abuse: (Example: $12,000 electronically
Clerical personnel................... 0 0 embezzled plus $1,000 in salary to recover from
Student .......................... 0 0
data corruption + $2,000 in legal fees = $15,000)
Consultant .................. ..... 0 0
$
Not sure ... ,..................... 0 0
(estimated $ loss through theft and/or recovery costs)
Other ............. ...... ...... 0

(please specify): (Main) 40. This incident was discovered...

(Second)- (Choose as many as applicable)


O by accident by a system user
31. STATUS(ES)OFOFFENDER(S) O by accident by a systems staff member or an
internal/EDP auditor
when incident occurred: Main Second
OffenderOffender
O through a computer security investigation other
than an audit
Employee ......................... 0
O by an internal/EDPaudit
Ex-employee ....................... 0
O through normal systems controls, like software or
Non-employee ........ .......... 0 0
Not sure ......................... 0
procedural controls
0
O by an external audit
Other ............... .. ....... 0 0
O not sure
(please specify): (Main) O other (please specify):
(Second).

32. MOTIVATION(S) OF OFFENDER(S): Main Second


Second 41. This incident was reported to....
Offender Offender (Choose as many as applicable)
Ignorance of proper professional conduct... 0 0 O someone inside the local organization
Personal gain ...................... 0 0 O someone outside the local organization
Misguided playfulness . .............. 0 0 O not sure
Maliciousness or revenge ............. 0 0
Not sure .......................... 42. If this incident was reported to someone outside the
Other ............ ........... 0 local organization, who was that?
(please specify): (Main) (Choose as many as applicable)
O someone at divisional or corporate headquarters
(Second). O the media
O the police
33. MAJOR ASSET AFFECTED or involved: O other authorities
(Choose as many as applicable) O not sure
O Unauthorized use of computer service
O Disruption of computer service 43. Please briefly describe the incident and what finally
O Data
0 Hardware happened to the perpetrator(s).
O Programs

34. Was this a one-time incident or had it been going on for a


period of time?
(Choose one only)
O one-time event
O going on for a period of time
O not sure

35. If a one-time incident, WHEN DID IT OCCUR?


Month__ Year

MIS Quarterly/June 198

This content downloaded from


136.233.110.66 on Sun, 24 Mar 2024 19:28:01 +00:00
All use subject to https://about.jstor.org/terms

You might also like