
Evaluating Management Information Systems

Author(s): William R. King and Jaime I. Rodriguez


Source: MIS Quarterly, Vol. 2, No. 3 (Sep., 1978), pp. 43-51
Published by: Management Information Systems Research Center, University of Minnesota
Stable URL: http://www.jstor.org/stable/249177
Accessed: 01-10-2015 03:31 UTC



Evaluating Management Information Systems

By: William R. King and Jaime I. Rodriguez

There is no dearth of commentary and literature on the evaluation of information systems. Both the literature of the MIS field and the day-to-day pronouncements of managers are replete with evaluations, many of which are negative, of systems that have been developed to aid managers in performing their jobs.
There is, however, a real dearth of scientific literature involving the systematic evaluation of information systems. Most evaluations of information systems are provided only in efficiency-oriented terms on a post hoc basis by system users. Such evaluations, pro or con, suffer from a variety of deficiencies: the lack of prior assessments that would permit "pre-versus-post" system comparisons; the possibility that the systems are being judged in the light of expectations and objectives that go beyond those that might reasonably have been established prior to their development; and an emphasis on efficiency-oriented rather than effectiveness-oriented assessments.
In this article, we seek to develop a conceptual
process through which information systems may
be evaluated on a systematic basis. We then
demonstrate the potential feasibility of this
process by applying it to the evaluation of an
innovative information system.

An MIS Evaluation Model


Figure 1 represents a theoretical MIS evaluation process model which shows assessments being made prior to the design of an MIS, during the various phases of development, and subsequent to system implementation. These assessments are made in terms of attitudes, value perceptions, information usage, and decision performance.

Abstract

Management information systems are not generally evaluated in a systematic fashion. This article provides an MIS evaluation framework and describes its application to a strategic planning information system.

Keywords: System evaluation
Categories: 4.6

The simple MIS development process which is represented in the horizontal flow of Figure 1 is meant to be illustrative only. There are many more detailed specifications of the phases which may be followed in designing, developing, and implementing an MIS [4]. The process shown in Figure 1 is meant only to show that assessments can be made prior to the commencement of the MIS effort, at various intervals during the phases as prescribed, and subsequent to the implementation of the system, possibly in a post audit phase.


[Figure 1. MIS Evaluation - Conceptual Model: assessments of attitudes, value perceptions, information usage, and decision performance, made before, during, and after the phases of MIS design, development, and implementation.]


Assessment in the evaluation process

The overall MIS evaluation process in Figure 1 involves assessments which fall into four general categories: attitudes, value perceptions, information usage, and decision performance.
Attitudes and Value Perceptions - Attitudes and value perceptions are an important and often-neglected aspect of MIS evaluation. While most informal, post-implementation assessments that are made of information systems are clearly attitudinal in nature, most systems are developed without formal assessments of the attitudes of the users and the organization's managers.

Value perceptions are pragmatically distinguished from attitudes in this article as being more direct assessments related to the specific MIS. For instance, an answer to a question such as "How good is the system?" is a value perception. An attitude is a more basic entity dealing with an individual's intrinsic beliefs and outlooks on the world [11].
Decision Performance - Decision performance assessments reflect the quality of the decision-making process that is supported by the MIS. Since most authors agree that a major objective of a true MIS - as opposed to a data processing system - is decision support [2, 5], it is not unreasonable to expect that an MIS will have demonstrable impact on the quality of decision-making.

This is such an important attribute of MIS performance evaluation that a new term, "management decision support system" [9], has been created to clearly distinguish systems that have the decision support objective from those that do not. However, few systems are justified in decision support terms, emphasizing increased effectiveness rather than efficiency, and few are formally evaluated in terms of the degree to which they enhance decision performance.
Information Usage - An area of system value which is related to decision performance is that of information acquisition and usage. Even if a system does not impact on decision performance in readily measurable ways, it reasonably may be expected to affect users' information acquisition and usage behavior in identifiable ways. Since a user of a new system, through his usage, has altered his behavior, such an assessment must be made in more substantive terms. For instance, the assessment may be made in terms of whether the system has motivated the user to assess his choice situation more systematically, to make greater use of relevant information, or to use a decision model in a situation where he may previously have made an intuitive choice.

The evaluation process

These four broad areas of value assessments - attitudes, value perceptions, information usage, and decision performance - may be measured at various stages of the design and development process to evaluate the MIS. They may be compared in two general ways:

1. Have attitudes, value perceptions, information usage behavior, and decision performance of users changed from the pre-system time period, through design and development, to the post-system period?

2. Has user behavior changed more than that of the non-user, or has the user benefited more than the non-user?
Temporal Changes - All of these possible comparisons undoubtedly would not be feasible in any single MIS development situation. However, such comparisons might be considered as possible approaches to system evaluation.

Temporal assessments, in many cases, will be made only on a "pre-post" basis using measurements made before system design and after system implementation. However, if such assessments are to be useful in changing the nature of a system or the design-development process for a system that is under development, assessments made during the design-development process will be necessary.
Pre-design assessments of user attitudes can, in
isolation, be useful to system designers. For
instance, a pre-design attitude assessment that
shows clear negative managerial attitudes
toward sophisticated management systems
might lead to a revised system concept. This
could mean greater inclusion of managers in the
design-development process, or "sales" efforts
that have the objective of achieving greater
appreciation for the system by its potential user-managers.
User Benefits - Evaluations of the second variety, which assess the behavior changes or benefits accruing to system users relative to non-users, are essential to ensuring that changes or benefits are not attributed to an MIS when they are really due to some other influence.
For instance, a general positive shift in attitudes
toward sophisticated computer systems might
be caused by publicity concerning the effectiveness of new computers. If such an assessment
were made for system users without comparison
with non-users, the shift might be attributed
to the system when it is, in fact, an overall
attitudinal change in all managers, users and
non-users alike.

The MIS Evaluation Demonstration Study

The theoretical evaluation model of Figure 1 has been applied in a system development context. As with most practical applications of the conceptual framework of Figure 1, the measurements which have been made were selected on the basis of feasibility, cost, and relevance to the objectives of the system. Therefore, not all assessments that are described in Figure 1 were made at each phase of the development.

The MIS being evaluated

The system is one which was designed to aid managers in understanding and resolving competitor-related strategic issues. It is referred to as a "Strategic Issue Competitive Information System" (SICIS). (See [7] for a more detailed description of the system.)
The SICIS system is a combination of "intelligent
MIS" as described by King [3] and a management decision support system as described by
Scott Morton [9]. It was designed to support the
unstructured strategic and policy planning
activities of an organization by providing the
user with problem-related information that may
go beyond that which he might otherwise identify as necessary for him to make a strategic
choice. The system facilitates competitive
analysis and the identification of opportunities in
the competitive environment for managers who
are involved in strategic planning activities.
A key feature is that it can be used by managers who may not be trained either in competitive analysis or in the use of sophisticated computer systems. If a system can be developed that is perceived to be useful and valuable by managers who may be untrained in these areas, it can serve as a valuable tool in supporting a strategic planning process.
The SICIS system utilizes "strategic issues" that
represent strategic problem-related questions
that the user can use to access competitive information in the SICIS database. The strategic
issues are modelled within the system so that
each user query concerning an issue evokes an
information structure model that defines, in
hierarchical form, the information that is relevant
to the query.
For instance, a manager using the system might make a query such as, "What is the capability of Competitor X to introduce a new product in his Line Y next year?" The manager would be presented with system responses which successively identify sub-classes of data determined to be critical to the resolution of this issue. For instance, such a query might produce a system response that would identify Competitor X's financial capability, production capability, marketing capability, and technological capability as elements of the overall issue.
The user could then identify specific areas of
interest, or he could request the total picture
related to his initial question. A user indication of
interest in the competitor's marketing capability
would produce a system response indicating the
availability of data on competitive distribution
channel capacity, field sales capability, service
capability, technical sales expertise, and a
variety of other marketing-related areas. The
system might also indicate its ability to provide projections of future market growth. Indications of interest in other areas would produce similar system responses that would indicate, successively after each new user response, more detailed sets of available data.
Thus, the SICIS system is much like a competitive database system in terms of the competitive information that it makes available to the
user. On the other hand, the system is analogous
to "corporate-model" systems in its utilization of
information structure models and its capabilities
for allowing the user to inquire in terms of problem-related issues [6]. The limited "intelligence" of the system stems from its use of information structure models to define relevant subclasses of data and to suggest categories of information to the user that are related to his broadly-defined issue inquiry, but which he may not himself be able to identify.
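To make the notion of an information structure model concrete, the sketch below shows one way such a hierarchy of issue-related data categories might be represented and traversed. The class names, issue text, and category labels are illustrative assumptions drawn loosely from the example above, not the actual SICIS design.

```python
# Illustrative sketch of an "information structure model": a strategic issue
# is modelled as a hierarchy of data categories, and each user indication of
# interest expands a category into more detailed sub-classes of available data.
# All names and categories are hypothetical, not the actual SICIS structure.
from dataclasses import dataclass, field
from typing import List


@dataclass
class InfoNode:
    """One category of competitive information relevant to an issue."""
    label: str
    children: List["InfoNode"] = field(default_factory=list)

    def respond(self) -> List[str]:
        """System response to an indication of interest in this category."""
        return [child.label for child in self.children]


# Hypothetical model for the issue "Can Competitor X introduce a new product
# in Line Y next year?"
issue = InfoNode("Competitor X: new-product capability in Line Y", [
    InfoNode("Financial capability"),
    InfoNode("Production capability"),
    InfoNode("Marketing capability", [
        InfoNode("Distribution channel capacity"),
        InfoNode("Field sales capability"),
        InfoNode("Service capability"),
        InfoNode("Technical sales expertise"),
        InfoNode("Projected market growth"),
    ]),
    InfoNode("Technological capability"),
])

# First query: the system identifies the elements of the overall issue.
print(issue.respond())
# Interest in marketing capability evokes the next, more detailed set of data.
print(issue.children[2].respond())
```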

The evaluation measures

The evaluation of SICIS utilized specific assessments in each of the four general areas. Assessments of attitudes and value perceptions were made using an instrument developed by Schultz and Slevin [8]. This instrument entails both items that can be related to general attitudes and specific indications of the perceived value of the system. Table 1 describes both varieties of these measures.
Information usage behavior was assessed in terms of the amount of use which was made of the system and the substantive nature of the usage. In the case of SICIS, the amount of use is measured by the number of queries that users make when they are faced with strategic issues or decisions for which the system can be of help. The substantive nature of system usage is assessed in terms of the correlation between the issues that are addressed during system usage and the inputs that are provided by managers during system design and development. User-managers were asked early in the participative design-development process to specify what issues would be most useful if incorporated into the system. Their inquiries into these issues using the system were frequently assessed and compared to their inputs. Such a matching provides an assessment of whether managerial, or user, inputs serve substantive purposes. An alternative hypothesis might be that managerial involvement in MIS design serves only psychological purposes in reducing anxiety and facilitating change.
Decision performance was assessed in terms of managerial performance on a series of specific competitive-oriented strategic choice problems that were prepared and responded to by participating managers. Their answers to these questions were evaluated by professors of business policy who served as independent, objective evaluators. They "graded" both the substance and the thought processes that were used to justify managerial responses.

Using these measures, each of the four general varieties of assessment was made for SICIS. Of course, as in any real MIS development effort, the situation did not permit all of the logically-possible temporal comparisons to be made. Neither did it permit the decision performance assessments to be made in terms of a sequence of real-world choices made over a period of time. However, those assessments that were feasible serve as a basis for a system evaluation which goes well beyond those that are usually made.

Table 1. Schultz-Slevin Questionnaire Factors

ATTITUDINAL FACTORS
F1 - Performance: effect of system on manager's job performance and performance visibility.
F2 - Interpersonal: interpersonal relations, communications, and increased interaction and consultation with others.
F3 - Changes: changes will occur in organizational structure and people dealt with.
F4 - Goals: goals will be more clear, more congruent to workers, and more achievable.
F5 - Support/resistance: system has implementation support, adequate top management, technical and organizational support, and does not have undue resistance.
F6 - Client/researcher: researcher understands management problems and works well with the clients.
F7 - Urgency: need for results, even with costs involved; importance to me, boss, top management.

MEASURES OF VALUE PERCEPTIONS OF SYSTEM USERS
D1 - Probability that you will use the system.
D2 - Probability that other managers will use the system.
D3 - Probability that the system will be a success.
D4 - Manager's evaluation of the worth of the system.
D5 - The level of accuracy you expect from the system.



The evaluation context

The MIS evaluation demonstration study was conducted for the SICIS system using 45 experienced manager-users in a simulated business environment. The managers were enrolled in a part-time MBA program and all had completed virtually all of the program requirements. The simulation was made a part of a "capstone" course, "Integrated Decision Making," which utilizes business policy cases as a primary teaching vehicle.
The class had completed six class hours plus
preparation time on a series of cases from the
appliance industry before they were involved in
the SICIS exercise. One group of subjects participated in the system design and then used the
system. A second group that did not participate
in the design also used the system. A third
"control" group did neither.
The subjects were all experienced managers
who were assigned to the three groups on a
randomized blocking basis [1]. This assignment
ensured that the groups were alike with
respect to their proportional composition of
managers with much, some, or little experience
in strategic competitive analysis - the focal
point for usage of the SICIS system.
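The following is a minimal sketch of the kind of randomized blocking assignment described above, assuming three experience strata and three groups of equal size; the function, group labels, and subject data are illustrative assumptions, not the procedure actually used in the study.

```python
# Hypothetical sketch of randomized blocking: within each experience block
# (much / some / little competitive-analysis experience), subjects are
# shuffled and dealt out to the three groups in turn, so every group gets
# roughly the same proportional composition of each block.
import random
from collections import defaultdict


def assign_by_blocks(subjects, groups=("design+use", "use only", "control"), seed=1):
    blocks = defaultdict(list)
    for name, experience in subjects:           # experience: "much" / "some" / "little"
        blocks[experience].append(name)
    assignment = {g: [] for g in groups}
    rng = random.Random(seed)
    for members in blocks.values():
        rng.shuffle(members)
        for i, name in enumerate(members):
            assignment[groups[i % len(groups)]].append(name)
    return assignment


# Example with fabricated subject labels only (no real study data).
subjects = [(f"mgr{i}", exp) for i, exp in enumerate(["much", "some", "little"] * 15)]
print({g: len(m) for g, m in assign_by_blocks(subjects).items()})
```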
The managers were not told that they were involved in an experimental evaluation of the system until the process was complete. Rather, the exercise was an integral part of the course, which the managers perceived to require unusual scheduling in order for them to gain adequate access to the computer terminals on which they used the SICIS system. After the experiment was completed and all assessments were made, the control group, which had neither participated in the design nor used the system prior to that point, was permitted to use the system. This ensured that all groups were ultimately treated alike and that the educational objectives of the course were realized.

Evaluation results

The results of the demonstration evaluation were expressed in terms of a series of "hypotheses" that were tested using the operational measures of attitudes, perceived value, information usage, and decision performance as described previously.
Attitudinal Hypothesis - The attitudinal hypothesis related to the positive change in attitudes
which might be expected from system users. The
change was measured on a pre/post basis. The
"pre" measurement was taken after the managers had only a brief introduction to the objectives and nature of the SICIS system. The "post"
measurement was made after they had an opportunity to make use of the system in responding to
a set of business policy questions that were
posed to them.
The attitudinal hypothesis, stated in null form,
was that attitudes toward the system would not
change favorably after use of the system in comparison to pre-use attitudes. Two groups of
managers, one composed of those who had used
the system and one of those who had not, were
administered the Schultz-Slevin [8] instrument
on a pre/post system usage basis. The control
group's attitudes did not change significantly in
terms of the "gain scores" (pre versus post
measurements) on any of the seven Schultz-Slevin attitudinal factors (F1-F7 in Table 1).

Table 2. Attitudinal Factor Hypothesis Tests

                          Experimental Group       Control Group
                          User Average             Non-users                Level of
                          (Standard Deviation)     (Standard Deviation)     Significance
F1 (Performance)          .313 (.193)              .035 (.345)              .0189
F2 (Interpersonal)        .865 (.60)               1.15 (.556)              .3290
F3 (Changes)              .166 (.454)              -.023 (.886)             .4159
F4 (Goals)                .623 (.420)              .447 (.30)               .1159
F5 (Support/Resist)       .260 (.438)              .315 (.277)              .2902
F6 (Client/Researcher)    2.40 (.303)              2.21 (.551)              .0567
F7 (Urgency)              .433 (.312)              .448 (.305)              .3997

However, the "experimental" group, which was
composed of those who had used the system,
had significant positive attitudinal change in the
"performance" factor (F1) and the "client
researcher" factor (F6) (see Table 1). The other
five attitudinal factors did not show a statistically
significant difference in gain scores for the
experimental group. Table 2 shows the group
means, standard deviations, and significant
levels for the two groups in terms of the gain
scores for the seven factors.
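The gain-score comparison reported in Table 2 can be illustrated with a short sketch: compute a pre/post gain score per subject for one factor and compare the user group with the control group using an independent two-sample t-test. The arrays below are fabricated placeholders; only the procedure, not the data, reflects the study.

```python
# Illustrative gain-score test for one attitudinal factor (e.g., F1):
# gain = post-use score minus pre-use score, compared between the user
# (experimental) group and the non-user (control) group.
# The numbers below are made up for illustration; they are not study data.
import numpy as np
from scipy import stats

users_pre  = np.array([3.1, 2.8, 3.4, 3.0, 2.9])
users_post = np.array([3.5, 3.2, 3.6, 3.4, 3.1])
ctrl_pre   = np.array([3.0, 3.2, 2.9, 3.1, 3.0])
ctrl_post  = np.array([3.0, 3.3, 2.9, 3.2, 3.1])

user_gain = users_post - users_pre
ctrl_gain = ctrl_post - ctrl_pre

# Welch two-sample t-test on the gain scores; halving the two-sided p-value
# gives a one-sided test when the observed difference is in the hypothesized
# (positive) direction.
t, p_two_sided = stats.ttest_ind(user_gain, ctrl_gain, equal_var=False)
print(f"mean gains: users {user_gain.mean():.3f}, control {ctrl_gain.mean():.3f}")
print(f"t = {t:.2f}, one-sided p = {p_two_sided / 2:.4f}")
```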
Perceived Value Hypothesis - A hypothesis of
perceived value was addressed using the
Schultz-Slevin [8] "dependent variables" in
Table 1 (D1-D5). Again, gain scores for users
and non-users were compared to test the hypothesis, in null form, that system users' value
perceptions would not change relative to the
value perceptions of non-users.
Table 3 shows the means, standard deviations,
and significance levels that reflect significant
differences in the "use" (D1) and "worth" (D4)
variables with no significance being attached to
the differences in changes in the other three
variables (D2, D3, and D5). This implies that users perceive the value of the system to be greater in terms of their own likelihood of using the system and in terms of an overall assessment of its worth than do non-users, but the same difference between users' and non-users' perceptions does not hold in terms of the usage of other managers, system success, and accuracy.¹

¹Note that the Schultz-Slevin instrument was initially developed for use in evaluating a forecasting model. The "accuracy" variable has much more relevance there than with SICIS. Because the authors wished to use an established and validated instrument, it was used despite the limited relevance of a few of its items and factors to SICIS.

Table 3. Perceived Value Hypothesis Tests

                  Experimental Group       Control Group
                  User Averages            Non-user Average         Significance
                  (Standard Deviation)     (Standard Deviation)     Level
D1 (Use)          .064 (.333)              -.07 (.300)              .078
D2 (Others)       .007 (.237)              -.01 (.246)              .363
D3 (Success)      .050 (.176)              -.014 (.23)              .242
D4 (Worth)        .014 (.092)              -.07 (.170)              .109
D5 (Accuracy)     .050 (.105)              .107 (.175)              .205

Information Usage Hypothesis - The hypothesis dealing with the amount of information usage involved a comparison of design participants and non-participants, rather than, as in the previous hypotheses, of system users versus non-users. No pre-versus-post information usage evaluation was performed.

One group of managers participated in system design by critically evaluating a list of strategic issues that the system might be designed to address, by suggesting other issues which might be incorporated into it, and by speculating on the relevant information structure models that might be built into the system for each issue. Another group had no such participation opportunity.

Both groups used the system as an aid in responding to a series of business policy questions. The two groups were compared with respect to the amount of use, i.e., number of inquiries, that they made of the system. This was measured using protocols that were generated by each user from the interactive computer system on which the system was implemented. The two groups, participants and non-participants, showed no significant difference with respect to the amount of system usage in this study. Participants made an average of 7.73 inquiries while non-participants made an average of 7.29 inquiries - an insignificant difference.

Clearly this does not suggest that participation influenced the willingness to use the system, as measured in terms of the amount of usage. However, this result might be viewed as a positive evaluation of the system since it implies that less knowledgeable managers, the non-participants, found the system to be as easy to use as did those managers who had participated in its development. Such a characteristic is a



desirable one for a system of the SICIS variety, especially since it is intended for use by managers who are not trained in either computer science or competitive analysis.
A second hypothesis related to information
usage concerns the degree to which the substantive input of managers, as provided during
the design-development process, is related to
the substance of actual system use. In other
words, do managers actually use the system for
obtaining information on those topics which they
felt to be important during the system design
phase? Presumably, a correlation between substantive input and usage would reflect well on
both the design process and the system.
As a part of the design process, participants
were asked to rate each issue on a five-part,
Likert-type scale in terms of its relevance to
the simulated business context. Subsequently,
after the system had been developed and the
design participants had used the system, their
protocols were analyzed to ascertain which
issues had been inquired into by them in
responding to the business policy questions.
The (null) hypothesis that the inputs provided in
the design process would not be reflected in
system usage was rejected using a Kendall Rank
Correlation test. This test assesses the similarity
in rank between the values placed on the issues
during the design process and the rank in terms
of the frequency with which these issues were
actually requested by system users. The Kendall
coefficient of correlation, Tau, was 0.6277 giving
a significance level of 0.0016. Thus, the null
hypothesis was rejected and it was concluded
that there is good reason to believe that the two
rankings are highly related.
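A hedged sketch of how a Kendall rank correlation of this kind can be computed follows; the two rankings below are fabricated, and the original analysis presumably used a hand computation in the style of Siegel [10] rather than SciPy, but the statistic being tested is the same.

```python
# Kendall rank correlation between (a) the relevance ratings managers gave
# the strategic issues during design and (b) how frequently those issues
# were actually queried during system use. Values below are fabricated.
from scipy import stats

design_relevance_rating = [5, 4, 4, 3, 5, 2, 3, 1, 2, 4]   # per-issue design input
usage_frequency         = [9, 7, 8, 4, 10, 2, 5, 1, 3, 6]  # per-issue query counts

tau, p_value = stats.kendalltau(design_relevance_rating, usage_frequency)
print(f"Kendall tau = {tau:.4f}, significance level = {p_value:.4f}")
# The study reports tau = 0.6277 with a significance level of 0.0016,
# rejecting the null hypothesis that design inputs are unrelated to usage.
```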
This finding serves to validate both a participative design process and the system that
emanates from it in terms of managers actually
"knowing what they wish to get from an MIS."
As such, it is an important criterion which can
be used for evaluating SICIS.
Decision Performance Hypothesis - The "bottom line" assessment of an MIS, if it is truly one which is to support managerial decision-making, is improved decision performance. Such systems ultimately are intended, along with other objectives, to lead to improved decision-making.

System users were compared with non-users in the demonstration study to determine if there
were significant differences with respect to their
performance on a number of business policy
questions related to the simulated business.
Three business policy professors evaluated
these responses. The hypothesis test was based
on the overall average across the three
professors' grades because the three were
shown to be consistent - e.g., an inter-judge
reliability test showed that the three professors
did not differ significantly in their ratings.
The overall ability of the user and non-user
groups was also shown to be comparable in
terms of their overall quality point average in a
Master of Business Administration (MBA) program that most were completing. Thus, at least in
terms of this measure, the user and non-user
groups were similar.
The average score achieved by the user and nonuser groups was 21.04, with a standard deviation
of 5.28, and 21.64, with a standard deviation of
4.12, respectively. This leads to the acceptance
of the null hypothesis that system users and
non-users do not perform differently in
addressing decision-related policy issues.
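Because only group means and standard deviations are reported, this comparison can be reproduced approximately from those summary statistics alone. The group sizes below are assumptions (the 45 subjects split across three groups), not figures given in the article, so the resulting p-value is indicative only.

```python
# Two-sample t-test on decision-performance scores computed directly from
# the reported summary statistics (mean, standard deviation). Group sizes
# are assumed to be roughly 15 users and 15 non-users.
from scipy import stats

t, p = stats.ttest_ind_from_stats(
    mean1=21.04, std1=5.28, nobs1=15,   # system users
    mean2=21.64, std2=4.12, nobs2=15,   # non-users
    equal_var=False,
)
print(f"t = {t:.2f}, p = {p:.3f}")      # far from significance: null hypothesis retained
```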
Thus, the "bottom line" basis for evaluating the
SICIS system did not bear out its value in this
demonstration. This may be interpreted in
various ways. Clearly, behavior in the simulated business environment in which the evaluation was performed may not be closely related to behavior in the real world, which may raise questions as to the external validity of the experiment. Also, this result may be due to inadequate measurement of decision performance. Such an assessment is at odds with the typical purpose of a decision-support MIS, which is to provide continuing support for a sequence of decisions. Hence, the partial application of the evaluation methodology in this demonstration study may well be at fault in failing to adequately assess decision performance.

Conclusion

The study demonstrates the potential feasibility of making MIS evaluations on the basis of a comprehensive conceptual model using measures of attitudes, value perceptions, information usage, and decision performance. Although the demonstration study was conducted in a simulated environment, measurements which are similar, although not necessarily identical, may be made as a part of real-world MIS design-development-implementation processes.


The SICIS system which was the focus of this study was not evaluated as highly as might have been hoped for by its developers, which is likely to be the case in most MIS evaluation efforts. However, evaluations such as these serve to guide further MIS evaluation efforts as well as to suggest system refinements and characteristics which future MIS's should entail.

The MIS evaluation process is thus a dynamic one in which each systematic evaluation effort can lead both to better information systems and to improved evaluation methodologies. Hopefully, this dual payoff from system evaluations will lead more organizations and system developers to conduct and report formal MIS evaluations.

References

[1] Cox, D. R. Planning of Experiments, John Wiley, 1958.
[2] Davis, G. B. Management Information Systems: Conceptual Foundations, Structure and Development, McGraw-Hill, 1974.
[3] King, W. R. "Methodological Optimality in OR," OMEGA, February 1976.
[4] Murdick, R. G. "MIS Development Procedures," Journal of Systems Management, December 1970, pp. 22-26.
[5] Murdick, R. G. and Ross, J. E. Information Systems for Modern Management (2nd ed.), Prentice-Hall, 1975.
[6] Naylor, Thomas H. and Schauland, Horst. "A Survey of Users of Corporate Planning Models," Management Science, Vol. 22, No. 9, May 1976, pp. 927-937.
[7] Rodriguez, J. I. and King, W. R. "Strategic Issue Competitive Information Systems," Long Range Planning, publication pending.
[8] Schultz, Randall and Slevin, Dennis (eds.). Implementing Operations Research/Management Science (see Chapter 7), Elsevier, 1975.
[9] Scott Morton, M. "Strategy for the Design and Evaluation of Interactive Display Systems for Management Planning," in Kriebel, C., Van Horn, R., and Heames, T. (eds.), Management Information Systems: Problems and Perspectives, Pittsburgh, 1971.
[10] Siegel, Sidney. Nonparametric Statistics, McGraw-Hill, 1956.
[11] Triandis, H. C. Attitude and Attitude Change, John Wiley, 1971.

About the Authors

William R. King is Professor of Business Administration in the Graduate School of Business at the University of Pittsburgh. He is the author of nine books and more than 70 technical papers in the fields of MIS, Planning, and Systems Analysis. His most recent book, Strategic Planning and Policy, was published in 1978.

Jaime I. Rodriguez is manager of systems planning for MASECA Industrial Group in Monterrey, Mexico. He received his Ph.D. in MIS from the University of Pittsburgh in 1977.
