
Source: https://www.researchgate.net/publication/233832692

An Empirical Test of the DeLone-McLean Model of Information System Success

Article in ACM SIGMIS Database · June 2005
DOI: 10.1145/1066149.1066152 · Source: DBLP
Author: Juhani Iivari, University of Oulu


An Empirical Test of the DeLone-McLean Model of Information System Success

Juhani Iivari
University of Oulu

Abstract

This paper tests the model of information system success proposed by DeLone and McLean using a field study of a mandatory information system. The results show that perceived system quality and perceived information quality are significant predictors of user satisfaction with the system, but not of system use. Perceived system quality was also a significant predictor of system use. User satisfaction was found to be a strong predictor of individual impact, whereas the influence of system use on individual impact was insignificant.

ACM Categories: J.1, K.6.2

Keywords: Information System Success, Information System Quality, System Quality, Information Quality, User Satisfaction, Use, Individual Impact
Introduction
Seddon et al. (1999) estimate that the total annual
worldwide expenditure on information technology (IT)
probably exceeds one trillion US dollars per year and
is growing at about 10% annually. At the same time,
information systems are pervading almost all aspects
of human life. In view of the high investments in IT
and its ubiquity, the success of such investments and
the quality of the systems developed is of the utmost
importance both for research and in practice.
This paper focuses on the success of individual
information system applications. Following
Gustafsson et al. (1982), we interpret an information
system (IS) as a computer-based system that
provides its users with information on specified topics
in a certain organizational context. DeLone and
McLean (1992) proposed in their influential paper a
framework for IS success measures that distinguishes
system quality, information quality, user satisfaction,
use, individual impact and organizational impact.
They also suggested a causal model for the success
measures.
Acknowledgment

I wish to express my gratitude to Minna Perälä, M.Sc., for the data collection, and especially to Prof. Wynne Chin for his comments and for helping me to use PLS. This paper was submitted in February of 2002. Wynne Chin served as the Senior Editor.

¹ The Science Citation Index, Social Science Citation Index and Arts & Humanities Index identify 235 references to the article (as of January 10, 2002).

The DATA BASE for Advances in Information Systems - Spring 2005 (Vol. 36, No. 2)

Despite the considerable interest in the DeLone-McLean model¹, there is a dearth of studies that test it empirically. DeLone and McLean (2002) identify only sixteen empirical studies that have explicitly tested some of the associations of the original DeLone-McLean model. Among them Seddon and Kiew (1994) revised it considerably, by deleting system use and substituting perceived usefulness. In our view perceived usefulness reflects more the individual impact (Rai et al., 2002), i.e. the impact of the system
on a user's performance of his/her job.² The idea of this paper is to test the DeLone-McLean model while sticking more faithfully to its original form. Leidner (1998) reports a partial test of the model in the case of Executive Information Systems, and more recently, Rai et al. (2002) tested both the DeLone-McLean (1992) model and the Seddon (1997) model, reporting reasonable support for both.

The composition of this paper is as follows: Section 2 discusses the theoretical background; Section 3 introduces the research method; Section 4 describes the results; Section 5 discusses the results; and Section 6 concludes the paper.

Theoretical Background

The DeLone-McLean Model for IS Success

The DeLone-McLean model for IS success, described in Figure 1, assumes that system quality and information quality, individually and jointly, affect user satisfaction and use. It also posits use and user satisfaction to be reciprocally interdependent, and presumes them to be direct antecedents of individual impact, which should also have some organizational impact.

DeLone and McLean (1992) characterize system quality as desired characteristics of the information system itself, and information quality as desired characteristics of the information product. More concretely, they incorporate four scales from the Bailey-Pearson (1983) instrument into system quality (convenience of access, flexibility of the system, integration of the system and response time) and nine scales into information quality (accuracy, precision, currency, timeliness, reliability, completeness, conciseness, format and relevance).

Much of the research on User Information Satisfaction has concerned users' satisfaction with specific features of a system (Doll & Torkzadeh, 1988; Iivari & Koskela, 1987) or IS function (Bailey & Pearson, 1983; Baroudi & Orlikowski, 1988), covering features of both system quality and information quality. Even though the inclusion of service quality in the updated DeLone and McLean (2002) model reflects IS functions or IS organizations rather than IS application, the following will focus on the success of IS applications only. User satisfaction in DeLone and McLean (1992) refers to the overall user satisfaction (Seddon & Kiew, 1994) measured independently of system quality and information quality. Otherwise the relationship between system/information quality and user satisfaction would be an artifact of measurement.

Seddon (1997) claims that the DeLone-McLean model is ambiguous in the sense that one component of it, use, has three potential meanings (Table 1). His conclusion is that only Meaning 1 is justified in the light of the objections listed in the second column of Table 1. To me all these points of criticism seem questionable. His criticisms of Meaning 2 and Meaning 3 refer to the distinction between a variance model and a process model (Mohr, 1982).³ Without going into the details of this distinction, it is obvious that even though IS use as a process is assumed to lead to individual impact and organizational impact, it is not necessary to regard it as a discrete event to be stated (use vs. non-use), as implied by process theories (Mohr, 1982).⁴ This paper interprets use as the amount of use, which may be considered one measure of IS success.

DeLone and McLean (1992) characterize individual impact as "an indication that an information system has given a user a better understanding of the decision context, has improved his or her decision-making productivity, has produced a change in user activity, or has changed the decision maker's perception of the importance or usefulness of the information system" (p. 69). Seddon (1997) reinterprets individual impact to mean benefits accruing to individuals from use. Even though both DeLone and McLean (1992) and Seddon (1997) implicitly presuppose that individual impact is of benefit to the user, this paper interprets individual impact as referring to a unit of analysis rather than the beneficiary of the impact.⁵

² Davis' original measure for perceived usefulness was developed to assess ex ante expectations of individual impact. We focus here, however, on ex post individual impact measured after six months' experience of use.

³ As Mohr (1982, p. 44) points out, he uses the term "process theory" in a highly specific meaning. "Process theory" does not imply that variance theories cannot address processes (e.g. IS acceptance).

⁴ We interpret the DeLone-McLean model as based on the reasoning that a system that is not used at all does not have any individual or organizational impact. On the other hand, the DeLone-McLean model also allows the hypothesis that more use is associated with more individual impact, which follows the "logic" of variance theories.

⁵ The likely explanation for this assumption is that the above authors implicitly assume that use of the system is voluntary. In that case a user will hardly continue to use a system if he or she does not perceive its use as beneficial. However, according to my reading, DeLone and McLean (1992) do not explicitly restrict their model to voluntary systems, although they do note that actual use makes sense only when system use is voluntary.
[Figure 1 shows the constructs System Quality, Information Quality, Use, User Satisfaction, Individual Impact and Organizational Impact, with arrows from the two quality constructs to Use and User Satisfaction, arrows between Use and User Satisfaction, arrows from both of these to Individual Impact, and an arrow from Individual Impact to Organizational Impact.]

Figure 1. The DeLone-McLean Model for IS Success
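The structure just described can also be written down explicitly. The following sketch is ours, not the authors': the construct names come from DeLone and McLean (1992), but the code is purely illustrative. It encodes Figure 1 as a directed graph and derives, for any construct, its direct antecedents and whether one construct can influence another along the hypothesized paths:

```python
# Illustrative sketch (not from the paper): the DeLone-McLean success
# model of Figure 1 encoded as a directed graph of construct names.
MODEL = {
    "system quality": ["use", "user satisfaction"],
    "information quality": ["use", "user satisfaction"],
    "use": ["user satisfaction", "individual impact"],
    "user satisfaction": ["use", "individual impact"],
    "individual impact": ["organizational impact"],
    "organizational impact": [],
}

def antecedents(construct):
    """Direct antecedents of a construct, i.e. constructs with an arrow into it."""
    return sorted(src for src, dsts in MODEL.items() if construct in dsts)

def affects(src, dst, seen=None):
    """True if src influences dst directly or through intermediate constructs."""
    seen = seen or set()
    if src in seen:           # guard against the use <-> satisfaction cycle
        return False
    seen.add(src)
    return dst in MODEL[src] or any(affects(mid, dst, seen) for mid in MODEL[src])
```

For example, `antecedents("individual impact")` returns `["use", "user satisfaction"]`, mirroring the model's claim that use and user satisfaction are the direct antecedents of individual impact, while `affects("system quality", "organizational impact")` is true only via the intermediate constructs.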

Meaning 1: Benefits from use
- Seddon's objection: (The only justified meaning)
- Counter-objection: What is the meaning of "use" in "benefits from use"?

Meaning 2: Use as the dependent variable in a variance model of future use
- Seddon's objection: IS success must bring benefits to somebody.
- Counter-objection: Can't a system (e.g. a piece of free software such as Linux) be genuinely considered a success when it is widely used without any consideration of its benefits or disadvantages to different stakeholders?

Meaning 3: Use as an event in a process leading to individual or organizational impact
- Seddon's objection: IS use is a process construct that should not have any place in a variance model predicting IS success.
- Counter-objection: Even though IS use as a process is assumed to lead to individual impact and organizational impact, it is not necessary to regard it as a discrete event to be stated (use vs. non-use), as implied by process theories.

Table 1. The Three Meanings of IS Use in the DeLone-McLean Model, According to Seddon (1997)
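The empirical evidence reviewed in the following sections is reported partly as Pearson correlations (r) and partly as standardized path coefficients (ß). A small self-contained sketch (synthetic numbers of our own, not data from any of the cited studies) illustrates why the two are directly comparable in the single-predictor case: once both variables are standardized, the OLS slope equals the correlation.

```python
# Illustrative sketch with synthetic data: for one predictor, the OLS
# slope on standardized variables equals the Pearson correlation.
import random
import statistics

random.seed(1)

# Synthetic "user satisfaction" driven partly by synthetic "system quality".
quality = [random.gauss(0, 1) for _ in range(500)]
satisfaction = [0.6 * q + random.gauss(0, 0.8) for q in quality]

def standardize(xs):
    mean, sd = statistics.mean(xs), statistics.pstdev(xs)
    return [(x - mean) / sd for x in xs]

def pearson_r(xs, ys):
    xs, ys = standardize(xs), standardize(ys)
    return sum(x * y for x, y in zip(xs, ys)) / len(xs)

def ols_slope(xs, ys):
    """OLS slope of ys on xs; coincides with pearson_r once both are standardized."""
    xs, ys = standardize(xs), standardize(ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / sxx

r = pearson_r(quality, satisfaction)
beta = ols_slope(quality, satisfaction)
assert abs(r - beta) < 1e-6
```

With the generating coefficient above, r comes out close to 0.6, and the assertion confirms that r and ß coincide; with several predictors, as in the path analyses cited below, ß instead reflects each predictor's contribution controlling for the others.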

Following DeLone and McLean (1992) and Rai et al. (2002), we will specifically focus in this paper on the effect of an information system on the work performance of individual users as measured by perceived usefulness.⁶

⁶ We do not claim that perceived usefulness covers all aspects of individual impact. DeLone and McLean (1992) specifically focus on decision-makers as users of an information system. Assuming that the work of decision-makers is to make decisions, perceived usefulness essentially covers the impact on decision-making productivity. Perceived usefulness nevertheless misses those aspects of individual impact which do not directly concern work performance, e.g. the impact on the quality of work (Iivari, 1997).

Hypotheses

DeLone and McLean (1992) introduce the model shown in Figure 1 primarily as a causal-explanatory model of how system quality and information quality affect use and user satisfaction, how use and user satisfaction, affecting each other reciprocally, are direct antecedents of individual impact, and how individual impact leads to organizational impact. As an alternative, one could emphasize more the predictive nature of the model, how the preceding variables help to predict the dependent variables, even though the causal explanation of the relationship is not totally clear. The criticism of Seddon (1997), even though we do not accept it in its entirety, shows that some of the assumed causal relationships in the DeLone-McLean model are arguable and the model is incomplete. In particular, the model misses the feedback loops from individual impact and organizational impact to user satisfaction and use. We interpret the DeLone-McLean model primarily as a predictive one that is worth testing empirically. Based on the DeLone-McLean model, we propose to test the hypotheses depicted in Figure 2 in the present paper.

It is hypothesized in Figure 2 that system quality and information quality are positively associated with user satisfaction. Hypothesis H1 assumes that ceteris paribus the higher the system quality is perceived to be by users, the more satisfied they are with the system. Similarly, Hypothesis H2 posits that ceteris paribus the higher the information quality is perceived to be by users, the more satisfied they are with the system. If user satisfaction is interpreted as an attitude (Baroudi et al., 1986), hypotheses 1-2 essentially argue that the attitude is dependent on perceptions of the attitude object (Fishbein & Ajzen,
1975; McGuire, 1969). There is considerable empirical evidence for these hypotheses.

[Figure 2 shows the model to be tested: System quality and Information quality as predictors, with hypothesized positive paths H1 (system quality to user satisfaction), H2 (information quality to user satisfaction), H3 (system quality to actual use), H4 (information quality to actual use), H5a (user satisfaction to actual use), H5b (actual use to user satisfaction), H6 (user satisfaction to individual impact) and H7 (actual use to individual impact).]

Figure 2. The Model to Be Tested

Many of the instruments developed to measure User Information Satisfaction (UIS) in terms of attributes such as system quality and information quality (e.g. Bailey & Pearson, 1983; Ives et al., 1983; Doll & Torkzadeh, 1988) have used an independent measure of overall user satisfaction to test the predictive validity of the measure. They have consistently reported significant correlations between UIS or its factors and the independent measure of overall satisfaction. Doll and Torkzadeh (1988), for example, found correlations varying between 0.51 and 0.65 between the twelve items of their end-user satisfaction measure and the criterion variable that can be interpreted as overall satisfaction. The correlations between the five factors (content, accuracy, format, ease of use and timeliness) and the criterion varied between 0.55 and 0.69, and that between the 12-item instrument and the criterion variable was 0.76. Further, Seddon and Kiew (1994) found in their path analysis that information quality and system quality are significant determinants of overall user satisfaction (both path coefficients significant at the level 0.001).⁷ Similarly Rai et al. (2002) report significant path coefficients between ease of use (used to measure system quality) and user satisfaction (ß = 0.30, p ≤ 0.01) and between information quality and user satisfaction (ß = 0.52, p ≤ 0.01) in their LISREL analysis of the DeLone-McLean model.

⁷ System quality was measured using two items from Doll and Torkzadeh (1988), four items of 'ease of use' from Davis (1989) and three additional items. The measure of information quality consisted of ten items from the Doll and Torkzadeh (1988) instrument. (Overall) satisfaction with the system was measured using four items: how adequately the application meets the information processing needs, how efficient it is, how effective it is and overall satisfaction.

DeLone and McLean (1992) hypothesize that the higher the system quality, the more the system is used (Hypothesis H3) and the higher the information quality, the more the system is used (Hypothesis H4).

At a general level, there is a considerable body of empirical research on the relationship between UIS (measured in terms of attributes such as system quality and information quality) and IS use, which suggests that the relationship is positive but relatively weak (Amoroso & Cheney, 1991; Barki & Huff, 1985; Baroudi et al., 1986; Ginzberg, 1981; Igbaria, 1990; Igbaria & Zviran, 1991; Nelson & Cheney, 1987; Srinivasan, 1985). Baroudi et al. (1986), for example, found a correlation of 0.28 between UIS and IS use, and Barki and Huff (1985) 0.39. More specifically related to Hypothesis H3, the Technology Acceptance Model (Davis et al., 1989) predicts that perceived ease of use, as an aspect of system quality (DeLone & McLean, 1992), is a significant direct and indirect determinant of use, the indirect effect being channelled through perceived usefulness. After a decade of intensive research into TAM, there is significant empirical evidence of the indirect effect of perceived ease of use (Davis, 1989; Davis et al., 1989; Mathieson, 1991; Adams et al., 1992; Davis et al., 1992; Igbaria et al., 1995; Igbaria & Iivari, 1995; Chau, 1996; Igbaria et al., 1996; Szajna, 1996; Taylor & Todd, 1995; Gefen & Straub, 1997; Igbaria et al., 1997; Straub et al., 1997; Gefen & Keil, 1998; Karahanna & Straub, 1999; Karahanna et al., 1999; Venkatesh & Davis, 2000), whereas the direct effect is much more controversial (Gefen & Straub, 2000). Consistent with the above, Rai et al. (2002) report a relatively low path coefficient between ease of use and system dependence (used to measure system use) (ß = 0.09, p ≤ 0.10) and a somewhat higher coefficient between information quality and system dependence (ß = 0.18, p ≤ 0.01). Leidner (1998) reports positive associations between perceived EIS information quality and frequency of EIS use (ß = 0.38, p ≤ 0.01) and between perceived EIS information quality and EIS use for internal monitoring (ß = 0.34, p ≤ 0.01), but not between EIS information quality and EIS use for external monitoring nor between EIS information quality and EIS use for communication. Despite these somewhat inconsistent findings, we will follow the original model of DeLone and McLean (1992) as expressed in Hypotheses H3 and H4.

As mentioned above, DeLone and McLean (1992) posit that use and user satisfaction are reciprocally interdependent. To test this reciprocal dependence fully, one should have a piece of research in which use and user satisfaction are followed over time. This paper is confined to a single point in time, however, although hypotheses 5a and 5b, which are to be tested separately, do attempt to capture this reciprocal dependence.

Hypothesis H5a predicts that the more satisfied users are with the system, the more they will use it. Baroudi
et al. (1986) suggest that if user satisfaction is interpreted as an attitude, the Theory of Reasoned Action (Fishbein & Ajzen, 1975) supports the model that user satisfaction will influence intentions to use the system and actual use (Hypothesis 5a). As an alternative, they identify Dissonance theory (Fishbein & Ajzen, 1975), which suggests that IS use leads to user satisfaction (Hypothesis H5b). The results of path analysis supported the model that user satisfaction leads to system use rather than vice versa (Baroudi et al., 1986). Interestingly, they explain the model as follows: "The model assumes that as use demonstrates that a system meets a user's needs, satisfaction with the system should increase, which should further lead to greater use of that system. Conversely, if system use does not meet the user's needs, satisfaction will not increase and further use will be avoided." This explanation suggests that, causally, IS use precedes user satisfaction, or that the relationship is reciprocal, as assumed by DeLone and McLean (1992). In fact, Torkzadeh and Dwyer (1994) found a path coefficient of 0.21 (p < .05) from user satisfaction to usage and a coefficient of 0.37 (p < .05) from usage to user satisfaction in their LISREL analysis. Rai et al. (2002) report a significant path coefficient (ß = 0.35, p ≤ 0.01) from user satisfaction to system dependence, used to measure system use.

The relatively low association between actual use and user satisfaction may be explained by the complexity of the relationship. Chin and Lee (2000) propose that overall user satisfaction is composed of expectation-based satisfaction and desire-based satisfaction. Expectation-based satisfaction is a direct and multiplicative combination of the overall expectation discrepancy between prior expectation and post hoc perceptions of the system and the overall evaluation of this expectation discrepancy. Similarly, desire-based satisfaction is a direct and multiplicative combination of the overall desire discrepancy between prior desires and post hoc perceptions of the system and the overall evaluation of this desire discrepancy.

Seddon (1997) reasons that although user satisfaction may have an association with IS use in a steady state, this will break down in a situation of system replacement. Favourable satisfaction with the old system is not sufficient "to cause use of the old system". Although this may be true, it is obvious that user satisfaction is never the only cause of system use. At a minimum, the system must be accessible to a user. Seddon's (1997) reasoning assumes that user satisfaction with the old system is not affected by the new system. To the author's knowledge, this issue has not been studied empirically, but one could speculate that a user would compare the old and new systems. After this comparison it seems quite unlikely that a user's satisfaction will not be affected by the new system. One could conjecture that dissatisfaction with the new system might lead to higher satisfaction with the old system, whereas high satisfaction with the new system may reduce satisfaction with the old system.⁸ Further, if the user is dissatisfied with the new system after the replacement, high satisfaction with the old system may explain the likelihood of its future use.

⁸ Relative advantage, i.e. "the degree to which an innovation is perceived as being better than the idea it supersedes" (Rogers, 1995, p. 212) captures this idea of comparing the old and new systems. Our conjecture above would suggest that relative advantage is a dynamic concept that emerges in the form of an interaction between the levels of satisfaction with the two systems.

Contrary to Chin and Lee (2000), Seddon (1997) also assumes that user satisfaction reflects past experience with the system and does not include expectations. Iivari (1987) points out, however, that if user satisfaction is interpreted as the user's belief in the degree to which the information system (at the IS schema level) is capable of satisfying his or her information requirements (at the IS schema level) (Ives, et al., 1983), the distinction between ex post (experience-based) and ex ante (expectation-based) interpretation disappears. At the same time, he argues that in this case user satisfaction is consequent upon acts of using a system rather than being an antecedent, especially when one takes into account alternative and complementary information systems. The explanation is that information needs in a specific use situation may differ from information requirements imposed on the system at the schema level. A manager, when he or she has time, may prefer to visit the shop floor to inquire directly from the employees about the state of production, because he or she wishes to receive "soft" knowledge of a kind that a formal information system is assumed to be incapable of supplying. He or she may nevertheless be totally satisfied with the reporting system and use it when there is no opportunity to visit the shop floor. At the same time, Iivari (1987) acknowledges that user satisfaction may correlate highly with IS use. We also claim that it may predict IS use.

Hypothesis 6 predicts that user satisfaction will be positively associated with individual impact. The meaning of this hypothesis depends on the interpretation of user satisfaction which users employ. Goodhue (1986) defines "IS satisfactoriness" as the individual's belief in the correspondence or fit between job requirements and IS functionality, and "IS satisfaction" as the correspondence between the information system's intrinsic benefits of use, such as providing a sense of accomplishment due to a crisp, attractive output, and the needs of the individual. Iivari
and Ervasti (1994) propose that when user satisfaction is interpreted in the sense of "IS satisfactoriness" rather than "IS satisfaction," the link between user satisfaction and IS effectiveness can be established directly without the intervening variable of IS use. If user satisfaction is the user's best estimate of the match between the requirements imposed on the system by his or her work and the system capabilities, the positive association between user satisfaction and individual impact is quite understandable. If the user's interpretation of these requirements and estimate of the match between them and the system capabilities are correct, increased user satisfaction should be positively associated with task performance.

There are relatively few studies that have investigated empirically the relationship between user satisfaction and individual impact, especially when we focus on the effect on individual job performance. DeLone and McLean (1992) identify four papers that comprise both user satisfaction and individual impact criteria, among which only Cats-Baril and Huber (1987) actually address the relationship, reporting correlations between two measures of satisfaction on the one hand and the quality and productivity of decision-making on the other hand. Productivity was measured in terms of the number of objectives generated, the number of alternatives generated and the number of strategies prioritized. The authors do not report the significances, but all the correlations were negative, albeit relatively low in absolute terms. One of the eight correlations was just over 0.30 in absolute terms. More recently, Gatian (1994) conducted a LISREL analysis of the relationship between user satisfaction, decision performance and user efficiency in the case of direct and indirect users of the same system in 39 organizations. In the case of direct users, she found a close association between user satisfaction and decision performance (r = 0.64, ß = 0.64) and similarly between user satisfaction and efficiency (r = 0.68, ß = 0.97), but in the case of indirect users only the relationship between user satisfaction and user efficiency was of interest. This was also found to be significant (r = 0.81, ß = 0.81). Her findings may partly be inflated by the fact that her user satisfaction measure was adapted from the Jenkins-Ricketts (1979) instrument and decision performance from Sanders (1985). User efficiency was also evaluated in terms such as user's data processing correctness, report preparation and distribution timeliness. Etezadi-Amoli and Farhoomand (1996) report that six factors of end user computing satisfaction (documentation, ease of use, functionality, quality of output, support and security) explained 50% of the variance in end user performance. They also found that satisfaction with the quality of the output and satisfaction with the functionality of the system were the most significant predictors, whereas documentation was the least significant. The paper of Seddon and Kiew (1994) also analyses the relationship between user satisfaction and individual impact when the latter is interpreted as perceived usefulness. They report a correlation of 0.70 (p < 0.001) between them.⁹

⁹ The high correlation between overall satisfaction and perceived usefulness may be explained by the fact that two of the four items of user satisfaction, concerning the efficiency and effectiveness of the application, may be interpreted in terms of perceived usefulness.

Finally, Hypothesis H7 predicts that IS use is positively associated with individual impact. Theoretically, this relationship can be argued based on the reasoning that a system that is not used at all will not have any impact on individual performance. Furthermore, one could also expect that a system that is used more will have higher impact on users' performance. Chin and Marcolin (2001) point out, however, that the impact of initial usage on individual productivity may differ from that of continued usage. The interest of the present paper lies in the continued use of a system rather than its initial use. There seems to be a paucity of research into the relationship between IS use and individual impact, but the existing evidence seems to support this hypothesis. DeLone and McLean (1992) identify seven studies that address both system use and individual impact. Among these, Srinivasan (1985) reports time per session (connect time) and user type (light, average, heavy), among four indicators of system use, to be significantly correlated with the problem solving capabilities of the user (p ≤ 0.10 and p ≤ 0.01, respectively). Snitkin and King (1986) found a significant association between system usage and perceived effectiveness (p ≤ 0.05). More recently, Iivari (1996) found CASE usage to have a significant effect on the productivity of individual users (systems developers) and on the quality of their products. Leidner (1998) reports that EIS use to monitor internal information and EIS use to monitor external information were both positive and significant predictors of individual decision making speed (ß = 0.34, p ≤ 0.05; ß = 0.24, p ≤ 0.05, respectively), mental mode enhancement (ß = 0.38, p ≤ 0.01; ß = 0.28, p ≤ 0.05, respectively) and extent analysis in decision making (ß = 0.36, p ≤ 0.01; ß = 0.25, p ≤ 0.05, respectively). On the other hand, frequency of EIS use was not a significant predictor of any of the three aspects of individual impact.

On the other hand, there is an abundant literature on the effect of perceived usefulness on system use (Lee, 2003; Ma and Liu, 2004), where perceived usefulness is interpreted as an ex ante belief (expectation) regarding "the degree to which a person believes that using a particular system would enhance
his or her job performance" (Davis, 1989). As noted above, there is significant empirical evidence that perceived usefulness has a positive effect on attitudes towards use, behavioural intention to use and actual use. Perceived usefulness as an antecedent of system use serves to point out that the relationship between system use and individual impact is probably not uni-directional but that a user's experience of individual impact (in terms of its effect on his/her job performance) will affect his/her belief in the perceived usefulness of a system and in that way its use.

Research Methodology

The Field Study¹⁰

The model of Figure 2 was tested as a part of a larger, longitudinal field study in Oulu City Council, which is a municipal organization of about 7500 employees. The organization in question formed a concrete setting where some of its employees were working on the adoption of a new information system and shaping its organizational acceptance. We saw it as an opportunity to study a specific case of "real" users' acceptance of a "real" system in "real" time. Even though the study focuses only on one system in one organization, the research setting can be expected to increase the internal validity of the study in particular and to some extent its external validity as well (Jenkins, 1985). The choice of one organization controls for possible confounding effects of organizational level variables such as institutional constraints and infrastructure arrangements, which may have an influence on individual adoption and acceptance, making it more likely that micro-level effects will be detected (Karahanna et al., 1999).

Oulu City Council renewed its financial and accounting systems at the beginning of 1997 as an outcome of a nation-wide reform of municipal financial and accounting systems. As a part of this reform, it acquired an application package from a major vendor in Finland, including accounting, sales receivable, payments receivable and invoicing. The field study was targeted at about 100 primary users of the system who participated in the training provided by the vendor in October and November 1996, of whom 78 agreed to participate. Data collection, based on

convenience of access, and language. Similarly, information quality adopted six scales from Bailey and Pearson (1983): completeness, precision, accuracy, reliability, currency, and format of output. Each scale was measured using four items, as proposed by the source authors.

User satisfaction was measured using the six items of general reactions suggested by Chin et al. (1988) and actual use in terms of daily use time and frequency of use. Individual impact was confined to impact on the user's work performance and was measured with an adaptation of the 6-item instrument for perceived usefulness suggested by Davis (1989).

Data Analysis

Gefen et al. (2000) provide a recent comparison of traditional regression analysis and two classes of structural equation modelling, covariance-based and partial-least-square-based, which are potentially suitable for testing the model of Figure 2. Their guidelines suggest that covariance-based structural equation modelling approaches such as LISREL, EQS and AMOS do not suit the case at hand, for two major reasons. Firstly, they are suitable for confirmatory rather than exploratory analyses and require a strong underlying theory. Actually, the covariance-based structural equation modelling is oriented towards causal modelling and theory testing rather than prediction (Chin & Newsted, 1999), which is the major purpose of the present paper. As pointed out by Seddon (1997), the underlying theory of the DeLone-McLean model is not very strong. Secondly, the number of cases in the present material, 68, is quite small for covariance-based structural equation modelling methods, which also impose tighter statistical assumptions than regression analysis and partial-least-square-based methods.

The hypothesized relationships among the study variables depicted in Figure 2 were tested by the Partial Least Squares (PLS) method, which is particularly well suited for predictive applications and theory building (Chin & Newsted, 1999). It does not imply parametric assumptions of multivariate normal distribution, and the sample size can be small, the minimum being ten times the number of items in
questionnaires, was conducted during summer 1997 the most complex construct in the model (Chin, 1998;
after half a year of experience with the system. Gefen et al., 2000).
Measurement of the Variables PLS recognizes two components of a causal model:
the measurement model and the structural model. A
The questionnaire was based as much as possible on structural model consists of the unobservable, latent
standard measures (see Appendix A), the questions constructs and the theoretical relationships among
being translated into Finnish. System quality was
measured using six scales adopted from Bailey and 10
Observe, however, that perceived usefulness is mainly used to
Pearson (1983): flexibility of the system, integration of measure ex ante expectations of the system’s impact in terms of
the system, response/turnaround time, error recovery, speed of accomplishing tasks, job performance, productivity,
effectiveness, ease of job and usefulness in work (Davis, 1989).

14 The DATA BASE for Advances in Information Systems - Spring 2005 (Vol. 36, No. 2)
them. Testing this includes estimating the path coefficients, which indicate the strengths of the relationships between the independent and dependent variables. Furthermore, for each construct in a structural model there is a related measurement model which links the latent construct in the diagram with a set of observed items. The measurement model consists of the relationships between the observed variables (items) and the latent constructs which they measure. The characteristics of this model demonstrate the construct validity of the research instruments, i.e. the extent to which the operationalization of a construct actually measures what it purports to measure. Two important dimensions of construct validity are (a) convergent validity, including reliability, and (b) discriminant validity. Together, the structural and measurement models form a network of constructs and measures. The item weights and loadings indicate the strengths of the measures, while the estimated path coefficients indicate the strengths and signs of the theoretical relationships.

More specifically, a molar approach (Bagozzi, 1985; Chin & Gopal, 1995) was adopted for the testing of the model in Figure 2. Both system quality and information quality were considered to be second-order concepts influenced by six scales each (see Figure 3). To measure system quality, the 24 items covering flexibility, system integration, response time, recoverability, convenience, and command language were used as reflective measures (Chin, 1998). Similarly, the 24 items covering completeness, precision, accuracy, consistency, currency, and format were used to measure information quality.

Results

The model of Figure 2 was tested using PLS-Graph, version 03.00. As Hypothesis H5 in Figure 2 includes a mutual influence between use and user satisfaction that could not be tested at the same time, we tested two models: Model 1, which assumed the influence to be from user satisfaction to actual use (H5a), and Model 2, which worked from actual use to user satisfaction (H5b).

Measurement Models

To test the measurement models of Model 1 and Model 2, we examined (1) individual item loadings, (2) internal consistency (reliability of measures), (3) convergent validity, and (4) discriminant validity. All the item loadings except for five of the 24 items concerning perceived system quality and four of the 24 items concerning perceived information quality exceeded the threshold value of 0.70. The internal consistencies of the latent variables, the average variances extracted (on the diagonal) and the correlations between latent variables are listed in Table 2. The two models gave very similar results. Only a few correlations between latent variables differed by the absolute value of 0.01.11

11 In these two cases, Table 1 lists the higher correlations in absolute values.

The weights and loadings of the indicators in both models are reported in Appendix B.

The internal consistencies of all the latent constructs, examined using the formula developed by Fornell and Larcker (1981), clearly exceeded the cut-off value of 0.70 proposed by these authors. Convergent validity is considered adequate when the average variance extracted is 0.50 or more (Fornell & Larcker, 1981). As Table 2 shows, the minimum average variance extracted was 0.54. For satisfactory discriminant validity, the average variance shared between a construct and its measures should be greater than the variance shared by the construct and any other constructs in the model (Chin, 1998). As Table 2 shows, the square roots of the average variance extracted (the lower values on the diagonal) exceed the corresponding off-diagonal correlation values in the corresponding rows and columns in all cases except for perceived system quality and perceived information quality, where the violations concern their antecedents in the molar model of Figure 3. This indicates adequate discriminant validity for the constructs.

As shown in Table 2, the correlations between flexibility, system integration, response time, recoverability, convenience and command language vary between 0.35 and 0.76. Analysis of collinearity between them showed that the lowest tolerance value was 0.22, which clearly exceeds the cut-off value of 0.10 recommended by Hair et al. (1992). Similarly, the correlations between the six information quality scales were between 0.51 and 0.93, and the lowest tolerance value was 0.12, which is very close to the above cut-off value.

Structural Models

The tests performed on the structural models gave the results depicted in Figure 3. The upper path coefficients and R2 values give the results for Model 1 and the lower ones for Model 2. The path coefficients from the antecedents of perceived system quality and perceived information quality were the same in both models and are therefore not repeated. The bootstrap resampling technique (500 resamples) was used to determine the significances of the paths within the structural model. As shown in Figure 3, perceived system quality is a very significant predictor of user satisfaction in both models (ß = 0.55 and 0.48,
[Figure 3: path diagram of the structural model results. Flexibility, integration, response time, recoverability, convenience and language load on perceived system quality; completeness, precision, accuracy, consistency, currency and format load on perceived information quality. Path coefficients and R2 values are shown for the paths among perceived system quality, perceived information quality, actual use, user satisfaction and individual impact, with Model 1 values above Model 2 values.]

Figure 3. Results of the Structural Analyses

respectively; p ≤ 0.001 in both cases), while perceived information quality is also a significant predictor (p ≤ 0.05) of user satisfaction in both models. Perceived system quality is also a significant direct predictor of system use, both in Model 1 (ß = 0.30, p ≤ 0.05) and in Model 2 (ß = 0.45, p ≤ 0.01). User satisfaction is an almost significant predictor of actual use in Model 1 (ß = 0.28, p ≤ 0.10), and conversely actual use is an almost significant predictor of user satisfaction in Model 2 (ß = 0.14, p ≤ 0.10). Most notably, user satisfaction is a strong predictor of individual impact in both models (ß = 0.52, p ≤ 0.001 in both), whereas actual use is insignificant as a predictor of individual impact. Overall, both models explain a considerable portion of the variance in both user satisfaction (R2 = 0.57 in Model 1 and R2 = 0.58 in Model 2) and individual impact (R2 = 0.35 in both models).

Discussion

The findings regarding the seven hypotheses derived from the DeLone-McLean model (1992) proposed in section 3.2 are summarized in Table 3. Overall, the results support the reasonableness of the DeLone-McLean model as a predictive model.

The paths from system quality and information quality to user satisfaction, and from user satisfaction to individual impact in particular, emerged in the manner hypothesized by the DeLone-McLean model. On the other hand, the paths from system quality and information quality to actual use and from actual use to individual impact were not significant. This negative finding may be explained by the mandatory nature of the system, which may deflate the significance of actual use in the model. Our test of the DeLone-McLean model was therefore limited, and there is a need to test it in the case of more voluntary systems.

As explained in section 3.2, we do not claim that the DeLone and McLean model provides a complete picture if interpreted as a causal-explanatory model. It is unclear, for example, whether our finding that user satisfaction predicts individual impact implies that user satisfaction in some sense explains individual impact or vice versa. Even though Seddon (1997) claims that "in the long run it is people's observations of the outcomes of use, the impacts, that determine their satisfaction with the system, not vice versa" (p. 243), we see this more as a research issue that is quite complicated by nature. Our results may be interpreted as indicating that users perceive user satisfaction in terms of "IS satisfactoriness" or as a weighted combination of "IS satisfactoriness" and "IS satisfaction" rather than "IS satisfaction" alone (Goodhue, 1986). Assuming that user satisfaction is the user's best estimate of the match between the requirements imposed on the system by his or her work and the system's capabilities, a positive relationship between user satisfaction and individual impact is quite understandable. If the user's interpretation of those requirements and estimate of the match between requirements and the system's capabilities are correct, increased user satisfaction should be positively associated with task performance.

Our test of the DeLone-McLean model was incomplete in the sense that we did not include organizational impact. In addition to the data analysed above, we had access to managers' evaluations of the organizational impact of the system (n = 38) in 15 user departments.12 Organizational impact was evaluated in terms of the impact of the system on the unit's output, the quality of this output, and the unit's innovativeness, reputation for excellence and morale (Van de Ven & Ferry, 1980). Factor analysis of the five items based on the 38 responses gave only one factor.

12 The 38 managers' evaluations were largely independent of the 78 user responses. There were only four common respondents in the two samples.
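The bootstrap significance testing used for the path coefficients in Figure 3 can be illustrated with a minimal sketch: resample the cases with replacement, re-estimate the coefficient on each resample, and compare the original estimate with the variability of the resampled estimates. The data below are synthetic, and a single bivariate path stands in for the full PLS model that PLS-Graph bootstraps; the sketch only shows the idea, not the study's actual computation.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in data: one predictor, one outcome, n = 68 as in the study.
n = 68
x = rng.normal(size=n)
y = 0.7 * x + rng.normal(scale=0.5, size=n)

def std_beta(x, y):
    """Standardized regression coefficient (equals the Pearson correlation here)."""
    return np.corrcoef(x, y)[0, 1]

beta = std_beta(x, y)

# Bootstrap: 500 resamples with replacement, as in the reported analysis.
boot = []
for _ in range(500):
    idx = rng.integers(0, n, size=n)
    boot.append(std_beta(x[idx], y[idx]))

# Pseudo t-value: original estimate divided by the bootstrap standard error.
t_stat = beta / np.std(boot, ddof=1)
print(f"beta = {beta:.2f}, bootstrap t = {t_stat:.1f}")
```

A path is then judged significant when this t-value exceeds the conventional critical values, which is how the significance stars in Figure 3 are obtained.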

Inter-correlations of latent constructs and average variances extracted (a)

Construct                       Items  IC    AVE   Correlations (diagonal = square root of AVE)
Perceived syst. quality (PSQ)   24     0.97  0.54  0.74
Perceived inf. quality (PIQ)    24     0.98  0.62  0.71 0.79
User inf. satisfaction (UIS)     6     0.91  0.62  0.73 0.65 0.80
Use                              2     0.92  0.85  0.37 0.21 0.38 0.92
Individual impact                6     0.95  0.78  0.56 0.45 0.58 0.34 0.88
Flexibility                      4     0.96  0.85  0.78 0.57 0.57 0.29 0.55 0.92
Integration                      4     0.95  0.83  0.79 0.52 0.53 0.38 0.38 0.59 0.92
Response time                    4     0.93  0.77  0.88 0.65 0.61 0.25 0.50 0.66 0.58 0.88
Recoverability                   4     0.95  0.82  0.90 0.59 0.65 0.30 0.52 0.61 0.70 0.76 0.90
Convenience                      4     0.94  0.79  0.86 0.60 0.76 0.42 0.56 0.64 0.52 0.71 0.72 0.89
Command language                 4     0.91  0.73  0.76 0.62 0.51 0.19 0.28 0.35 0.52 0.71 0.65 0.65 0.85
Completeness                     4     0.97  0.89  0.48 0.83 0.47 0.15 0.29 0.47 0.33 0.47 0.37 0.37 0.40 0.94
Precision                        4     0.98  0.91  0.67 0.86 0.59 0.11 0.38 0.63 0.51 0.58 0.56 0.51 0.56 0.72 0.95
Accuracy                         4     0.97  0.89  0.64 0.89 0.56 0.11 0.38 0.48 0.45 0.59 0.56 0.55 0.59 0.62 0.71 0.94
Consistency                      4     0.97  0.90  0.67 0.90 0.59 0.18 0.44 0.48 0.52 0.63 0.58 0.57 0.58 0.63 0.69 0.93 0.95
Currency                         4     0.98  0.93  0.51 0.85 0.58 0.31 0.41 0.36 0.35 0.47 0.42 0.44 0.44 0.72 0.59 0.71 0.73 0.96
Format                           4     0.95  0.83  0.68 0.71 0.54 0.21 0.41 0.55 0.54 0.58 0.57 0.56 0.56 0.51 0.65 0.51 0.51 0.51 0.91

(a) IC = internal consistency; AVE = average variance extracted. The correlation columns follow the row order of the constructs; the last (diagonal) entry in each row is the square root of that construct's AVE.

Table 2. Internal Consistencies, Average Variances Extracted and Inter-Correlations between Constructs, Part 2 of 2.
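The measurement-model statistics behind Table 2 can be reproduced from standardized item loadings and inter-scale correlations. The sketch below shows the Fornell-Larcker composite reliability and AVE formulas, and the collinearity tolerance used for the six system quality scales. The loadings are illustrative (not the study's actual item loadings), while the correlation matrix uses the rounded system quality inter-scale correlations from Table 2, so the resulting minimum tolerance only approximates the reported value of 0.22.

```python
import numpy as np

# --- Fornell-Larcker internal consistency and convergent validity ---
# Illustrative standardized loadings for one four-item scale (not the study's values).
loadings = np.array([0.80, 0.85, 0.75, 0.90])

s = loadings.sum()
cr = s**2 / (s**2 + (1 - loadings**2).sum())  # composite reliability, cut-off 0.70
ave = (loadings**2).mean()                    # average variance extracted, cut-off 0.50
print(f"CR = {cr:.2f}, AVE = {ave:.2f}, sqrt(AVE) = {np.sqrt(ave):.2f}")
# Discriminant validity (Fornell-Larcker criterion): sqrt(AVE) must exceed the
# construct's correlations with the other constructs, e.g. 0.80 vs 0.73 and 0.65
# for user satisfaction in Table 2.

# --- Collinearity tolerance for the six system quality scales ---
# Rounded inter-scale correlations from Table 2: flexibility, integration,
# response time, recoverability, convenience, command language.
R = np.array([
    [1.00, 0.59, 0.66, 0.61, 0.64, 0.35],
    [0.59, 1.00, 0.58, 0.70, 0.52, 0.52],
    [0.66, 0.58, 1.00, 0.76, 0.71, 0.71],
    [0.61, 0.70, 0.76, 1.00, 0.72, 0.65],
    [0.64, 0.52, 0.71, 0.72, 1.00, 0.65],
    [0.35, 0.52, 0.71, 0.65, 0.65, 1.00],
])

# tolerance_j = 1 - R^2 from regressing scale j on the other scales, which equals
# the reciprocal of the j-th diagonal element of the inverse correlation matrix.
tolerance = 1.0 / np.diag(np.linalg.inv(R))
print("lowest tolerance:", round(tolerance.min(), 2))
```

The resulting tolerances can be compared with the 0.10 cut-off of Hair et al. (1992) applied in the text.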

Hypothesis                                                      Model 1                Model 2
H1: Perceived system quality predicts user satisfaction         Supported (p ≤ 0.001)  Supported (p ≤ 0.001)
H2: Perceived information quality predicts user satisfaction    Supported (p ≤ 0.05)   Supported (p ≤ 0.05)
H3: Perceived system quality predicts actual use                Supported (p ≤ 0.05)   Supported (p ≤ 0.01)
H4: Perceived information quality predicts actual use           Not supported          Not supported
H5a: User satisfaction predicts actual use                      Supported (p ≤ 0.10)   N/A
H5b: Actual use predicts user satisfaction                      N/A                    Supported (p ≤ 0.10)
H6: User satisfaction predicts individual impact                Supported (p ≤ 0.001)  Supported (p ≤ 0.001)
H7: Actual use predicts individual impact                       Not supported          Not supported

Table 3. Summary of the Support Obtained for the Hypotheses
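The department-level analysis reported in the Discussion regresses the overall organizational impact measure on user satisfaction, use and individual impact across the 15 departments. A sketch of that computation with synthetic data (the study's department-level scores are not reproduced here, so the figures are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic department-level data: 15 departments, three predictors
# (user satisfaction, use, individual impact) and organizational impact.
n = 15
X = rng.normal(size=(n, 3))
y = 0.4 * X[:, 0] - 0.4 * X[:, 1] + 0.4 * X[:, 2] + rng.normal(scale=1.0, size=n)

# Ordinary least squares with an intercept; R^2 = 1 - SSE / SST.
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
residuals = y - A @ coef
r_squared = 1 - residuals @ residuals / ((y - y.mean()) @ (y - y.mean()))
print(f"R^2 = {r_squared:.3f}")
```

With only 15 cases, a sizeable R2 can arise easily by chance, which is one reason the reported 43.5% is only marginally significant.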

At the aggregated level of the 15 departments, user satisfaction, use and individual impact had correlations of 0.40, -0.36 and 0.41 with the overall measure of organizational impact (none of them significant at the level p ≤ 0.10). Even though not significant in this small population, the correlations are still notable, as user satisfaction, use and individual impact together explained 43.5% of the variance in organizational impact (p ≤ 0.10). Of the individual regression coefficients, the negative coefficient of actual use with organizational impact (ß = -0.38) was almost significant (p ≤ 0.10), suggesting that use may be a problematic predictor of organizational impact. One reason for this unexpected finding may be that one should not analyse the relationship between system use and organizational impact cross-sectionally across departments. It may be that if one could analyse variations in system use within departments, use would have a positive association with organizational impact. This would require more longitudinal research.

From a more practical viewpoint, the power of perceived system quality and perceived information quality as predictors of user satisfaction suggests that they provide an effective diagnostic framework in which to analyse system features that may "cause" user satisfaction and dissatisfaction. The close association between user satisfaction and individual impact also suggests that user satisfaction may serve as a valid surrogate for individual impact.

Conclusions

The above paper reports an empirical test of the IS success model of DeLone and McLean (1992) as a predictive model. As pointed out in the Introduction, there is a dearth of empirical tests of the model. Overall, the present findings supported the model, but, as implied in the discussion in section 3.2, there is much ambiguity related to the DeLone-McLean model as a causal-explanatory model. Much of this culminates in the ambiguity of the concept of user (information) satisfaction (e.g. Goodhue, 1986; Iivari, 1987; Melone, 1990; Iivari, 1997). It seems that little progress has been made on this front since the 1980s. Strengthening the underlying theory of the DeLone-McLean model would require some attention to be paid to this component. On the positive side, the findings suggest that user satisfaction may be a reasonably good surrogate for individual impact as long as it is confined to impact on work performance. It is an open question, however, whether it is a valid surrogate for organizational effectiveness, as suggested by Ives et al. (1983).

The present paper has its limitations. Since the results are based on a field study of one mandatory information system in one specific organizational context, the first question is whether the findings are specific to that system and its organizational context. A second question is whether they may be explained by the nature of the system (a mandatory operational-level system based on an application package). As pointed out above, we suspect that the relatively insignificant role of actual use in the whole framework may be explained by this mandatory nature of the system. This may also explain the fact that perceived system quality emerged as more significant than perceived information quality. Empirical testing of the DeLone-McLean model should therefore be extended to cover a wider variety of systems.

References

Adams, D.A., Nelson, R.R. and Todd, P.A. (1992). "Perceived Usefulness, Ease of Use, and Usage of Information Technology: A Replication," MIS Quarterly, Vol. 16, No. 2, pp. 227-247.
Amoroso, D.L. and Cheney, P.H. (1991). "Testing a Causal Model of End-User Application Effectiveness," Journal of Management Information Systems, Vol. 8, No. 1, pp. 63-89.
Bagozzi, R.P. (1985). "Expectancy-Value Attitude Models: An Analysis of Critical Theoretical Issues," International Journal of Research in Marketing, Vol. 2, pp. 43-60.
Bailey, J.E. and Pearson, S.W. (1983). "Developing a Tool for Measuring and Analyzing Computer User Satisfaction," Management Science, Vol. 29, No. 5, pp. 530-545.
Barki, H. and Hartwick, J. (1989). "Rethinking the Concept of User Involvement," MIS Quarterly, Vol. 13, No. 1, pp. 53-63.
Barki, H. and Huff, S.L. (1985). "Change, Attitude to Change, and Decision Support System Success," Information & Management, Vol. 9, No. 5, pp. 261-268.
Baroudi, J.J. and Orlikowski, W.J. (1988). "A Short-Form Measure of User Information Satisfaction: A Psychometric Evaluation and Notes on Use," Journal of Management Information Systems, Vol. 4, No. 4, pp. 44-59.
Baroudi, J.J., Olson, M.H. and Ives, B. (1986). "An Empirical Study of the Impact of User Involvement on System Usage and Information Satisfaction," Communications of the ACM, Vol. 29, No. 3, pp. 232-238.
Cats-Baril, W.L. and Huber, G.P. (1987). "Decision Support Systems for Ill-Structured Problems: An Empirical Study," Decision Sciences, Vol. 18, No. 3, pp. 350-372.
Chau, P.Y.K. (1996). "An Empirical Assessment of a Modified Technology Acceptance Model," Journal of Management Information Systems, Vol. 13, No. 2, pp. 185-204.
Chin, J.P., Diehl, V.A. and Norman, K.L. (1988). "Development of an Instrument Measuring User Satisfaction of the Human-Computer Interface," in Soloway, E., Frye, D. and Sheppard, S.B. (eds.), CHI'88 Conference Proceedings: Human Factors in Computing Systems, New York, NY: Association for Computing Machinery, pp. 213-218.
Chin, W.W. (1998). "The Partial Least Squares Approach to Structural Equation Modelling," in Marcoulides, G.A. (ed.), Modern Methods for Business Research, Mahwah, NJ: Lawrence Erlbaum Associates, pp. 295-336.
Chin, W.W. and Gopal, A. (1995). "Adoption Intention in GSS: Relative Importance of Beliefs," The DATA BASE for Advances in Information Systems, Vol. 26, No. 2&3, pp. 42-63.
Chin, W.W. and Lee, M.K.O. (2000). "On the Formation of End-User Computing Satisfaction: A Proposed Model and Measurement Instrument," in Orlikowski, W., Ang, S., Weill, P., Krcmar, H.C. and DeGross, J.I. (eds.), Proceedings of the Twenty-First International Conference on Information Systems, Brisbane, Australia, pp. 553-563.
Chin, W.W. and Marcolin, B.L. (2001). "The Future of Diffusion Research," The DATA BASE for Advances in Information Systems, Vol. 32, No. 3, pp. 8-12.
Chin, W.W. and Newsted, P.R. (1999). "Structural Equation Modeling Analysis with Small Samples Using Partial Least Squares," in Hoyle, R. (ed.), Statistical Strategies for Small Sample Research, Thousand Oaks, CA: Sage Publications, pp. 307-341.
Davis, F.D. (1989). "Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology," MIS Quarterly, Vol. 13, No. 3, pp. 319-339.
Davis, F.D., Bagozzi, R.P. and Warshaw, P.R. (1989). "User Acceptance of Computer Technology: A Comparison of Two Theoretical Models," Management Science, Vol. 35, No. 8, pp. 982-1003.
Davis, F.D., Bagozzi, R.P. and Warshaw, P.R. (1992). "Extrinsic and Intrinsic Motivation to Use Computers in the Workplace," Journal of Applied Social Psychology, Vol. 22, pp. 1111-1132.
DeLone, W.H. and McLean, E.R. (1992). "Information Systems Success: The Quest for the Dependent Variable," Information Systems Research, Vol. 3, No. 1, pp. 60-95.
DeLone, W.H. and McLean, E.R. (2003). "The DeLone and McLean Model of Information Systems Success: A Ten-Year Update," Journal of Management Information Systems, Vol. 19, No. 4, pp. 9-30.
Doll, W.J. and Torkzadeh, G. (1988). "The Measurement of End-User Computing Satisfaction," MIS Quarterly, Vol. 12, No. 2, pp. 259-274.
Etezadi-Amoli, J. and Farhoomand, A.F. (1996). "A Structural Model of End-User Computing Satisfaction and User Performance," Information & Management, Vol. 30, No. 2, pp. 65-73.
Fishbein, M. and Ajzen, I. (1975). Belief, Attitude, Intention and Behavior: An Introduction to Theory and Research, Reading, MA: Addison-Wesley.
Fornell, C. and Larcker, D.F. (1981). "Evaluating Structural Equation Models with Unobservable Variables and Measurement Error," Journal of Marketing Research, Vol. 18, pp. 39-50. (Reprinted in Fornell, C. (ed.), A Second Generation of Multivariate Analysis, Volume 2: Measurement and Evaluation, New York, NY: Praeger Publishers, pp. 289-316.)
Gatian, A.W. (1994). "Is User Satisfaction a Valid Measure of System Effectiveness?," Information & Management, Vol. 26, No. 3, pp. 119-131.
Gefen, D. and Keil, M. (1998). "The Impact of Developer Responsiveness on Perceptions of Usefulness and Ease of Use: An Extension of the Technology Acceptance Model," The DATA BASE for Advances in Information Systems, Vol. 29, No. 2, pp. 35-49.
Gefen, D. and Straub, D.W. (1997). "Gender Differences in the Perception and Use of E-Mail: An Extension to the Technology Acceptance Model," MIS Quarterly, Vol. 21, No. 4, pp. 389-400.
Gefen, D. and Straub, D.W. (2000). "The Relative Importance of Perceived Ease of Use in IS Adoption: A Study of E-Commerce Adoption," Journal of the Association for Information Systems, Vol. 1, Article 8, pp. 1-28.
Gefen, D., Straub, D.W. and Boudreau, M.-C. (2000). "Structural Equation Modelling and Regression: Guidelines for Research Practice," Communications of the Association for Information Systems, Vol. 4, Article 7, pp. 1-76.
Ginzberg, M.J. (1981). "Early Diagnosis of Implementation Failure: Promising Results and Unanswered Questions," Management Science, Vol. 27, No. 4, pp. 459-478.
Goodhue, D. (1986). "IS Attitudes: Toward Theoretical and Definition Clarity," in Maggi, L., Zmud, R. and Wetherbe, J. (eds.), Proceedings of the Seventh International Conference on Information Systems, San Diego, CA, pp. 181-194.
Gustafsson, M.R., Karlsson, T. and Bubenko, J.A. Jr. (1982). "A Declarative Approach to Conceptual Information Modeling," in Olle, T.W., Sol, H.G. and Verrijn-Stuart, A.A. (eds.), Information Systems Design Methodologies: A Comparative Review, Amsterdam: North-Holland, pp. 93-142.
Hair, J.F. Jr., Anderson, R.E., Tatham, R.L. and Black, W.C. (1992). Multivariate Data Analysis with Readings, New York, NY: Macmillan.
Igbaria, M. (1990). "End-User Computing Effectiveness: A Structural Equation Model," Omega, Vol. 18, No. 6, pp. 637-652.
Igbaria, M., Guimaraes, T. and Davis, G.B. (1995). "Testing the Determinants of Microcomputer Usage via a Structural Equation Model," Journal of Management Information Systems, Vol. 11, No. 4, pp. 87-114.
Igbaria, M. and Iivari, J. (1995). "The Effects of Self-Efficacy on Computer Usage," Omega, Vol. 23, No. 6, pp. 587-605.
Igbaria, M., Parasuraman, S. and Baroudi, J.J. (1996). "A Motivational Model of Microcomputer Usage," Journal of Management Information Systems, Vol. 13, No. 1, pp. 127-143.
Igbaria, M., Zinatelli, N., Cragg, P. and Cavaye, A.L.M. (1997). "Personal Computing Acceptance Factors in Small Firms: A Structural Equation Model," MIS Quarterly, Vol. 21, No. 3, pp. 279-305.
Igbaria, M. and Zviran, M. (1991). "End-User Effectiveness: A Cross-Cultural Examination," Omega, Vol. 19, No. 5, pp. 369-379.
Iivari, J. (1987). "User Information Satisfaction (UIS) Reconsidered: An Information System as the Antecedent of UIS," in DeGross, J.I. and Kriebel, C.H. (eds.), Proceedings of the Eighth International Conference on Information Systems, Pittsburgh, PA, pp. 57-73.
Iivari, J. (1996). "Why Are CASE Tools Not Used?," Communications of the ACM, Vol. 39, No. 10, pp. 94-103.
Iivari, J. (1997). "User Information Satisfaction: A Critical Review," Encyclopedia of Library and Information Science, Vol. 60, Supplement 23, pp. 341-364.
Iivari, J. and Ervasti, I. (1994). "User Information Satisfaction: IS Implementability and Effectiveness," Information & Management, Vol. 27, No. 4, pp. 205-220.
Iivari, J. and Koskela, E. (1987). "The PIOCO Model for Information System Design," MIS Quarterly, Vol. 11, No. 3, pp. 401-419.
Ives, B., Olson, M.H. and Baroudi, J.J. (1983). "The Measurement of User Information Satisfaction," Communications of the ACM, Vol. 26, No. 10, pp. 785-793.
Jenkins, A.M. (1985). "Research Methodologies and MIS Research," in Mumford, E. et al. (eds.), Research Methods in Information Systems, Amsterdam: North-Holland, pp. 103-117.
Jenkins, A.M. and Ricketts, J.A. (1979). Development of an Instrument to Measure User Information Satisfaction with Management Information Systems, Working Paper, Bloomington, IN: Indiana University.
Karahanna, E. and Straub, D.W. (1999). "The Psychological Origins of Perceived Usefulness and Ease-of-Use," Information & Management, Vol. 35, No. 4, pp. 237-250.
Karahanna, E., Straub, D.W. and Chervany, N.L. (1999). "Information Technology Adoption Across Time: A Cross-Sectional Comparison of Pre-Adoption and Post-Adoption Beliefs," MIS Quarterly, Vol. 23, No. 2, pp. 183-213.
Lee, Y., Kozar, K.A. and Larsen, K.R.T. (2003). "The Technology Acceptance Model: Past, Present, and Future," Communications of the Association for Information Systems, Vol. 12, Article 50, pp. 752-780.
Leidner, D.E. (1998). "Mexican Executives' Use of Information Systems: An Empirical Investigation of EIS Use and Impact," Journal of Global Information Technology Management, Vol. 1, No. 2, pp. 19-36.
Ma, Q. and Liu, L. (2004). "The Technology Acceptance Model: A Meta-Analysis of Empirical Findings," Journal of Organizational and End User Computing, Vol. 16, No. 1, pp. 59-72.
Mathieson, K. (1991). "Predicting User Intentions: Comparing the Technology Acceptance Model with the Theory of Planned Behavior," Information Systems Research, Vol. 2, No. 3, pp. 173-191.
McGuire, W.J. (1969). "The Nature of Attitudes and Attitude Change," in Lindzey, G. and Aronson, E. (eds.), The Handbook of Social Psychology, Volume 3: The Individual in a Social Context, 2nd ed., Reading, MA: Addison-Wesley, pp. 136-314.
Melone, N.P. (1990). "A Theoretical Assessment of the User-Satisfaction Construct in Information Systems Research," Management Science, Vol. 36, No. 1, pp. 76-91.
Mohr, L.B. (1982). Explaining Organizational Behavior: The Limits and Possibilities of Theory and Research, San Francisco, CA: Jossey-Bass Publishers.
Nelson, R.R. and Cheney, P.H. (1987). "Training End Users: An Exploratory Study," MIS Quarterly, Vol. 11, No. 4, pp. 547-559.
Rai, A., Lang, S.S. and Welker, R.B. (2002). "Assessing the Validity of IS Success Models: An Empirical Test and Theoretical Analysis," Information Systems Research, Vol. 13, No. 1, pp. 50-69.
Sanders, G.L. (1984). "MIS/DSS Success Measure," Systems, Objectives, Solutions, Vol. 4, pp. 29-34.
Seddon, P.B. (1997). "A Respecification and Extension of the DeLone and McLean Model of IS Success," Information Systems Research, Vol. 8, No. 3, pp. 240-253.
Seddon, P.B. and Kiew, M.-Y. (1994). "A Partial Test and Development of the DeLone and McLean Model of IS Success," in DeGross, J.I., Huff, S.L. and Munro, M.C. (eds.), Proceedings of the Fifteenth International Conference on Information Systems, Vancouver, Canada, pp. 99-110.
Seddon, P.B., Staples, S., Patnayakuni, R. and Bowtell, M. (1999). "Dimensions of Information Systems Success," Communications of the Association for Information Systems, Vol. 2, Article 20, pp. 1-39.
Snitkin, S.R. and King, W.R. (1986). "Determinants of the Effectiveness of Personal Decision Support Systems," Information & Management, Vol. 10, No. 2, pp. 83-89.
Srinivasan, A. (1985). "Alternative Measures of System Effectiveness: Associations and Implications," MIS Quarterly, Vol. 9, No. 3, pp. 243-253.
Straub, D.W. (1989). "Validating Instruments in MIS Research," MIS Quarterly, Vol. 13, No. 2, pp. 147-169.
Straub, D.W., Keil, M. and Brenner, W. (1997). "Testing the Technology Acceptance Model Across Cultures: A Three Country Study," Information & Management, Vol. 33, No. 1, pp. 1-11.
Straub, D., Limayem, M. and Karahanna-Evaristo, E. (1995). "Measuring System Usage: Implications for Theory Testing," Management Science, Vol. 41, No. 8, pp. 1328-1342.
Szajna, B. (1996). "Empirical Evaluation of the Revised Technology Acceptance Model," Management Science, Vol. 42, No. 1, pp. 85-92.
Taylor, S. and Todd, P.A. (1995). "Understanding Information Technology Usage: A Test of Competing Models," Information Systems Research, Vol. 6, No. 2, pp. 144-176.
Torkzadeh, G. and Dwyer, D.J. (1994). "A Path Analytical Study of Determinants of Information System Usage," Omega, Vol. 22, No. 4, pp. 339-348.
Van de Ven, A.H. and Ferry, D.L. (1980). Measuring and Assessing Organizations, Chichester: John Wiley & Sons.
Venkatesh, V. and Davis, F.D. (2000). "A Theoretical Extension of the Technology Acceptance Model: Four Longitudinal Field Studies," Management Science, Vol. 46, No. 2, pp. 186-204.
The DATA BASE for Advances in Information Systems - Spring 2005 (Vol. 36, No. 2) 21
About the Author

Juhani Iivari is a Professor in Information Systems at the University of Oulu, Finland, and the Scientific Head of the INFWEST.IT Postgraduate Education Program of five Finnish universities in the area of information systems. He received his M.Sc. and Ph.D. degrees from the University of Oulu. Iivari serves on the editorial boards of seven journals. His research has broadly focused on theoretical foundations, development methodologies and approaches, organizational analysis, implementation and acceptance, and the quality of information systems. Iivari has published in journals such as AJIS, BIT, Communications of the ACM, Data Base, European JIS, Information & Management, Information and Software Technology, Information Systems, ISJ, ISR, JMIS, JOCEC, MISQ, Omega, SJIS and others. Email: juhani.iivari@oulu.fi
Appendix A: Measures

System quality

Please assess the flexibility of the system to change in response to new demands
Rigid __ __ __ __ __ __ __ Flexible
Limited __ __ __ __ __ __ __ Versatile
Insufficient __ __ __ __ __ __ __ Sufficient
Low __ __ __ __ __ __ __ High

Please assess the ability of the system to communicate with other information systems
Incomplete __ __ __ __ __ __ __ Complete
Insufficient __ __ __ __ __ __ __ Sufficient
Unsuccessful __ __ __ __ __ __ __ Successful
Bad __ __ __ __ __ __ __ Good

Please assess the response and turnaround time of the system
Slow __ __ __ __ __ __ __ Fast
Bad __ __ __ __ __ __ __ Good
Unreasonable __ __ __ __ __ __ __ Reasonable
Inconsistent __ __ __ __ __ __ __ Consistent

Please assess the ability of the system to recover from errors
Slow __ __ __ __ __ __ __ Fast
Inferior __ __ __ __ __ __ __ Superior
Incomplete __ __ __ __ __ __ __ Complete
Complex __ __ __ __ __ __ __ Simple

Please assess the convenience of use of the system
Inconvenient __ __ __ __ __ __ __ Convenient
Bad __ __ __ __ __ __ __ Good
Difficult __ __ __ __ __ __ __ Easy
Inefficient __ __ __ __ __ __ __ Efficient

Please assess the commands used to interact with the system
Complex __ __ __ __ __ __ __ Simple
Weak __ __ __ __ __ __ __ Powerful
Difficult __ __ __ __ __ __ __ Easy
Hard-to-use __ __ __ __ __ __ __ Easy-to-use

Information quality

Please assess the volume of output information (reports and queries)
Concise __ __ __ __ __ __ __ Excessive
Insufficient __ __ __ __ __ __ __ Sufficient
Unnecessary __ __ __ __ __ __ __ Necessary
Unreasonable __ __ __ __ __ __ __ Reasonable

Please assess the completeness of the output information
Incomplete __ __ __ __ __ __ __ Complete
Inconsistent __ __ __ __ __ __ __ Consistent
Insufficient __ __ __ __ __ __ __ Sufficient
Inadequate __ __ __ __ __ __ __ Adequate

Please assess the precision of the output information
Inconsistent __ __ __ __ __ __ __ Consistent
Insufficient __ __ __ __ __ __ __ Sufficient
Low __ __ __ __ __ __ __ High
Uncertain __ __ __ __ __ __ __ Certain

Please assess the accuracy of the output information
Inaccurate __ __ __ __ __ __ __ Accurate
Low __ __ __ __ __ __ __ High
Inconsistent __ __ __ __ __ __ __ Consistent
Insufficient __ __ __ __ __ __ __ Sufficient

Please assess the consistency of the output information
Inconsistent __ __ __ __ __ __ __ Consistent
Low __ __ __ __ __ __ __ High
Inferior __ __ __ __ __ __ __ Superior
Insufficient __ __ __ __ __ __ __ Sufficient
Please assess the currency of the output information
Bad __ __ __ __ __ __ __ Good
Untimely __ __ __ __ __ __ __ Timely
Inadequate __ __ __ __ __ __ __ Adequate
Unreasonable __ __ __ __ __ __ __ Reasonable

Please assess the format of the output information
Bad __ __ __ __ __ __ __ Good
Complex __ __ __ __ __ __ __ Simple
Unreadable __ __ __ __ __ __ __ Readable
Useless __ __ __ __ __ __ __ Useful

User satisfaction

Please assess the system
Terrible __ __ __ __ __ __ __ Wonderful
Difficult __ __ __ __ __ __ __ Easy
Frustrating __ __ __ __ __ __ __ Satisfying
Inadequate __ __ __ __ __ __ __ Adequate
Dull __ __ __ __ __ __ __ Stimulating
Rigid __ __ __ __ __ __ __ Flexible

Actual use

Daily use: How much time do you spend with the system during an ordinary day when you use computers?
Scarcely at all 1
Less than 1/2 hour 2
1/2-1 hour 3
1-2 hours 4
2-3 hours 5
More than 3 hours 6

Frequency of use: How often on average do you use the system?
Less than once a month 1
Once a month 2
A few times a month 3
A few times a week 4
Once a day 5
Several times a day 6

Individual impact

Using the system in my job enables me to accomplish tasks more quickly.
Fully disagree __ __ __ __ __ __ __ Fully agree

Using the system improves my job performance.
Fully disagree __ __ __ __ __ __ __ Fully agree

Using the system in my job increases my productivity.
Fully disagree __ __ __ __ __ __ __ Fully agree

Using the system enhances my effectiveness in my job.
Fully disagree __ __ __ __ __ __ __ Fully agree

Using the system makes it easier to do my job.
Fully disagree __ __ __ __ __ __ __ Fully agree

I find the system useful in my job.
Fully disagree __ __ __ __ __ __ __ Fully agree
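All of the items above are seven-point scales (semantic differentials for the quality and satisfaction constructs, disagree-agree scales for individual impact). The paper itself estimates construct scores with PLS (see Appendix B), but as a minimal illustrative sketch of how a construct could be summarized from raw responses — the helper name and the example responses are hypothetical, not from the paper — each item can be coded 1-7 and a construct scored as the mean of its items:

```python
def construct_mean(item_scores):
    """Average 7-point item responses (1 = negative pole, 7 = positive pole)
    into a single construct score. Illustrative only; the paper uses PLS."""
    if not item_scores:
        raise ValueError("no items supplied")
    if any(not 1 <= s <= 7 for s in item_scores):
        raise ValueError("items must be coded on a 1-7 scale")
    return sum(item_scores) / len(item_scores)

# Hypothetical responses to the six user-satisfaction scales
# (Terrible-Wonderful, Difficult-Easy, Frustrating-Satisfying,
#  Inadequate-Adequate, Dull-Stimulating, Rigid-Flexible):
satisfaction = construct_mean([5, 6, 4, 5, 6, 5])  # -> 31/6, about 5.17
```

Simple averaging weights every item equally, whereas the PLS weights in Appendix B let items contribute unequally to the construct score.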
Appendix B: Weights and loadings on the indicators
Indicator          Model 1            Model 2
                   Weight   Loading   Weight   Loading
Flexibility
- item 1 0.2651 0.8913 0.2651 0.8913
- item 2 0.2468 0.9108 0.2468 0.9108
- item 3 0.2839 0.9473 0.2839 0.9473
- item 4 0.3042 0.9276 0.3042 0.9276
System integration
- item 1 0.2666 0.9208 0.2666 0.9208
- item 2 0.2730 0.8956 0.2730 0.8956
- item 3 0.2778 0.8851 0.2778 0.8851
- item 4 0.2924 0.9356 0.2924 0.9356
Response time
- item 1 0.2605 0.8308 0.2605 0.8308
- item 2 0.2716 0.8588 0.2716 0.8588
- item 3 0.3134 0.8887 0.3134 0.8887
- item 4 0.3118 0.9194 0.3118 0.9194
Recoverability
- item 1 0.2761 0.8757 0.2761 0.8757
- item 2 0.2800 0.9366 0.2800 0.9366
- item 3 0.2809 0.9236 0.2809 0.9236
- item 4 0.3858 0.8815 0.3858 0.8815
Convenience
- item 1 0.2992 0.9352 0.2992 0.9352
- item 2 0.3173 0.9489 0.3173 0.9489
- item 3 0.2782 0.9012 0.2782 0.9012
- item 4 0.2306 0.7590 0.2306 0.7590
Command language
- item 1 0.1303 0.6100 0.1303 0.6100
- item 2 0.3190 0.9027 0.3190 0.9027
- item 3 0.3612 0.9509 0.3612 0.9509
- item 4 0.3478 0.9123 0.3478 0.9123
Completeness
- item 1 0.2556 0.9408 0.2556 0.9408
- item 2 0.2652 0.9310 0.2652 0.9310
- item 3 0.2749 0.9589 0.2749 0.9589
- item 4 0.2671 0.9436 0.2671 0.9436
Precision
- item 1 0.2465 0.9385 0.2465 0.9385
- item 2 0.2687 0.9623 0.2687 0.9623
- item 3 0.2689 0.9596 0.2689 0.9596
- item 4 0.2716 0.9522 0.2716 0.9522
Accuracy
- item 1 0.2662 0.9596 0.2662 0.9596
- item 2 0.2595 0.9533 0.2595 0.9533
- item 3 0.2729 0.9300 0.2729 0.9300
- item 4 0.2710 0.9231 0.2710 0.9231
Consistency
- item 1 0.2601 0.9277 0.2601 0.9277
- item 2 0.2621 0.9630 0.2621 0.9630
- item 3 0.2620 0.9505 0.2620 0.9505
- item 4 0.2710 0.9585 0.2710 0.9585
Currency
- item 1 0.2606 0.9590 0.2606 0.9590
- item 2 0.2567 0.9675 0.2567 0.9675
- item 3 0.2549 0.9567 0.2549 0.9567
- item 4 0.2692 0.9644 0.2692 0.9644
Format
- item 1 0.2726 0.9236 0.2726 0.9236
- item 2 0.2838 0.9361 0.2838 0.9361
- item 3 0.3020 0.9648 0.3020 0.9648
- item 4 0.2510 0.8004 0.2510 0.8004
Perceived system quality
- item 1 0.0586 0.6701 0.0587 0.6702
- item 2 0.0487 0.6238 0.0487 0.6239
- item 3 0.0551 0.7177 0.0552 0.7177
- item 4 0.0608 0.7688 0.0608 0.7689
- item 5 0.0530 0.6915 0.0530 0.6916
- item 6 0.0564 0.7082 0.0565 0.7083
- item 7 0.0586 0.7206 0.0586 0.7207
- item 8 0.0619 0.7586 0.0619 0.7586
- item 9 0.0485 0.6846 0.0485 0.6846
- item 10 0.0536 0.7138 0.0537 0.7138
- item 11 0.0660 0.8238 0.0660 0.8237
- item 12 0.0668 0.8196 0.0668 0.8195
- item 13 0.0590 0.7930 0.0589 0.7930
- item 14 0.0635 0.8042 0.0635 0.8042
- item 15 0.0642 0.8066 0.0642 0.8067
- item 16 0.0698 0.8208 0.0697 0.8207
- item 17 0.0699 0.8041 0.0698 0.8040
- item 18 0.0720 0.8525 0.0719 0.8524
- item 19 0.0672 0.7475 0.0671 0.7475
- item 20 0.0559 0.6197 0.0560 0.6197
- item 21 0.0220 0.2841 0.0220 0.2841
- item 22 0.0481 0.6957 0.0480 0.6956
- item 23 0.0604 0.7877 0.0603 0.7876
- item 24 0.0595 0.7584 0.0595 0.7583
Perceived information quality
- item 1 0.0502 0.7418 0.0502 0.7419
- item 2 0.0489 0.7697 0.0490 0.7698
- item 3 0.0502 0.7977 0.0502 0.7978
- item 4 0.0513 0.7752 0.0514 0.7753
- item 5 0.0502 0.7548 0.0502 0.7548
- item 6 0.0563 0.8226 0.0563 0.8226
- item 7 0.0561 0.8234 0.0561 0.8233
- item 8 0.0558 0.8317 0.0558 0.8316
- item 9 0.0543 0.8303 0.0542 0.8302
- item 10 0.0527 0.8096 0.0527 0.8095
- item 11 0.0584 0.8514 0.0584 0.8513
- item 12 0.0540 0.8453 0.0540 0.8453
- item 13 0.0559 0.8346 0.0559 0.8345
- item 14 0.0577 0.8408 0.0577 0.8407
- item 15 0.0578 0.8406 0.0578 0.8405
- item 16 0.0581 0.8694 0.0581 0.8693
- item 17 0.0565 0.8142 0.0566 0.8143
- item 18 0.0560 0.8021 0.0560 0.8022
- item 19 0.0560 0.7964 0.0561 0.7965
- item 20 0.0584 0.8410 0.0584 0.8410
- item 21 0.0450 0.6233 0.0450 0.6232
- item 22 0.0467 0.6488 0.0467 0.6488
- item 23 0.0491 0.6905 0.0491 0.6905
- item 24 0.0408 0.5739 0.0408 0.5739
User information satisfaction
- item 1 0.2162 0.7257 0.2099 0.7219
- item 2 0.1982 0.7409 0.1937 0.7384
- item 3 0.2346 0.8521 0.2386 0.8542
- item 4 0.2226 0.8100 0.2294 0.8131
- item 5 0.2075 0.8287 0.2084 0.8296
- item 6 0.1943 0.7720 0.1921 0.7720
Use
- item 1 0.5897 0.9349 0.6080 0.9400
- item 2 0.4951 0.9063 0.4761 0.9001
Individual impact
- item 1 0.1362 0.8517 0.1358 0.8515
- item 2 0.1814 0.9088 0.1814 0.9088
- item 3 0.1968 0.8976 0.1967 0.8975
- item 4 0.2161 0.9213 0.2162 0.9213
- item 5 0.2098 0.9169 0.2100 0.9170
- item 6 0.1901 0.7948 0.1903 0.7949
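The weights above are PLS outer weights: each construct score is a weighted sum of its standardized indicators, while the loadings are the correlations between each indicator and its construct. As a minimal sketch of how such a composite is formed — the "Use" weights come from Model 1 in the table, but the response data and function name are invented for illustration — the computation looks roughly like this:

```python
import numpy as np

# Model 1 outer weights for the Use construct (Appendix B):
# item 1 = daily use, item 2 = frequency of use.
use_weights = np.array([0.5897, 0.4951])

def construct_score(items: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Standardize each indicator column, then form the weighted composite.
    `items` is an (n_respondents, n_indicators) array of raw responses."""
    z = (items - items.mean(axis=0)) / items.std(axis=0)
    return z @ weights

# Hypothetical responses of four users on the two 1-6 Use scales:
responses = np.array([[3, 5],
                      [4, 6],
                      [2, 3],
                      [5, 6]], dtype=float)
scores = construct_score(responses, use_weights)
```

Because the indicators are standardized before weighting, the resulting construct scores are centered on zero; the weights determine each indicator's relative contribution, which is why the near-identical Model 1 and Model 2 columns in the table yield near-identical construct scores.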