Government Information Quarterly 29 (2012) 324–334


Towards a multidimensional model for evaluating electronic government: Proposing a more comprehensive and integrative perspective

Luis Felipe Luna-Reyes a,⁎, J. Ramon Gil-Garcia b,1, Georgina Romero c,2

a Universidad de las Americas Puebla, School of Business and Economics, NE-221 H, Sta. Catarina Martir, Cholula, Puebla, 72820, Mexico
b Centro de Investigacion y Docencia Economicas (CIDE), Carretera México-Toluca No. 3655, Col. Lomas de Santa Fe, C.P. 01210, México, D. F., Mexico
c Universidad de las Americas Puebla, School of Engineering, Sta. Catarina Martir, Cholula, Puebla, 72820, Mexico

Article info

Available online 11 May 2012

Keywords: Evaluation; Measurement; Electronic government

Abstract

The use of information and communication technologies has been a key strategy for government reform. It offers diverse benefits, ranging from efficiency and effectiveness to transparency and greater democratic participation. Governments in many parts of the world have invested vast resources into electronic government projects with the expectation of achieving these and other outcomes. However, the results in many cases are limited and there is no comprehensive way to evaluate these initiatives at the aggregate level. A method for measuring and evaluating electronic government that identifies its advances and problems is needed. Previous efforts to do so are limited in terms of scope and dimensions being considered. Based on a review of current literature and the analysis of international best practices, this paper proposes a multidimensional model for measuring and evaluating electronic government. It also includes examples, a proposal of how to operationalize it, and several recommendations for practical use.

© 2012 Elsevier Inc. All rights reserved.

⁎ Corresponding author. Fax: +52 222 229 2726.
E-mail addresses: luisf.luna@udlap.mx (L.F. Luna-Reyes), joseramon.gil@cide.edu (J.R. Gil-Garcia), georgina.rm@gmail.com (G. Romero).
URL: http://www.cide.edu (J.R. Gil-Garcia).
1 Tel.: +52 55 5727 9800x2311.
2 Tel.: +52 222 229 2000; fax: +52 222 229 2726.
0740-624X/$ – see front matter © 2012 Elsevier Inc. All rights reserved. doi:10.1016/j.giq.2012.03.001

1. Introduction

The use of information and communication technologies (ICT) has made significant inroads into diverse aspects of social life over the last few years. The application of ICT to government – or electronic government – has been considered an important strategy for government administrative reform. Electronic government has the potential to transform fundamental relationships between government, citizens, businesses, and other stakeholders. Nevertheless, we still know little about the impacts and results associated with electronic government projects or their capacity to produce real organizational transformation. Although efforts are being made to evaluate different dimensions of electronic government (Accenture, 2004; UNPAN, 2008; West, 2008) and some conceptual models exist (Esteves & Joseph, 2008; Gupta & Jana, 2003; Karunasena & Deng, 2012; RICYT-CYTED, UMIC, & ISCTE, 2006; Stowers, 2004; Verdegem & Verleye, 2009), at present there is no methodology that allows for flexible and comparative measurement of the phenomenon of electronic government in a comprehensive and integral way. Most previous high-level models assess only the aggregate level, but do not propose ways to clearly link those overarching results with those found in single agencies or single projects.

Electronic government involves a complex web of relationships between technological, organizational, institutional, and contextual variables. In order to capture this complexity, an e-government evaluation model should measure and assess not only the final outcomes, results, or benefits, but also the technological characteristics of the systems and the existing conditions in terms of organizational forms, institutional arrangements, and contextual variables. This design would allow evaluators to understand how the results are produced and, consequently, to understand the importance of different actions in enabling or limiting e-government efforts. In addition, this model allows understanding of how, for example, an individual agency effort contributes to the overall e-government score. A model like this would be useful not only to compare between countries or between regions within a country, but also to guide governments in making decisions about their current and future e-government initiatives.

The purpose of this article is to analyze the current state of the art of how electronic government is measured and to propose guidelines to develop a "Multi-Dimensional Model for Measuring Electronic Government." This model seeks to incorporate the different approaches currently used to measure electronic government and the underpinnings in the literature related to this phenomenon in a balanced and integrated fashion. The article also provides some suggestions of how to operationalize the model and how to use it in practice. This article is divided into six sections including the foregoing introduction. In the second section we begin by defining the phenomenon to be measured and some of the conceptualizations regarding evaluation in the literature. The third section describes the sources and method used to develop the model. The fourth and fifth sections describe the
model and the way in which it can be used. The last section includes some final comments and suggestions for future research.

2. Electronic government and evaluation

The first step toward developing an evaluation model is to understand the phenomenon to be evaluated. Unfortunately, there is no consensus about a definition of electronic government. This first section proposes a conceptualization of electronic government and its evaluation, only as a starting point and as a way to provide basic concepts. The actual literature review conducted as the basis for the model is presented jointly with the best practices results in the analysis section of this article.

2.1. Defining electronic government

There are currently a large number of definitions of electronic government. For the purposes of this paper, we propose that "electronic government is the selection, implementation, and use of information and communication technologies in government to provide public services, improve managerial effectiveness, and promote democratic values and mechanisms, as well as the development of a legal and regulatory framework that facilitates information-intensive initiatives and fosters the knowledge society" (Gil-Garcia & Luna-Reyes, 2006, p. 639). The definition includes four areas of application for electronic government. The first lies in the area of public services through ICT, or "e-services". The second concerns the use of information and communication technologies to improve and innovate in government operations, internal efficiency, and efforts directed at government reform and administration, or "e-management". The third involves the use of ICT to promote citizen participation in its many manifestations and encourage democratic relationships between government, citizens, and other social actors, or "e-democracy". Finally, the fourth refers to the creation of a legal and regulatory framework that facilitates electronic government initiatives and fosters an atmosphere conducive to the information society, or "e-public policy".

2.2. Evaluating electronic government

In addition to establishing our conceptualization of electronic government, it is important to discuss our understanding of the evaluation process. The objective of the evaluation is to "measure the effects of a program by comparing it to its proposed outcomes so as to contribute to subsequent decision making concerning the program and to improve future programming" (Weiss, 1991). There are several general models that fall within this paradigm, the most common of which is the CIPP model by Stufflebeam (2003), which has four main components that can be included in evaluation projects: (1) context, (2) inputs, (3) processes, and (4) products. In the specific case of electronic government, examples of evaluation are relatively scarce in the literature, although the concern about evaluation has increased in the last few years.

One way of classifying measurement and evaluation is according to the stage at which they are undertaken: (1) Beginning of project — measurement and evaluation of electronic government can, and in general must, commence prior to the project officially beginning (analysis of requirements and design or quality-driven approaches, for example) (Andersen, Belardo, & Dawes, 1994; Batini, Viscusi, & Cherubini, 2009; Cresswell, 2004; Dawes et al., 2004); (2) End of project — the project must be evaluated taking into account its impacts or results. These are normally specific to each initiative and context, but are generally related to the objectives of public sector reform and the creation of public value (Gouscos, Kalikakis, Legal, & Papadopoulou, 2007; Heeks, 2005; Kum, Duncan, & Stewart, 2009; Mitra & Gupta, 2008); (3) Processes (monitoring progress) — one project management activity is to monitor progress of the actions needed to complete an electronic government initiative (Stowers, 2004); and (4) Periodic (comparability) — its main objective is not a particular country or initiative, but a collection of countries, states, or localities considered to be comparable. These evaluations usually include certain basic aspects of electronic government or government portals (Gant, Gant, & Johnson, 2002; Sandoval & Gil-García, 2005; UNPAN, 2008; West, 2008).

Lastly, it is important to point out that measuring and evaluating electronic government are complex and multidimensional tasks. This complexity derives not only from the fact that the phenomenon is itself complex, but also from the reality that many actors are involved, quite often with differing or conflicting points of view. Furthermore, an electronic government initiative can be measured or evaluated in many different ways (as an information system, against initial objectives, in terms of efficiency, etc.), considering different phases (beginning, implementation, production, etc.), and with different objectives (comparison, detection of needs, supporting decisions, understanding a phenomenon, etc.). Therefore, any evaluation model sacrifices some of its usefulness in order to meet a particular objective in exchange for becoming a better model for another objective.

The challenge taken up by this article is to offer a first approximation of a measurement model that can be adapted to satisfy more than one of these objectives. Initially this model is proposed as a high-level model to assess e-government at the aggregate level (region, country, province, municipality, etc.). The purpose in this case could be similar to previous high-level efforts and concentrate on comparisons between governments in their entirety. However, this model also attempts to help public managers understand how different ministries, departments, and other sub-units are contributing to the general efforts and what they need to do in order to improve their overall performance.

3. Research design and methods

For the purpose of developing the model proposed in this article, we followed a two-stage approach and used two complementary sources of information. The approach we followed is consistent with Theory-Based Evaluation (TBE), in which the objective is to build and test general causal mechanisms associated with the outcomes of a program or project (Birckmayer & Weiss, 2000; Weiss, 1997). Our two main sources of information were (1) a fairly comprehensive review of recent academic literature and (2) a careful analysis of other existing frameworks for evaluating electronic government. The following paragraphs describe the two stages followed in order to build the model.

The first stage consisted of a literature review to understand the underlying theory regarding electronic government value creation. The literature review consisted of a systematic review of ten of the most important academic journals in the fields of public administration and public policy over a seven-year period (1999–2005) (Forrester & Watson, 1994). The journals on public administration were Public Administration Review, Journal of Public Administration Research and Theory, American Review of Public Administration, Administration and Society, and Public Performance and Management Review. In relation to public policy, the academic journals consulted were Journal of Policy Analysis and Management, Journal of Public Policy, Policy Sciences, Policy Studies Journal, and Policy Studies Review. From this review we found 73 articles on topics related to the use of information and communication technologies in government. The review was complemented with a selection of books, research reports, and articles from academic journals on business, management information systems, information sciences, and strategic management, among others.

When reading each article, we systematically looked for relevant concepts and theories about e-government results in public administration (outputs, outcomes, benefits, etc.). Through the review we found that a generic theory for value creation in electronic government could be D → C → R (determinants, characteristics, and results). In other
words, the underlying assumption in the e-government literature is that some characteristics of quality e-government applications (such as personalization or usability) depend on a series of determinants, like institutional and organizational arrangements. In turn, high-quality applications will create expected results in terms of adoption, efficiency, or increased public participation, among others.

The second stage consisted of a search for evaluation frameworks and current practices, both in the academic literature and on the internet. We used expert knowledge from studies on electronic government evaluation, complemented by web searches and searches in academic journals. This process resulted in 36 evaluation models or frameworks, from which a wide variety of approaches, variables, and indicators were identified. We incorporated into the model some of the main variables and indicators related to each of the categories that we identified during the first stage of the process. In the following sections, we present the main results of both reviews as an integrative and comprehensive model for evaluating electronic government.

4. Building a multi-dimensional model to comprehensively measure electronic government

This section of the document introduces a measurement framework that emerged from our review of the literature on electronic government and current evaluation frameworks. The section ends by matching the conceptual framework with operational measures found in practice.

4.1. Literature on electronic government

As a result of the literature review, there were three main themes, which constitute the general underlying theory for the development of a model for measuring and assessing e-government efforts (see Fig. 1): (1) What is electronic government and what are its primary CHARACTERISTICS? (2) What are the benefits or RESULTS of electronic government? (3) What are the main DETERMINANTS or factors influencing the success of electronic government?

The characteristics of electronic government represent levels of functionality and technical aspects of electronic government systems and applications. These characteristics provide a way of measuring the success of initiatives in terms of how they meet technical requirements such as usability, quality of information, privacy, or security. In addition, they reflect the level of sophistication of these systems, differentiating, for example, between applications that only provide information and those that serve to carry out application processes or government services associated with health, education, and other important policy areas. Keeping in mind our definition of electronic government and the literature review, the characteristics of electronic government can be grouped into e-services (Cook & LaVigne, 2002; Gil-Garcia & Luna-Reyes, 2006; Hiller & Bélanger, 2001; Holmes, 2001; Moon, 2002; OECD, 2003; UNPAN, 2008), e-management (Cook & LaVigne, 2002; Gil-Garcia & Luna-Reyes, 2006; Hiller & Bélanger, 2001; Holmes, 2001; Moon, 2002; OECD, 2003; UNPAN, 2008), e-democracy (Cook & LaVigne, 2002; Gil-Garcia & Luna-Reyes, 2006; Hiller & Bélanger, 2001; OECD, 2003; UNPAN, 2008), and e-policy (6, 2001; Gil-Garcia & Luna-Reyes, 2006).

The results represent the benefits that have been identified as effects of electronic government. They provide a simplified indication of the impact of electronic government and the value created by these initiatives. This value could be measured by assessing multiple results. Among the main electronic government results identified in the literature are the following: improvements in the quality of public services (Brown & Brudney, 2004; Dawes & Prefontaine, 2003; Gant et al., 2002; OECD, 2003; West, 2004; Kaisara & Pather, 2011), efficiency and productivity in processes and government operations (Brown, 2001; Esteves & Joseph, 2008; Klein, 2000; Lee & Perry, 2002; Mitra & Gupta, 2008; OECD, 2003), more effective programs and policies (6, 2001; Brown & Brudney, 2003; Dawes, 1996; Kellogg & Mathur, 2003; Kum et al., 2009; Landsbergen & Wolken, 2001; OECD, 2003), transparency and accountability (Gil-Garcia & Luna-Reyes, 2006; OECD, 2003; Rocheleau, 2003; Welch, Hinnant, & Moon, 2005; Welch & Wong, 2001), citizen participation (Fountain, 2003; Kellogg & Mathur, 2003; La Porte, Demchak, & Friis, 2001; West, 2004), a regulatory framework that supports electronic government (Andersen & Dawes, 1991; Dawes & Nelson, 1995; Gil-García, 2004), a legal and regulatory framework that encourages the information society (Gil-Garcia & Luna-Reyes, 2006; Helbig, Gil-García, & Ferro, 2005; Rogers & Kingsley, 2004), and transformation of government structures (Fountain, 2001; Garson, 2004; Heintze & Bretschneider, 2000; Kraemer, King, Dunkle, & Lane, 1989). All these results are sources of value for citizens, businesses, and other stakeholders. Including end users and citizens in the measurement and assessment of these variables is an important component of any evaluation process. Several techniques, such as surveys, focus groups, and interviews, could be used to collect data about their perceptions of the general value created, as well as some specific results.

Fig. 1. Theoretical–conceptual model for comprehensive measuring of electronic government.


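The D → C → R reasoning above can be sketched in code. This is only an illustrative sketch: the variable names, scores, and equal-weight averaging below are hypothetical placeholders chosen for the example, not measures or weights proposed by the authors.

```python
from statistics import mean

# Hypothetical 0-1 scores for one government unit, grouped by the three
# dimensions of the D -> C -> R reasoning (determinants, characteristics,
# results). Names and values are illustrative only.
scores = {
    "determinants": {"data_quality": 0.7, "it_infrastructure": 0.6},
    "characteristics": {"usability": 0.8, "security": 0.5},
    "results": {"service_quality": 0.6, "transparency": 0.4},
}

def dimension_index(variables):
    """Collapse one dimension's variable scores into an index (equal weights)."""
    return round(mean(variables.values()), 2)

profile = {dim: dimension_index(vars_) for dim, vars_ in scores.items()}
print(profile)  # {'determinants': 0.65, 'characteristics': 0.65, 'results': 0.5}
```

A profile of this kind would let an evaluator see at a glance whether weak results co-occur with weak determinants, which is the diagnostic reading the model is meant to support.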
Understanding the characteristics and results of electronic government is a necessary step toward deciding what is to be measured, but it is insufficient; a better understanding of the relevant factors or determinants of the results and characteristics could improve the evaluations of electronic government. The main determinants of electronic government identified in the literature include the quality of existing information and data (Ambite et al., 2002; Dawes, 1996; Dawes & Pardo, 2002; Dawes, Pardo, & Cresswell, 2004; Kaplan, Krishnan, Padman, & Peters, 1998; Redman, 1998), adequate technological infrastructure and compatibility (Brown, 2000; Caffrey, 1998; Davis, 1989; Dawes, 1996; Dawes & Pardo, 2002; Mahler & Regan, 2003), organizational and management characteristics (Barki, Rivard, & Talbot, 1993; Barrett & Greene, 2000; Caffrey, 1998; Dawes & Nelson, 1995; Dawes & Pardo, 2002; Edmiston, 2003; Jiang & Klein, 2000; Rocheleau, 2003), legal and institutional frameworks (Bajjaly, 1999; Bellamy, 2000; Brown & Brudney, 2003; Caffrey, 1998; Dawes & Nelson, 1995; Dawes & Pardo, 2002; Fountain, 2001; Harris, 2000; Landsbergen & Wolken, 2001), and the political, economic, and social contexts (Bellamy, 2000; Dawes & Pardo, 2002; Fountain, 2001; Gil-Garcia, 2005; Kraemer et al., 1989; La Porte, Demchak, & de Jong, 2002; Rocheleau, 2003; Thomas & Streib, 2003; Welch et al., 2005; West, 2004). Accordingly, the literature review suggests that a conceptual model for evaluating electronic government would include not only its main characteristics, but also its results and determinants (see Fig. 1).

4.2. Evaluation models

In order to identify current indicators and practices for evaluating electronic government, we reviewed existing models used by international agencies, private businesses, and academics. This section reviews the main methods, models, or indices that have been used to measure and evaluate electronic government. The thirty-six documents that were identified belong to three different categories. The first contains reports with comparative indices on electronic government. The second includes models and reports found in academic journals. Finally, the third constitutes conceptual evaluation frameworks. Some of the studies included in this section do not directly analyze the phenomenon of electronic government, but are included because they considered some of the determinants or results of electronic government. The reports included in this section show the diversity of the approaches that have been used to measure and evaluate electronic government or the dimensions associated with it (see Table 1).

Table 1
Reports and documentation included in the study.

Conceptual frameworks:
- Lisbon Manual [LISBOA] (RICYT-CYTED et al., 2006)
- Performance in e-Government by the Center for the Business of Government [STOWERS] (Stowers, 2004)
- Electronic Government Evaluation Framework [GUPTAJ] (Gupta & Jana, 2003)
- Critical Factors for Evaluating Public Value [PUBLIC] (Karunasena & Deng, 2012)

Comparative reports:
- Brown University Global e-Government Report [BROWN] (West, 2008)
- United Nations Electronic Government Report [UNPAN] (UNPAN, 2008)
- Eurostat [EUROSAT] (Eurostat, 2005; Reis, 2005)
- European Electronic Government Report by the Observatory for Interoperable Delivery of e-Government Services to Public Administrations, Businesses and Citizens [IDABC] (Chevallerau, 2005)
- Information Society Indicators [ISI] (CELA-IESE & DMR Consulting, 2005)
- North American Customer Satisfaction Index [ACS] (ACSI, 1994)
- Digital States [ESTDIG] (Lassman, 2001)
- Electronic Government Leadership [ACCENT] (Accenture, 2004)
- e-readiness Index [MINGUES] (Minges, 2005)
- Key ICT Indicators [PARTNER] (ONU, 2005)

Models from the academic literature:
- Performance Based Electronic Government [DESEM] (DeMaio et al., 2002)
- Benchmarking Government Services [KAYLOR] (Kaylor, Deshazo, & Van Eck, 2001)
- Multidimensional evaluation of government portals [GANT] (Gant et al., 2002)
- Automatic diagnostic for analyzing government applications [CHOUD] (Choudrie, Ghinea, & Weerakkody, 2004)
- Benchmarking Web Sites [MISJOHN] (Misic & Johnson, 1999)
- GovQual [GOVQUAL] (Batini et al., 2009)
- Measuring User Satisfaction [USAT] (Verdegem & Verleye, 2009)
- Website Usability Benchmarks [USABILITY] (Baker, 2009)
- One-Stop Services Performance [PERFORMANCE] (Gouscos et al., 2007)
- Information Systems Success [WANG] (Wang & Liao, 2008)
- Ex-Post Assessment of e-Government Projects [EXPOST] (Esteves & Joseph, 2008)
- Evaluating IT Innovations [ITINNOV] (Raus, Liu, & Kipp, 2010)
- Systemic Evaluation of Public Participation [EVPART] (Gelders, Brans, Maesschalck, & Colsoul, 2010)
- e-Government Evaluation Challenge [EVCHALLENGE] (Kaisara & Pather, 2011)
- Self-evaluation in Local Government [SELF] (Kum et al., 2009)
- Contextual Perspective of Performance Assessment [CONTEXT] (Mitra & Gupta, 2008)
- Managing WWW in Public Administration [WWW] (Huang & Chao, 2001)
- Accessibility of Government Home Pages [ACCESS] (Olalere & Lazar, 2011)
- Success Indicators for Pre-Implementation of Projects [PREIMP] (Sharifi & Manian, 2010)
- e-Government Maturity Model [MATURITY] (Valdés et al., 2011)
- Evaluating Municipal Websites [TALOUD] (van den Haak, de Jong, & Schellens, 2009)
- e-Government Heuristics [HEURISTICS] (Donker-Kuijer, de Jong, & Lentz, 2010)

The reports analyzed in this section differ in terms of the unit of analysis and the main approach followed in each. Regarding the level of analysis, thirteen of the reports consider the evaluation process to be directed at the country level, two at the state level, three at the local level, thirteen at the individual agency or project level, and five constitute generic frameworks for evaluating performance that can be applied to any of the previously mentioned levels by using different aggregation levels. In terms of approach, the reports analyze system functionality, characteristics of services, quality of services, ICT indicators, technical characteristics, project impacts, or a combination of several of these elements.

The reports that focus on the different functionalities of electronic government projects include aspects such as the amount of information and services offered, privacy and security, accessibility, usability, and transparency. Three reports employ automatic evaluation of internet portals in order to evaluate accessibility based on the recommendations of the World Wide Web Consortium. Those that concentrate purely on services use a list of standard services and build a rubric grading the services selected on a scale that ranges from only providing information about the service to full online delivery. The reports that focus on quality of services use citizen expectations and satisfaction focus groups or questionnaires. The reports focusing on ICT indicators (or on the information society) consistently use the indicators related to infrastructure and ICT used by individuals and businesses. It is important to note that only a few of these reports include the citizen or end user point of view, which we believe is an important component of each evaluation process.

Reports combining more than one aspect of e-government follow different approaches. For example, the [UNPAN] report gathers ICT indicators with functionality indicators in electronic government. The
[EUROSAT] and [ACCENT] reports reconcile service indicators with quality of service indicators. The [DESEM] and [STOWERS] reports use a more comprehensive approach, which combines different indicators for input, infrastructure, results, and activity. Models like those presented in [PERFORMANCE], [ITINNOV], and [CONTEXT] include different sets of performance indicators according to the goals of different stakeholders. Other reports, like [USABILITY], discuss limitations related to current measurement systems in benchmark reports. Reports such as [HEURISTICS] and [TALOUD] concentrate specifically on usability studies. Finally, reports such as [IDABC] and [GUPTAJ] concentrate on offering a description that mixes quantitative and qualitative data to obtain a more complete view of the development of electronic government in the national context in which they are developed. The Lisbon Manual [LISBOA] proposes measuring a series of variables associated with the Information and Knowledge Society.

Finally, the measurement frameworks analyzed in this section also differ regarding the variables used in their analysis. Table 2 arranges the measurement frameworks in terms of the primary variables found in them, crossing these variables with the determinants, characteristics, and results encountered in the literature review presented previously.

Table 2
Map of the measurement framework with electronic government characteristics, determinants, and results found in the literature.

Determinants:
- Quality of the information and existing data to feed the systems [GOVQUAL] [MATURITY]
- Technological infrastructure and compatibility [IDABC] [LISBOA] [MATURITY]
- Organizational and management characteristics [IDABC] [ESTDIG] [DESEM] [STOWERS] [EVPART] [SELF] [PREIMP] [MATURITY]
- Existing legal and institutional framework [IDABC] [EVPART] [ACCESS] [MATURITY]
- Potential demand [UNPAN] [EUROSTAT] [IDABC] [ISI] [MINGUES] [PARTNER] [LISBOA]

Characteristics:
- Quality of information available on web sites and in systems [BROWN] [ESTDIG] [STOWERS] [KAYLOR] [GANT] [CHOUD] [MISJOHN] [USABILITY] [WANG] [WWW] [TALOUD] [HEURISTICS]
- Services [BROWN] [UNPAN] [EUROSTAT] [IDABC] [ESTDIG] [ACCENT] [KAYLOR] [LISBOA] [WANG] [HEURISTICS]
- Interaction [BROWN] [UNPAN] [MISJOHN]
- Integration [ACCENT]
- Personalization [ACCENT] [GANT]
- Security [BROWN]
- Privacy [BROWN]
- Accessibility [BROWN] [STOWERS] [GANT] [ACCESS] [HEURISTICS]
- Usability and usefulness [ACS] [STOWERS] [GANT] [MISJOHN] [USABILITY] [WWW] [TALOUD] [HEURISTICS]

Results:
- Statistics on systems usage [DESEM] [STOWERS]
- Quality of public services [ACS] [STOWERS] [USAT] [PERFORMANCE] [WANG] [ITINNOV] [EVCHALLENGE] [CONTEXT] [PUBLIC]
- Efficiency and productivity [DESEM] [STOWERS] [PERFORMANCE] [EXPOST] [ITINNOV] [CONTEXT] [PUBLIC]
- Effectiveness of programs and policies [DESEM] [PERFORMANCE] [ITINNOV] [SELF]
- Transparency and accountability [STOWERS] [EVPART] [EVCHALLENGE]
- Citizen participation [ESTDIG] [EVPART]
- Changes in the regulatory framework [ACCESS]

The mapping undertaken between the literature and the practice of evaluating electronic government suggests that a comprehensive electronic government model could be made up of three conceptual dimensions (determinants, characteristics, and results) and 21 observable variables (5 determinants, 9 characteristics, and 7 results). No previous framework that balances variables associated with the determinants, characteristics, and results of electronic government was found in the literature. Moreover, the greater part of the indicators are limited to analyzing the status of variables for the characteristics and determinants of electronic government. Very few models include variables for results. The model proposed in this article, which is composed of these 21 variables, constitutes an alternative for evaluating electronic government in a more comprehensive way.

The model considers the possibility of using several sources of information to make up indices for the 21 variables it includes. Generally, the indices for each variable in the model may incorporate information from national indicators associated with information and communication technology; budget indicators and organizational indicators for different departments or branches of government (executive, legislative, and judicial); data on the quality of web portals that represent each and every one of these ministries or branches; direct observation of web portals; and data gathered through surveys of those responsible for ICT in the same departments and branches of government. In the variables related to results, it is highly recommended to include measures associated with value for the end user or the citizen and other stakeholders.

There are alternative ways to organize relevant variables for measuring and evaluating electronic government processes, such as the one proposed by Stowers or in the Lisbon Manual. What we present in this paper is simply a way of carrying out the evaluation that may be deemed useful for creating an instrument to better understand the phenomenon of e-government. One of the strengths of this model for the comprehensive evaluation of electronic government is that it stems from a broad characterization of this phenomenon and includes a wide variety of variables that capture an array of relevant aspects, described through 21 variables arranged in three dimensions that capture what could be thought of as the basic underlying reasoning in developing e-government projects: determinants, characteristics, and results of electronic government (D → C → R). Therefore, our proposal puts forward that in order to measure or evaluate electronic government integrally and comprehensively, it is necessary to select indicators that represent the determinants, characteristics, and results of electronic government, as well as each and every variable that constitutes these three dimensions.

5. Operationalization and uses for the model

It is important to mention that the operationalization of the model must respond to a process of consensus building with decision makers, both in terms of the objectives of the evaluation and the indicators that can be used to create an index for each variable. In this section of the paper we present in detail a hypothetical operationalization of one of the variables as an illustration of the model's application, followed by a demonstration of the possible uses of the evaluation model we are proposing.³

³ Showing a detailed operationalization for each of the 21 variables goes beyond the scope of this paper. Readers interested in looking at plausible sets of measures for all variables may look at Gil-Garcia and Luna-Reyes (2007).

5.1. Example of potential operationalization of variables and indicators of the model

In the following paragraphs, we present a possible definition for one of the variables included in the model. The conceptual and operational
L.F. Luna-Reyes et al. / Government Information Quarterly 29 (2012) 324–334 329

definition of the variable made here considers evaluation of electronic government at the country level, with the purpose of comparing different countries within a region or continent. The variable selected is quality of information and existing data. Below we suggest a definition, as well as indicators and a formula to combine the indicators in order to obtain an index on a scale of 100 points. The proposed formula can be used to calculate the index for a ministry or government body, which may be considered a unit of analysis. The data selected in the example can be obtained by means of a survey given to the CIO of each unit. The results from each of the ministries selected in the sample for the country can be aggregated using an average to obtain an index for the entire country. While it is clear that the indicators suggested are not perfect, the main criterion for which they were chosen was that they were available or easy to obtain. In this way, it is possible to generate a low-cost prototype that shows the value of the indices and sparks debate about the best indicators for each variable and the ways of institutionalizing their collection, for example, in countries within a region.

5.1.1. Quality of information and existing data

This variable measures the existence of reliable information about government departments and branches, about government processes and services delivered through traditional channels, about the way this information could be posted online or offered through other means, and about whether it can be used to integrate convergent systems that facilitate offering comprehensive services to citizens. The quality of the structure and definitions of the data has a big influence on the type of system that can be developed, and its effects are usually reflected in the adequacy of the system to support processes and decisions within the organization. Moreover, the nonexistence of data, or its poor quality, is one of the main sources of expense in the process of developing information systems. It is for this reason that information and data quality are included as one of the variables in the determinants dimension. The data available and their quality determine the functionality that can be included in applications used in electronic government.

In this example, we are creating a variable index on a scale of 100 points. The suggested metrics associated with the variable are thought of as responses to questions that would be included in a questionnaire given to the parties responsible for information technology areas in the selected ministries, departments, and agencies (units of analysis), on a scale of 10 points. The questions proposed, which are written below, were selected from a group of metrics that have been successfully used to assess the readiness of government organizations to initiate ICT projects involving systems integration (Cresswell, Pardo, Canestraro, Dawes, & Juraga, 2005; Dawes, Pardo, & Cresswell, 2004).

B1. Generally, all necessary information to create websites and systems exists.
B2. The information necessary to create websites and systems is complete.
B3. The information needed to create websites and systems is free of errors.
B4. The information needed to create websites and systems can be obtained when it is required.
B5. We maintain dictionaries with precise data for all of the information we need to create websites and systems.
B6. High quality methods are available for all the data we need to create websites and systems.

For the purpose of compiling the index, metrics are grouped into three sub-variables, as shown in the following formula. We decided to give greater weight to the quality of information and its definition than to whether it exists or not. However, this approach is only a suggested way to integrate this component. When developing an actual evaluation, researchers or practitioners should reach an agreement regarding which variables to include and how much weight to put on each of them.

B = 2/10 × Existence of the Information
  + 4/10 × Quality of Information
  + 4/10 × Quality and Definition of Data
  = 2/10 × (B1 × 10) + 4/10 × ((B2 + B3 + B4)/3 × 10) + 4/10 × ((B5 + B6)/2 × 10)

5.2. Uses for the model

In order to illustrate the use of the model to describe a specific country or to compare different countries, in this section we use a numerical example that hypothetically represents the case of Mexico and three other countries. Metrics associated with each variable were selected on the basis of the literature review and the analysis of evaluation reports (Gil-Garcia & Luna-Reyes, 2007). The main data sources considered in the metrics include country ICT indicators, perceptions from CIOs obtained through a survey, and direct observations of websites. The data used in this example are random numbers for all of the metrics associated with perceptions and information that, in a real application of the model, would be supplied by those responsible for ICT in the ministries and branches of government (executive, legislative, and judicial). In order to generate the values, a truncated normal distribution was used to constrain the possible values of each metric. The same random process was followed to generate metrics associated with the observation of departmental websites and portals. Finally, some data included in this example are associated with metrics that can be obtained from secondary sources. The source of each of these indicators was specifically defined when each variable was defined (Gil-Garcia & Luna-Reyes, 2007). In general terms, the indicators in this category are those associated with telecommunications, education, health, the economy, and transparency, and the sources used were the International Telecommunication Union, the World Bank, the World Economic Forum, Transparency International, UNESCO, and local national statistics offices, such as the National Institute for Statistics, Geography and Informatics (INEGI) in the case of Mexico.

5.2.1. Using the model as a tool for describing the situation of a country

When the variables presented in Table 2 from the previous section are placed in a single table, it not only constitutes a way of presenting the variables in the model, but also provides a way of presenting the results to decision makers and those responsible for electronic government in the country where the model is used, functioning as a control panel where every decision maker can see the overall situation of their country. Fig. 2 represents the numerical example described at the beginning of this section. The panel has 24 indices that describe the hypothetical situation of electronic government in Mexico. Three of the indicators correspond to the mathematical average of the variables in each dimension, whereas the other 21 represent the different variables that make up the model.

Looking at the indices that total each dimension, it could be said that this country is relatively balanced across the three dimensions, with index values that lean toward the center of the scale in each. The dimension in which this country appears most advanced is characteristics (50.4), followed by determinants (47.4) and results (45.9). Nonetheless, there are opportunities in this country for improvement in each of the three dimensions, and having access to the indices on the control panel is a first step toward informing practice. Decision makers and government practitioners in this country would be able to create plans to improve their general performance and some specific determinants, characteristics, and results according to their strategic plans.
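The index construction suggested in Section 5.1.1 can be sketched in code. The Python fragment below is only an illustration of the suggested formula: it scores one unit of analysis from its six 10-point survey responses (B1 to B6) and then averages the unit-level indices into a country-level index, as described in the text. The function name and the sample responses are our own, not part of the paper.

```python
from statistics import mean

def quality_of_information_index(b):
    """Index for 'quality of information and existing data' on a 0-100 scale.

    `b` maps the metric names B1..B6 to survey responses on a 0-10 scale.
    Weights follow the suggested formula: 2/10 for existence, 4/10 for
    quality of information, and 4/10 for quality and definition of data.
    """
    existence = b["B1"] * 10                               # one metric
    info_quality = (b["B2"] + b["B3"] + b["B4"]) / 3 * 10  # three metrics
    data_quality = (b["B5"] + b["B6"]) / 2 * 10            # two metrics
    return 0.2 * existence + 0.4 * info_quality + 0.4 * data_quality

# Hypothetical CIO responses for three sampled units of analysis.
units = [
    {"B1": 7, "B2": 6, "B3": 5, "B4": 7, "B5": 4, "B6": 5},
    {"B1": 8, "B2": 7, "B3": 7, "B4": 6, "B5": 6, "B6": 5},
    {"B1": 5, "B2": 5, "B3": 6, "B4": 5, "B5": 3, "B6": 4},
]

unit_indices = [quality_of_information_index(u) for u in units]
country_index = mean(unit_indices)  # simple average, as suggested in the text
```

Any other weighting scheme agreed upon by the evaluators could be substituted for the 2/10 and 4/10 coefficients without changing the structure of the computation.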
Determinants
  B. Quality of Information and Existing Data to Feed Systems: 50.6
  C. Technological Infrastructure and Compatibility: 52.4
  D. Organizational and Management Characteristics: 52.3
  E. Existing Legal and Institutional Framework: 50.9
  F. Potential Demand: 30.8
  Total Determinants: 47.4

Characteristics
  G. Quality of Information Available on Sites and Systems: 52.7
  H. Services: 48.9
  I. Interaction: 47.5
  J. Integration: 52.0
  K. Personalization: 52.4
  L. Security: 53.0
  M. Privacy: 45.9
  N. Accessibility: 47.8
  O. Usability and Usefulness: 53.3
  Total Characteristics: 50.4

Results
  P. Statistics on Systems Usage: 28.4
  Q. Quality of Public Services: 47.3
  R. Efficiency and Productivity: 53.0
  S. Effectiveness of Programs and Policies: 53.3
  T. Transparency and Accountability: 41.9
  U. Citizen or User Participation: 47.4
  V. Changes in the Regulatory Framework: 50.0
  Total Results: 45.9

Fig. 2. Numerical example of the control panel for the case of Mexico. This control panel was generated from a significant number of random indicators. It has only been included to illustrate the potential use of the model, and in no way represents the situation in Mexico.
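The dimension totals in Fig. 2 are plain averages of the variable indices. As a sketch (the values are copied from the hypothetical Fig. 2 panel; the dictionary structure and code are ours), the following Python fragment reproduces the three totals and picks out the variable with the greatest room for improvement in each dimension:

```python
from statistics import mean

# Variable indices copied from the hypothetical control panel in Fig. 2.
control_panel = {
    "determinants": {
        "F. Potential Demand": 30.8,
        "B. Quality of Information and Existing Data": 50.6,
        "E. Existing Legal and Institutional Framework": 50.9,
        "D. Organizational and Management Characteristics": 52.3,
        "C. Technological Infrastructure and Compatibility": 52.4,
    },
    "characteristics": {
        "M. Privacy": 45.9,
        "I. Interaction": 47.5,
        "N. Accessibility": 47.8,
        "H. Services": 48.9,
        "J. Integration": 52.0,
        "K. Personalization": 52.4,
        "G. Quality of Information on Sites and Systems": 52.7,
        "L. Security": 53.0,
        "O. Usability and Usefulness": 53.3,
    },
    "results": {
        "P. Statistics on Systems Usage": 28.4,
        "T. Transparency and Accountability": 41.9,
        "Q. Quality of Public Services": 47.3,
        "U. Citizen or User Participation": 47.4,
        "V. Changes in the Regulatory Framework": 50.0,
        "R. Efficiency and Productivity": 53.0,
        "S. Effectiveness of Programs and Policies": 53.3,
    },
}

# Dimension totals are simple averages of the variable indices.
totals = {dim: round(mean(vals.values()), 1) for dim, vals in control_panel.items()}

# The variable with the greatest room for improvement in each dimension.
weakest = {dim: min(vals, key=vals.get) for dim, vals in control_panel.items()}
```

Running this reproduces the dimension totals of Fig. 2 (47.4, 50.4, and 45.9) and flags potential demand, privacy, and systems usage as the weakest variables, which is the kind of reading of the panel discussed in the text.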

The answer to the question of how to direct improvement efforts can be found in the variables of each dimension and, of course, in the objectives set forth in the strategy of each country. In this country, for example, the variable with the most opportunity for improvement in the determinants dimension is potential demand (30.8). This could mean that there is still a problem in terms of the digital divide and that the country needs to address it in order to make better progress in its e-government strategies. In the case of characteristics, the area of greatest opportunity for development is the privacy of systems, followed by accessibility and interaction. These would also be areas where the country needs to improve. Finally, in the results dimension, the area with the greatest opportunity for improvement is the system usage variable. Even though general descriptions can be made on the basis of this panel, the values of the variables take on greater relevance when they are interpreted in light of the objectives and plans of each country. In other words, a low number in a particular area that is not a priority for a specific country is not necessarily a negative aspect.

The control panel can be adapted by countries in order to identify the indices that best represent the national situation in relation to their goals and objectives. In this example, the cumulative indices for each dimension are obtained by means of a mathematical average, and perhaps this would be the best way of doing so in an initial prototype, since any different weighting would have to be agreed upon by those using the model. Nevertheless, every country would be able to assign its own weighting according to the objectives of its electronic government development strategy, in order to obtain indices that better reflect the current status of the strategy's progress. In fact, the control panel might even constitute an instrument for establishing the objectives of plans and programs, and also for providing follow-up, which is another way to inform practice.

It has been suggested in this section that the variables with the lowest values are those with the greatest opportunity for improvement. However, perhaps the best answer to the question of where governments should pay more attention or direct more resources lies in the strategy of each country or, even better, in the areas that have the greatest impact on the creation of public value. That is to say, the model provides decision makers and practitioners with a framework to build and test theories about the evaluation process through regression-type or other mathematical methods, especially if data are gathered in several countries or regions.

5.2.2. Using the model as a comparison tool

Another way of interpreting the indices of a country is to compare them against those of other countries in a region or the world as a whole, which is in fact the second potential use for the model. Given that the model has 21 indices, there are 21 values for comparing countries in a region. As an illustration, Fig. 3 shows the aggregated indices of the three dimensions for Mexico and three more hypothetical countries. The indices of the countries can be used to create rankings of countries in a region. In the determinants dimension, Country 3 would occupy first place, whereas Country 2 is the worst in this small imaginary region. That is, the results of electronic government projects and applications in Country 2 could be limited, even if they achieve suitable levels of functionality and quality, if the country does not invest in the determinants dimension. The mean and standard deviation of each column in this figure also serve as reference values. Country 2, for example, is located more than one standard deviation to the left of the mean in all three dimensions. Country 3 falls more than one standard deviation to the right of the mean in the characteristics and results dimensions. The other two countries in the group fall in the middle of the distribution. Having this comparative information and using it as a benchmarking tool is yet another way to inform e-government practice, allowing decision makers and the public to know about the comparative success of their country, areas of opportunity to invest in, and countries to contact for potential best practices.

Fig. 3. Table comparing countries with indices in each of the three dimensions. The indices for the three dimensions for hypothetical countries 1, 2, and 3 were generated directly by the authors based on their judgment, in order to illustrate how the model can be used as a comparative tool.

5.2.3. Using the model as a tool for finding specific problems

Given that the main control panel could be the result of aggregating control panels generated in the branches and ministries of individual governments, decision makers can trace the departments or branches with the most problematic values in each variable, and apply strategies and policies focused on the specific problem observed there.
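The comparison logic described in Section 5.2.2 (per-dimension rankings, plus flagging countries that sit more than one standard deviation from the regional mean) can be sketched in Python. The country names and index values below are invented for illustration, with only Mexico's row echoing the Fig. 2 totals; nothing here comes from real data.

```python
from statistics import mean, stdev

# Invented dimension indices for a small imaginary region of four countries.
scores = {
    "Mexico":    {"determinants": 47.4, "characteristics": 50.4, "results": 45.9},
    "Country 1": {"determinants": 49.0, "characteristics": 48.5, "results": 47.0},
    "Country 2": {"determinants": 35.0, "characteristics": 38.0, "results": 33.0},
    "Country 3": {"determinants": 58.0, "characteristics": 62.0, "results": 60.0},
}

def ranking(dimension):
    """Countries ordered from highest to lowest index in one dimension."""
    return sorted(scores, key=lambda c: scores[c][dimension], reverse=True)

def outliers(dimension):
    """Countries lying more than one standard deviation from the group mean."""
    values = [s[dimension] for s in scores.values()]
    m, sd = mean(values), stdev(values)
    return {c: ("above" if s[dimension] > m else "below")
            for c, s in scores.items()
            if abs(s[dimension] - m) > sd}
```

With these invented numbers, Country 3 leads every ranking and Country 2 is flagged as more than one standard deviation below the mean, mirroring the kind of benchmarking reading described for Fig. 3.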
In addition, this control panel may be built up by branch or ministry (e.g., health or education), which not only allows comparisons between countries, but also between different government branches and ministries in the region. Fig. 4 contains the indices of the variables of the characteristics dimension in our hypothetical Mexico, where the variable with the lowest index is privacy (45.9). Decision makers in the country may, should they desire to do so, open up the sub-variables of this variable in order to analyze their country in greater detail. With these results open, it is possible to identify that the existence-of-a-privacy-policy sub-variable has the lowest value under this variable. In this way, the decision maker would find that one way to improve this index is to encourage the adoption of privacy policies on the websites of government ministries and branches.

Fig. 4. Privacy example: components or subcategories.

The sub-variables can also be opened within each of the units of analysis (government ministries and branches) to identify the areas with the main opportunities for improvement (see Fig. 5). In this case, for example, the decision maker can see that the Ministry of Economy has the lowest index under the "Existence of Privacy Policies" sub-variable, and can then take the steps deemed necessary to fix that disparity. The ability to open up information about each variable included in the model helps not only teams of decision makers, but also researchers interested in the phenomenon in a region to build statistical models that help identify the decision variables that can bring about the greatest impacts on the results of electronic government.

Fig. 5. Control panel and values for one subcategory in each observation unit.

6. Conclusions

It is clear that information and communication technologies have great potential to improve governments around the world. However, it is necessary to have a way of measuring and evaluating e-government efforts that allows decision makers at different levels to assess and follow up on the progress made in each of their initiatives and projects. The model put forward in this study constitutes a conceptual framework that will not only allow comparisons between countries, but will also help to identify areas of opportunity for improvements that will make it possible to fulfill the e-government goals and objectives proposed at the agency, ministry, and country levels. We think that this model is a necessary initial effort to build more integrated and comprehensive methodologies for measuring and evaluating electronic government around the world. Models like this take into consideration not only the current state of technology applications, but also the capabilities of government agencies and, more importantly, the actual impacts on the welfare of citizens, which has been recognized as the next step for high-level evaluation models (Moon, Welch, & Wong, 2005).

This paper explains the creation and potential uses of a multidimensional model for measuring electronic government. From a systematic analysis of the literature on electronic government and a review of recent evaluation practices, a model is proposed with 21 variables grouped in three dimensions: (1) determinants of electronic government, (2) characteristics of electronic government, and (3) results of electronic government. We argue that these three dimensions capture a basic underlying theory of e-government success in transforming government and creating value for government, citizens, and other end users. This model directly responds to one of the concerns expressed by several scholars, including Moon et al. (2005), regarding high-level evaluation models, namely the lack of theory behind their conceptualization. The model proposed in this article is clearly based on recent studies about electronic government and attempts to synthesize their main findings in a coherent set of variables and dimensions.
We have described the way in which data that help to characterize the level of development in electronic government can be organized and used, informing decision makers and practitioners about the current status of their efforts and helping them in the planning process. However, as we have suggested previously, in order to gain a better understanding of the importance of each variable in a given country, it would be important to become familiar with qualitative elements, such as those associated with the strategy and goals of each country. Therefore, to better inform the practice of e-government, our first consideration highlights the need for both quantitative and qualitative information describing the general national context. This information about the context constitutes an important input for better interpreting the numerical results of the different variables in the model. Additionally, some variables (services, interaction, results of the system, etc.) may be more useful to decision makers when measured in a particular ministry or branch of government, but could be added to the model in order to provide a broader vision of a country's overall status. For this reason, it has been suggested that different government ministries or branches be used as units of observation and data collection. If a region would like to adopt this framework, countries in the region would need to agree on measures and units of analysis (departments, provinces, states, municipalities, local governments, etc.) to make comparisons between countries simpler and more useful. However, despite the fact that information from a limited collection of government departments and branches is proposed for comparative purposes, the government of a country could apply the same methodology for capturing data about other government ministries or branches, using that information for decision making and for monitoring the progress of its national e-government strategy and the specific progress of each government unit or e-government initiative.

Collecting data to build a control panel like the one suggested in the previous section may involve a variety of data-gathering strategies. For instance, it can include questionnaires for e-government managers or CIOs in several government units (ministries, departments, agencies, etc.) and in the country as a whole. Another kind of instrument that can be used is observation forms for websites from selected government units. Finally, a third kind of instrument should be applied to end users, citizens, and other government stakeholders. This last kind of measurement instrument is particularly important for understanding results and value creation from e-government initiatives. We are aware of the complexity of this effort, but the fact that previous models have been too simplistic to make useful comparisons has been clearly acknowledged in the previous literature (Moon et al., 2005), and the model proposed in this paper is a first step towards more comprehensive evaluations.

Finally, we believe that the model constitutes a framework to inform practice, not only as a tool for understanding the current situation of e-government efforts or for benchmarking purposes. We believe that the model provides a framework for decision makers to develop and test different strategies and policies through the evaluation process, leading to the continuous improvement of the underlying theories of e-government practice. Our current research efforts are oriented toward empirically validating the framework, at least partially, in projects related to the evaluation of the Mexican e-government state portals, and also in projects related to the diagnosis of e-government progress at the municipal and state levels. Future research should explore to what extent more comprehensive evaluation models could capture some of the complexity of e-government efforts at different levels of government and in different countries. Better e-government evaluations will lead to smarter investments and, therefore, clearer and greater benefits for citizens and society at large.

Acknowledgments

1. This study was partially financed by the National Science and Technology Council (CONACYT, Mexico), project no. 84735, and by the Economic Commission for Latin America and the Caribbean (ECLAC). The results and conclusions expressed in this document are the sole responsibility of the authors and do not necessarily reflect the opinions or policies of CONACYT or ECLAC.
2. The authors want to acknowledge the useful assistance of Japhet Hernández Vaquero in preparing the final version of the manuscript.

References

6, P. (2001). e-Governance. Do digital aids make a difference in policy making? In J. E. J. Prins (Ed.), Designing e-government: On the crossroads of technological innovation and institutional change (pp. 7–27). The Hague, Netherlands: Kluwer Law International.
Accenture (2004). eGovernment leadership: High performance, maximum value. The Government Executive Series. Accenture.
ACSI (1994). American Customer Satisfaction Index. Retrieved 2006, from http://www.theacsi.org/overview.htm
Ambite, J. L., Arens, Y., Bourne, W., Feiner, S., Gravano, L., & Hatzivassiloglou, V. (2002). Data integration and access. In W. J. McIver, & A. K. Elmagarmid (Eds.), Advances in digital government: Technology, human factors, and policy. Norwell, MA: Kluwer Academic Publishers.
Andersen, D. F., Belardo, S., & Dawes, S. S. (1994, Summer). Strategic information management: Conceptual frameworks for the public sector. Public Productivity and Management Review, 17, 335–353.
Andersen, D. F., & Dawes, S. S. (1991). Government information management: A primer and casebook. Englewood Cliffs, NJ: Prentice Hall.
Bajjaly, S. T. (1999). Managing emerging information systems in the public sector. Public Performance & Management Review, 23(1), 40–47.
Baker, D. L. (2009). Advancing e-government performance in the United States through enhanced usability benchmarks. Government Information Quarterly, 26, 82–88.
Barki, H., Rivard, S., & Talbot, J. (1993). Toward an assessment of software development risk. Journal of Management Information Systems, 10, 203–223.
Barrett, K., & Greene, R. (2000). Powering up: How public managers can take control of information technology. Washington, DC: Congressional Quarterly Press.
Batini, C., Viscusi, G., & Cherubini, D. (2009). GovQual: A quality driven methodology for e-government project planning. Government Information Quarterly, 26, 106–117.
Bellamy, C. (2000). The politics of public information systems. In G. D. Garson (Ed.), Handbook of public information systems. New York: Marcel Dekker.
Birckmayer, J. D., & Weiss, C. H. (2000). Theory-based evaluation in practice. Evaluation Review, 24(4), 407–431. http://dx.doi.org/10.1177/0193841X0002400404
Brown, M. M. (2000). Mitigating the risk of information technology initiatives: Best practices and points of failure for the public sector. In G. D. Garson (Ed.), Handbook of public information systems. New York: Marcel Dekker.
Brown, M. M. (2001). The benefits and costs of information technology innovations: An empirical assessment of a local government agency. Public Performance & Management Review, 24(4), 351–366.
Brown, M. M., & Brudney, J. L. (2003). Learning organizations in the public sector? A study of police agencies employing information and technology to advance knowledge. Public Administration Review, 63(1), 30–43.
Brown, M. M., & Brudney, J. L. (2004). Achieving advanced electronic government services: Opposing environmental constraints. Public Performance & Management Review, 28(1), 96–114.
Caffrey, L. (1998). Information sharing between & within governments. London: Commonwealth Secretariat.
CELA-IESE, & DMR Consulting (2005). Indicador de la Sociedad de la Información. Barcelona: IESE/DMR Consulting.
Chevallerau, F.-X. (2005). eGovernment in the member states of the European Union. Brussels: IDABC eGovernment Observatory.
Choudrie, J., Ghinea, G., & Weerakkody, V. (2004). Evaluating global e-government sites: A view using web diagnostic tools. Electronic Journal of e-Government, 2(2), 105–114.
Cook, M., & LaVigne, M. (2002). Making the local e-gov connection. Retrieved May 24, 2002, from www.urbanicity.org/FullDoc.asp?ID=36
Cresswell, A. M. (2004). Return on investment in information technology: A guide for managers. Albany, NY: Center for Technology in Government, University at Albany, SUNY.
Cresswell, A. M., Pardo, T. A., Canestraro, D. S., Dawes, S. S., & Juraga, D. (2005). Sharing justice information: A capability assessment toolkit. Albany, NY: Center for Technology in Government, University at Albany, SUNY.
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319–340.
Dawes, S. S. (1996). Interagency information sharing: Expected benefits, manageable risks. Journal of Policy Analysis and Management, 15(3), 377–394.
Dawes, S. S., & Nelson, M. R. (1995). Pool the risks, share the benefits: Partnership in IT innovation. In J. Keyes (Ed.), Technology trendlines: Technology success stories from today's visionaries. New York: Van Nostrand Reinhold.
Dawes, S. S., & Pardo, T. A. (2002). Building collaborative digital government systems: Systematic constraints and effective practices. In W. J. McIver, & A. K. Elmagarmid (Eds.), Advances in digital government: Technology, human factors, and policy (pp. 259–273). Norwell, MA: Kluwer Academic Publishers.
Dawes, S. S., Pardo, T. A., & Cresswell, A. M. (2004). Designing electronic government information access programs: A holistic approach. Government Information Quarterly, 21(1), 3–23.
Dawes, S. S., Pardo, T. A., Simon, S., Cresswell, A. M., LaVigne, M., Andersen, D., et al. (2004). Making smart IT choices: Understanding value and risk in government IT investments. Albany, NY: Center for Technology in Government.
Dawes, S. S., & Prefontaine, L. (2003). Understanding new models of collaboration for delivering government services. Communications of the ACM, 46(1), 40–42.
DeMaio, C. D., Frost, M., Street, T., Tettelbach, B., Parker, D., & D'Agostino, D. (2002). Creating a performance-based electronic government. Arlington, VA: The Performance Institute.
Donker-Kuijer, M. W., de Jong, M., & Lentz, L. (2010). Usable guidelines for usable websites? An analysis of five e-government heuristics. Government Information Quarterly, 27(3), 254–263. http://dx.doi.org/10.1016/j.giq.2010.02.006
Edmiston, K. D. (2003). State and local e-government: Prospects and challenges. American Review of Public Administration, 33(1), 20–45.
Esteves, J., & Joseph, R. C. (2008). A comprehensive framework for the assessment of eGovernment projects. Government Information Quarterly, 25, 118–132.
Eurostat (2005, April 27–28). Measuring e-government. Paper presented at the Working Party on Indicators for the Information Society, Paris.
Forrester, J. P., & Watson, S. S. (1994). An assessment of public administration journals: The perspective of editors and editorial board members. Public Administration Review, 54(5), 474–482.
Fountain, J. E. (2001). Building the virtual state: Information technology and institutional change. Washington, DC: Brookings Institution Press.
Fountain, J. E. (2003). Prospects for improving the regulatory process using e-rulemaking. Communications of the ACM, 46(1), 43–44.
Gant, D. B., Gant, J. P., & Johnson, C. L. (2002). State web portals: Delivering and financing e-service. Arlington, VA: The PricewaterhouseCoopers Endowment for The Business of Government.
Garson, G. D. (2004). The promise of digital government. In A. Pavlichev, & G. D. Garson (Eds.), Digital government: Principles and best practices (pp. 2–15). Hershey, PA: Idea Group Publishing.
Gelders, D., Brans, M., Maesschalck, J., & Colsoul, N. (2010). Systematic evaluation of public participation projects: Analytical framework and application based on two Belgian neighborhood watch projects. Government Information Quarterly, 27(2),
Klein, H. K. (2000). System development in the federal government: How technology influences outcomes. Policy Studies Journal, 28(2), 313.
Kraemer, K. L., King, J. L., Dunkle, D. E., & Lane, J. P. (1989). Managing information systems: Change and control in organizational computing. San Francisco, CA: Jossey-Bass.
Kum, H.-C., Duncan, D. F., & Stewart, C. J. (2009). Supporting self-evaluation in local government via knowledge discovery and data mining. Government Information Quarterly, 26, 295–304.
La Porte, T. M., Demchak, C. C., & de Jong, M. (2002). Democracy and bureaucracy in the age of the web: Empirical findings and theoretical speculations. Administration and Society, 34(4), 411–446.
La Porte, T. M., Demchak, C. C., & Friis, C. (2001). Webbing governance: Global trends across national-level public agencies. Communications of the ACM, 44(1), 63–67.
Landsbergen, D. J., & Wolken, G. J. (2001). Realizing the promise: Government information systems and the fourth generation of information technology. Public Administration Review, 61(2), 206–220.
Lassman, K. (2001). The digital state 2001. Washington, DC: The Progress & Freedom Foundation.
Lee, G., & Perry, J. L. (2002). Are computers boosting productivity? A test of the paradox in state governments. Journal of Public Administration Research and Theory, 12(1), 77–103.
Mahler, J., & Regan, P. M. (2003). Developing intranets for agency management. Public Performance and Management Review, 26(4), 422–432.
Minges, M. (2005). Evaluation of e-readiness indices in Latin America and the Caribbean. Santiago: CEPAL.
Misic, M. M., & Johnson, K. L. (1999). Benchmarking: A tool for Web site evaluation and improvement. Internet Research: Electronic Networking Applications and Policy, 9(5), 383–392.
Mitra, R. K., & Gupta, M. P. (2008). A contextual perspective of performance assessment in eGovernment: A study of Indian police administration. Government Information Quarterly, 25, 278–302.
134–140, http://dx.doi.org/10.1016/j.giq.2009.10.003. Moon, M. J. (2002). The evolution of e-government among municipalities: Rhetoric or
Gil-García, J. R. (2004). Information technology policies and standards: A comparative reality? Public Administration Review, 62(4), 424–433.
review of the states. Journal of Government Information, 30(5), 548–560. Moon, M. J., Welch, E. W., & Wong, W. (2005, January 3–6). What drives global e-
Gil-Garcia, J. R. (2005). Exploring the success factors of state website functionality: An governance? An exploratory study at a macro level HICSS. Proceedings of the
empirical investigation. Paper presented at the National Conference on Digital 38th Annual Hawaii International Conference on System Sciences (HICSS'05), vol.
Government Research, Atlanta, GA. 5. (pp. 131).
Gil-Garcia, J. R., & Luna-Reyes, L. F. (2006). Integrating conceptual approaches to e- OECD (2003). The e-government imperative. Paris, France: Organisation for Economic
government. In M. Khosrow-Pour (Ed.), Encyclopedia of e-commerce, e-government Co-operation and Development.
and mobile commerce (pp. 636–643). Hershey, PA: Idea Group Inc. Olalere, A., & Lazar, J. (2011). Accessibility of U.S. federal government home pages: Sec-
Gil-Garcia, J. R., & Luna-Reyes, L. F. (2007). Modelo multi-dimensional de medición del tion 508 compliance and site accessibility statements. Government Information
gobierno electrónico para América Latina y el Caribe. Colección Documentos de Quarterly, 28(3), 303–309, http://dx.doi.org/10.1016/j.giq.2011.02.002.
Proyectos, Naciones Unidas—CEPAL from. http://www.eclac.cl/ddpe/publicaciones/ ONU (2005). Indicadores clave de las tecnologías de la información y de las comunicaciones
xml/6/28646/W124.pdf (pp. 52). Santiago de Chile.
Gouscos, D., Kalikakis, M., Legal, M., & Papadopoulou, S. (2007). A general model of Raus, M., Liu, J., & Kipp, A. (2010). Evaluating IT innovations in a business-to-government
performance and quality for one-stop e-government service offerings. Government context: A framework and its applications. Government Information Quarterly, 27(2),
Information Quarterly, 24, 860–885. 122–133, http://dx.doi.org/10.1016/j.giq.2009.04.007.
Gupta, M. P., & Jana, B. (2003). e-Government evaluation: A framework and a case Redman, T. C. (1998). The impact of poor data quality on the typical enterprise. Com-
study. Government Information Quarterly, 20(4), 365–387. munications of the ACM, 41(2), 79–82.
Harris, N. D. (2000). Intergovernmental cooperation in the development and use of Reis, F. (2005). e-Government: Internet based interaction with the European businesses
information systems. In G. D. Garson (Ed.), Handbook of public information systems. and citizens (pp. 8). Luxembourg: Eurostat.
New York: Marcel Dekker. RICYT-CYTED, UMIC, & ISCTE (2006). Manual de Lisboa Retrieved Agosto. 2006, 2006,
Heeks, R. (2005). Implementing and managing eGovernment: An international text. London: from. http://www.oei.es/mlisboa.htm
SAGE Publications. Rocheleau, B. (2003). Politics, accountability, and governmental information systems.
Heintze, T., & Bretschneider, S. (2000). Information technology and restructuring In G. D. Garson (Ed.), Public information technology: Policy and management issues
in public organizations: Does adoption of information technology affect orga- (pp. 20–52). Hershey, PA: Idea Group Publishing.
nizational structures, communications, and decision making? Journal of Public Rogers, J. D., & Kingsley, G. (2004). Denying public value: The role of the public sector
Administration Research and Theory, 10(4), 801–830. in account of the development of the internet. Journal of Public Administration
Helbig, N., Gil-García, J. R., & Ferro, E. (2005, August 11–14). Understanding the com- Research & Theory, 14(3), 371–393.
plexity of electronic government: Implications from the digital divide literature. Sandoval, R., & Gil-García, J. R. (2005, May 15–18). Assessing e-government evolution
Paper presented at the Americas Conference of Information Systems 2005, Omaha, in Mexico: A preliminary analysis of the state portals. Paper presented at the
NE, USA. 2005 Information Resources Management Association International Conference,
Hiller, J. S., & Bélanger, F. (2001). Privacy strategies for electronic government. In M. A. San Diego.
Abramson, & G. E. Means (Eds.), e-Government 2001 (pp. 162–198). Lanham, Maryland: Sharifi, M., & Manian, A. (2010). The study of the success indicators for pre-
Rowman & Littlefield Publishers. implementation activities of Iran's e-government development projects. Government
Holmes, D. (2001). e.Gov. e-business strategies for government. London: Nicholas Brealey Information Quarterly, 27(1), 63–69, http://dx.doi.org/10.1016/j.giq.2009.04.006.
Publishing. Stowers, G. N. L. (2004). Measuring the performance of e-government. In P. Lawrence
Huang, C. J., & Chao, M. -H. (2001). Managing WWW in public administration: Uses and (Ed.), e-Government series (pp. 52). Washington, DC: The Center for the Business of
misuses. Government Information Quarterly, 18(4), 357–373, http://dx.doi.org/10.1016/ Government.
S0740-624X(01)00085-5. Stufflebeam, D. L. (2003). The CIPP model for evaluation. Paper presented at the annual
Jiang, J., & Klein, G. (2000). Software development risks to project effectiveness. Journal conference of the Oregon Program Evaluators Network (OPEN), Portland, Oregon.
of Systems and Software, 52, 3–10. Thomas, J. C., & Streib, G. (2003). The new face of government: Citizen-initiated contacts
Kaisara, G., & Pather, S. (2011). The e-Government evaluation challenge: A South African in the era of e-government. Journal of Public Administration Research and Theory,
Batho Pele-aligned service quality approach. Government Information Quarterly, 13(1), 83–101.
28(2), 211–221, http://dx.doi.org/10.1016/j.giq.2010.07.008. UNPAN (2008). UN e-government survey 2008: From e-government to connected gover-
Kaplan, D., Krishnan, R., Padman, R., & Peters, J. (1998). Assessing data quality in account- nance. New York: United Nations Publications.
ing information systems. Communications of the ACM, 41(2), 72–77. Valdés, G., Solar, M., Astudillo, H., Iribarren, M., Concha, G., & Visconti, M. (2011).
Karunasena, K., & Deng, H. (2012). Critical factors for evaluating the public value of e- Conception, development and implementation of an e-government maturity
government in Sri Lanka. Government Information Quarterly, 29(1), 76–84, http: model in public agencies. Government Information Quarterly, 28(2), 176–187,
//dx.doi.org/10.1016/j.giq.2011.04.005. http://dx.doi.org/10.1016/j.giq.2010.04.007.
Kaylor, C., Deshazo, R., & Van Eck, D. (2001). Gauging e-government: A report on van den Haak, M. J., de Jong, M. D. T., & Schellens, P. J. (2009). Evaluating municipal
implementing services among American cities. Government Information Quarterly, websites: A methodological comparison of three think-aloud variants. Government
18(4), 293–307. Information Quarterly, 26(1), 193–202, http://dx.doi.org/10.1016/j.giq.2007.11.003.
Kellogg, W. A., & Mathur, A. (2003). Environmental justice and information technolo- Verdegem, P., & Verleye, G. (2009). User-centered e-government in practice: A
gies: Overcoming the information-access paradox in urban communities. Public comprehensive model for measuring user satisfaction. Government Information
Administration Review, 63(5), 573–585. Quarterly, 26, 487–497.
334 L.F. Luna-Reyes et al. / Government Information Quarterly 29 (2012) 324–334

Luis Felipe Luna-Reyes is a Professor of Business at the Universidad de las Américas Puebla in Mexico. He holds a Ph.D. in Information Science from the University at Albany. Luna-Reyes is also a member of the Mexican National Research System. His research focuses on electronic government and modeling collaboration processes in the development of information technologies across functional and organizational boundaries.

J. Ramon Gil-Garcia is an Associate Professor in the Department of Public Administration and the Director of the Data Center for Applied Research in Social Sciences at Centro de Investigación y Docencia Económicas (CIDE) in Mexico City. Currently, he is also a Research Fellow at the Center for Technology in Government, University at Albany, State University of New York (SUNY) and a Faculty Affiliate at the National Center for Digital Government, University of Massachusetts Amherst. His research interests include collaborative electronic government, inter-organizational information integration, adoption and implementation of emergent technologies, digital divide policies, new public management, and multi-method research approaches.

Georgina Romero holds a Masters in Industrial Engineering from the Universidad de las Américas Puebla in Mexico. She is currently a faculty member at the Pacific Business School in Sinaloa, Mexico.