
Lecture Notes in Information Systems and Organisation 13

Cecilia Rossignoli
Mauro Gatti
Rocco Agrifoglio Editors

Organizational Innovation and Change
Managing Information and Technology
Lecture Notes in Information Systems and Organisation

Volume 13

Series editors
Richard Baskerville, Decatur, USA
Marco De Marco, Roma, Italy
Nancy Pouloudi, Athens, Greece
Paolo Spagnoletti, Roma, Italy
Dov Te’eni, Tel Aviv, Israel
Jan vom Brocke, Vaduz, Liechtenstein
Robert Winter, St. Gallen, Switzerland
More information about this series at http://www.springer.com/series/11237
Editors

Cecilia Rossignoli
Department of Business Administration, University of Verona, Verona, Italy

Mauro Gatti
Department of Management, University of Rome “La Sapienza”, Rome, Italy

Rocco Agrifoglio
Department of Management, Accounting and Economics, University of Naples “Parthenope”, Naples, Italy

ISSN 2195-4968 ISSN 2195-4976 (electronic)


Lecture Notes in Information Systems and Organisation
ISBN 978-3-319-22920-1 ISBN 978-3-319-22921-8 (eBook)
DOI 10.1007/978-3-319-22921-8

Library of Congress Control Number: 2015946779

Springer Cham Heidelberg New York Dordrecht London


© Springer International Publishing Switzerland 2016
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part
of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations,
recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission
or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar
methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are exempt from
the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this
book are believed to be true and accurate at the date of publication. Neither the publisher nor the
authors or the editors give a warranty, express or implied, with respect to the material contained herein or
for any errors or omissions that may have been made.

Printed on acid-free paper

Springer International Publishing AG Switzerland is part of Springer Science+Business Media


(www.springer.com)
Contents

Introducing and Discussing Information and Technology Management for Organizational Innovation and Change . . . . 1
Cecilia Rossignoli, Mauro Gatti and Rocco Agrifoglio

Part I ICT, Organizational Innovation and Change

A Methodology for the Impact Assessment of a g-Cloud Strategy for the Italian Ministry of the Economic Development . . . . 11
Francesca Spagnoli, Francesco Bellini and Alessandra Ghi

Italy’s One-Stop Shop: A Case of the Emperor’s New Clothes? . . . . 27
Walter Castelnovo, Maddalena Sorrentino and Marco De Marco

The Determinants of IT Adoption by SMEs: An Agenda for Research . . . . 41
Riccardo Spinelli

Technology Applied to the Cultural Heritage Sector has not (yet) Exceeded Our Humanity . . . . 53
Lucia Marchegiani and Gloria Rossi

The Impact of the Implementation of the Electronic Medical Record in an Italian University Hospital . . . . 63
Alessandro Zardini, Cecilia Rossignoli and Bettina Campedelli

Technological Cycle and S-Curve: A Nonconventional Trend in the Microprocessor Market . . . . 75
G. Ennas, F. Marras and M.C. Di Guardo

The IS Heritage and the Legacy of Ciborra . . . . 89
Paolo Depaoli, Andrea Resca, Marco De Marco and Cecilia Rossignoli

Collective Awareness Platform for Sustainability and Social Innovation (CAPS) . . . . 103
Antonella Passani, Francesca Spagnoli, Francesco Bellini, Alessandra Prampolini and Katja Firus

Business Model in the IS Discipline: A Review and Synthesis of the Literature . . . . 115
G. Pozzi, F. Pigni, C. Vitari, G. Buonanno and E. Raguseo

IS Governance, Agility and Strategic Flexibility in Multi-approaches Based Management Companies . . . . 131
Mohamed Makhlouf and Oihab Allal-Chérif

Part II ICT and Knowledge Management

Information, Technology, and Trust: A Cognitive Approach to Digital Natives and Digital Immigrants Studies . . . . 147
Francesca Marzo and Alessio Maria Braccini

When Teachers Support Students in Technology Mediated Learning . . . . 161
Leonardo Caporarello, Massimo Magni and Ferdinando Pennarola

How Do Academic Spin-off Companies Generate and Disseminate Useful Market Information Within Their Organizational Boundaries? . . . . 179
Tindara Abbate and Fabrizio Cesaroni

A Two Step Procedure for Integrated Inventory—Supply Chain Management Information Systems . . . . 189
Daniela Ambrosino and Anna Sciomachen

Unsupervised Neural Networks for the Analysis of Business Performance at Infra-City Level . . . . 203
Renata Paola Dameri, Roberto Garelli and Marina Resta

Design of Pre-emptive Customer Experience Management Systems for Mobile Broadband Communications Service Providers . . . . 217
Daniel Delibes Rodriguez and Penny Hart

Economic Denial of Sustainability Mitigation in Cloud Computing . . . . 229
Massimo Ficco and Massimiliano Rak

Brokering of Cloud Infrastructures Driven by Simulation of Scientific Workloads . . . . 239
Alba Amato, Beniamino Di Martino, Fatos Xhafa and Salvatore Venticinque

Investigating the Impact of Digital Data Genesis Dynamic Capability on Data Quality and Data Accessibility . . . . 251
Elisabetta Raguseo, Claudio Vitari and Giulia Pozzi

An Ecological Model for Digital Platforms Maintenance and Evolution . . . . 263
Paolo Rocchi, Paolo Spagnoletti and Subhajit Datta
Introducing and Discussing Information and Technology Management for Organizational Innovation and Change

Cecilia Rossignoli, Mauro Gatti and Rocco Agrifoglio

C. Rossignoli
Department of Business Administration, University of Verona, Verona, Italy

M. Gatti
Department of Management, University of Rome “La Sapienza”, Rome, Italy

R. Agrifoglio (&)
Department of Management, Accounting and Economics, University of Naples “Parthenope”, Naples, Italy
e-mail: agrifoglio@uniparthenope.it

Abstract This chapter focuses on the interplay between information technology and organizational systems. It introduces the volume, providing a brief overview of
some of the most relevant frameworks, approaches, and tools in the IS field which
will be discussed later. The volume is divided into two parts, each focused on a specific theme: ‘ICT, organizational innovation and change’ and ‘ICT and knowledge management’.

This book explores a range of critical issues and emerging topics relevant to the
linkages between information technology and organizational systems. It encourages
debate and opens up new avenues of inquiry in the field of Information Systems,
organization and management studies, by investigating themes of growing research
interest from multiple disciplinary perspectives such as organizational innovation and
impact, information technology, innovation transfer, and knowledge management.
The title of this book, ‘Managing Information and Technology for Organizational Innovation and Change’, already implies the understanding that information and technology are two crucial factors for developing innovation and for managing change within organizational contexts. Information and technology have been widely recognised in the managerial literature as a major source of competitive advantage and increased business performance [1, 2]. In the last decades, organizations have increasingly invested in Information and Communication Technology (ICT) to improve their efficiency and effectiveness and thus to create opportunities for their businesses. Indeed, ICTs have often been recognized as a way to develop organizational innovation and to lead organizational change [3–6]. However, empirical experience soon showed that ICT adoption is
a necessary but not sufficient condition for improving individual, group and organizational performance, thus opening the academic debate on the relevance of managing information and technology within organizational settings.
Compared with other disciplines, the IS literature has often been reluctant to generalize the relationships between information technology and organizational change. Building upon the research of Pfeffer [7], Markus and Robey [5] and Orlikowski [8], it is well known that organizational change can be caused by information technology (the so-called technological imperative), by the motives and actions of information technology designers aimed at satisfying managers’ information processing needs (the organizational imperative), and by the interaction between information technology and its human and organizational users (the emergent perspective). Thus, technology is both an external force influencing organizational structure and the outcome of managers’ strategic choices and social actions. Moreover, as Orlikowski [8] stated, the link between technology and organizations is affected by human actions and by the socio-historical context where technology is developed and used.
Recognizing the existence of such different paradigms, this volume stresses the relationships between ICT, organizational innovation and change and looks to enhance their ties. It also explores the role of information and knowledge within organizational settings by emphasizing the contribution of ICT to knowledge management activities.
The volume is divided into two sections, each focused on a specific theme: ‘ICT, organizational innovation and change’ and ‘ICT and knowledge management’. The content of each section is based on a selection of the best papers (original double-blind peer-reviewed contributions) presented at the annual conference of the Italian chapter of AIS, held in Genoa, Italy, in November 2014.

1 Part I: ICT, Organizational Innovation and Change

This section explores the relationships between ICT, organizational innovation, and
change. The aim of this section is to investigate the factors leading individuals and
organizations towards ICT adoption and usage, as well as the effects of such
technologies on working practices, interaction and communication between people,
and the organizational structure.
ICTs are part of corporate transformations in today’s competitive environments, often enabling new organizational forms and business models in both the public and private sectors. Organizations expect to use the new ICT to run new processes, innovate products and services, reduce operating costs, and improve business management, with the aim of transforming their internal structures into better-performing organizations. The adoption and usage of ICT is usually accompanied by the redesign of business processes and by changes in the organizational structure.
Empirical evidence and academic literature have widely shown that the effective
implementation of new ICT is one of the most challenging tasks faced by managers,
since it requires people to understand, absorb and adapt to the new requirements [9,
10]. Managers often consider the implementation and adoption of ICTs as a way for
promoting and realizing organizational and managerial changes [11–14]. However,
organizational change does not only arise from ICT adoption and usage, but also
depends upon a combination of technical and social influences which cannot always
be controlled [15, 16]. Indeed, the success or failure of ICT implementation and
adoption are mediated by a number of factors, many of which require an in-depth
understanding of the organizational context and human behaviour [10, 17–21].
This part of the volume contains 10 contributions aimed at exploring the interplays between ICT, organizational innovation and organizational change, using different methodologies, theories and approaches. These studies stress the role of ICT, discussing the factors limiting and encouraging technology adoption and usage, and the effects of such technology on organizations arising from the interaction with human choices and institutional properties.
Spagnoli, Bellini, and Ghi’s paper aims to develop a methodology for evaluating the economic, social, legal and environmental impacts of cloud computing initiatives in the Italian PA and, in particular, in the Ministry of the Economic Development.
Castelnovo, Sorrentino, and De Marco explore a new e-government initiative in Italy, the municipal One-Stop Business Shops (SUAPs), developed and launched by the Italian legislator in 1998 to simplify government relations with business and industry.
Spinelli analyzes the literature on IT adoption in SMEs and combines perspectives from various research streams in order to identify its determinants: barriers and incentives. The paper explores well-established research areas, aims at highlighting links which are underdeveloped or ignored, and provides directions for future research.
Marchegiani and Rossi’s paper also explores the interplay between technology and organizational change, focusing on the effects of recent technological innovations on the valorization of cultural heritage. This research aims at identifying the sense-making that each actor attaches to technological innovations, and its impact on cultural heritage valorization.
Zardini, Rossignoli and Campedelli, instead, explore the interplay between ICT and organization within a peculiar sector of the Italian PA, the healthcare sector. Using the framework of Zaharia and colleagues, the study investigates the impacts of the Electronic Medical Record (EMR) implementation in an Italian university hospital.
Ennas, Marras and Di Guardo investigate the trends in the microprocessor market in order to understand whether competition between rival technologies can be reopened after a dominant paradigm occurs. The results show the existence of a non-conventional S-curve trend.
Depaoli, Resca, De Marco and Rossignoli aim to assess Claudio Ciborra’s legacy in Information Systems and Organizational Studies. Comparing Ciborra’s seminal work, ‘The Labyrinths of Information’, with papers published in four top IS journals, the research shows that Ciborra’s thinking contributed to the swing toward a more praxis-oriented attitude in the IS discipline.
Based on the social innovation literature, and digital social innovation in particular, Passani, Spagnoli, Bellini, Prampolini and Firus’s paper analyzes the social, economic, political and environmental impacts of the Collective Awareness Platforms for Sustainability and Social Innovation (CAPS) by using an ad hoc methodology, IA4SI, developed for assessing projects related to digital social innovation.
Pozzi, Pigni, Vitari, Buonanno, and Raguseo conduct a literature review on the
business model studies in the IS discipline. Using an electronic search, the paper
provides an overview of business model studies in the IS field, highlighting the main
research streams and limitations.
Finally, using a case-study method, the paper by Makhlouf and Allal-Chérif explores the consequences of the simultaneous implementation of different process approaches in Telkom. The research aims at analyzing the contributions of the implementation of these approaches and the resulting problems concerning governance, agility and strategic flexibility.

2 Part II: ICT and Knowledge Management

This section explores the relationship between ICT and knowledge management.
The aim is to investigate how individuals, groups and organizations manage
information and knowledge and which technologies enable them to run this process
more efficiently.
The literature has widely recognized knowledge as a strategic asset for organi-
zational growth and sustained competitive advantage [9, 22–26]. Nowadays,
organizations view knowledge as a crucial resource, a key for survival and success
mainly due to high competition and increasingly dynamic environments. Unlike in the past, business complexity and the growth in information volume, velocity, and variety have significantly increased the difficulties for individuals in managing knowledge activities within organizational settings [9, 27]. People need advanced, effective methods and tools to take advantage of the ways that knowledge is acquired and exploited within organizations [28, 29]. In order to face knowledge management issues, software houses and vendors have designed various platforms enabling organizations to develop, share and access huge quantities of available resources from internal and external sources [30]. Recently, organizations have often been looking for new ways and tools to acquire knowledge from outside [31, 32].
Communities of practice and cloud, social and mobile platforms are some examples
[33–35].
This part of the volume contains 10 contributions aimed at exploring the interplays between information, technology, and knowledge management. Using different methodologies, theories and approaches, these studies stress the different concepts and meanings of information and knowledge, discussing the role of various platforms in creating, sharing and storing knowledge within an organization and between organizations.
Marzo and Braccini’s paper aims at investigating the behavioural differences
between digital natives and digital immigrants in terms of trust and control. This
research designs an experiment-based empirical study that might highlight potential
differences in trust and control dynamics between digital natives and immigrants. It
provides an insight into psychological aspects whose dynamics might influence
individuals’ behavior in teams.
Caporarello, Magni and Pennarola explore the interplay between ICT and
learning within the education sector. The paper investigates the support factors
influencing tablet-mediated learning effectiveness by stressing the role of ‘Support
Activities’ in determining it.
Abbate and Cesaroni focus on the crucial role of information within academic
spin-off companies. Using an explorative qualitative analysis, the paper explores
how academic spin-off companies generate and disseminate useful market information within their organizational boundaries. Findings show the relevance of the
activities of generation, dissemination and integration of market information for
academic spin-off companies.
Ambrosino and Sciomachen’s paper explores product flows within the
multi-channel distribution network with the aim of minimizing logistic costs. It
describes and compares different inventory management policies and presents a
two-phase procedure aiming at integrating, in the same framework, inventory and
distribution functions thanks to information sharing.
Dameri, Garelli and Resta’s paper explores the relationships between information, technology and organizations. The paper focuses on the unsupervised neural
networks (NN) for analysing data regarding business performance at infra-city
level. A micro-territorial dashboard based on unsupervised neural networks for collecting business performance data, and thus for supporting small-territory development policies, was developed and tested in the Municipality of Genoa.
Delibes Rodriguez and Hart’s paper focuses on the Pre-emptive Customer
Experience Management Systems, tools designed for collecting customer experience data through network experience surveys. This research explores how those systems are designed and implemented, how they should be, and what their benefits are for mobile broadband communications service providers.
Ficco and Rak address their research on the topic of cloud computing, focusing
on the threats arising from cyber attacks, and ‘Economic Denial of Sustainability’
(EDoS) in particular, against cloud applications. The paper proposes an approach to
mitigate economic effects of EDoS attacks against cloud applications.
Like the Ficco and Rak research, Amato, Di Martino, Xhafa, and Venticinque’s
paper also investigates the cloud computing paradigm, but by focusing on the
different techniques and tools that support users in decision making. Combining the Grid and Cloud paradigms, it proposes a methodology that provides the flexibility of Cloud Computing while avoiding the need for users to learn new ways of accessing resources.
Raguseo, Vitari and Pozzi, instead, explore the relationship between ICT and
knowledge management, focusing on a peculiar platform for generating and capturing data natively in digital form, integrating this data in the appropriate business
processes, and effectively managing data once produced. In particular, this research
investigates whether the development of the Digital Data Genesis dynamic capa-
bility in firms leads to valuable outputs in terms of data quality and data
accessibility.
Finally, Rocchi, Spagnoletti and Datta investigate digital platforms with particular reference to their maintenance process from the perspective of the software
vendor. The paper aims to explore the digital platform evolution processes in order
to identify new methods for guiding the emergence of complex socio-technical
systems.

References

1. Porter, M.E.: Technology and competitive advantage. J. Bus. Strategy 5(3), 60–78 (1985)
2. Melville, N., Kraemer, K., Gurbaxani, V.: Information technology and organizational
performance: an integrative model of IT business value. MIS Q. 28(2), 283–322 (2004)
3. Orlikowski, W.J.: CASE tools as organizational change: investigating incremental and radical
changes in systems development. MIS Q. 17(3), 309–340 (1993)
4. Orlikowski, W.J.: Improvising organizational transformation over time: a situated change
perspective. Inf. Syst. Res. 7(1) (1996)
5. Markus, M.L., Robey, D.: Information technology and organizational change: causal structure
in theory and research. Manag. Sci. 34(5), 583–598 (1988)
6. Ricciardi, F., Rossignoli, C., Zardini, A.: Factors influencing the strategic value of IT: a
literature review. In: Jun, Y. (ed.) Humanities, social sciences and global business
management. Singapore Management and Sport Science Institute, Singapore (2012)
7. Pfeffer, J.: Organizations and organization theory. Pitman, Marshfield (1982)
8. Orlikowski, W.J.: The duality of technology: rethinking the concept of technology in
organizations. Organ. Sci. 3(3), 398–427 (1992)
9. Gatti, M.: Cultura d’impresa, innovazione e conoscenza. In: Brondoni, S.M. (ed.)
Market-driven management, concorrenza e mercati globali. Giappichelli, Torino (2007)
10. Magni, M., Pennarola, F.: Intra-organizational relationships and technology acceptance. Int.
J. Inf. Manag. 28(6), 517–523 (2008)
11. Rossignoli, C.: Coordinamento e cambiamento. Tecnologie e processi interorganizzativi,
FrancoAngeli (2004)
12. Agrifoglio, R., Metallo, C.: ERP acceptance: the role of affective commitment. In: D’Atri, A.,
De Marco, M., Braccini, A.M., Cabiddu, F. (Eds.) Management of the interconnected world.
Springer, Berlin (2010)
13. Metallo, C.: L’evoluzione dei sistemi informativi: un’analisi nei contesti information-intensive.
ARACNE editrice, Roma (2011)
14. Mola, L., Pennarola, F., Za, S.: From information to smart society: environment, politics and
economics. Lecture Notes in Information Systems and Organisation (LNISO), vol. 5. Springer,
Berlin (2015)
15. Robey, D., Sahay, S.: Transforming work through information technology: a comparative case
study of geographic information systems in county government. Inf. Syst. Res. 7(1), 93–110
(1996)
16. Giustiniano, L., Bolici, F.: Organizational trust in a networked world: analysis of the interplay
between social factors and information and communication technology. J. Inf. Commun.
Ethics Soc. 10(3), 187–202 (2012)
17. Davis, F.D., Bagozzi, R.P., Warshaw, P.R.: User acceptance of computer technology: a
comparison of two theoretical models. Manag. Sci. 35(8) (1989)
18. Venkatesh, V., Morris, M.G., Davis, G.B., Davis, F.D.: User acceptance of information
technology: toward a unified view. MIS Q. 27(3), 425–478 (2003)
19. Braccini, A.M.: Does ICT influence organizational behaviour? An investigation of digital
natives leadership potential. In: Spagnoletti, P. (Ed.) Organizational change and information
systems. Lecture Notes in Information Systems and Organisation, vol. 2, pp 11–19 (2013)
20. Agrifoglio, R., Metallo, C., Black, S., Ferrara, M.: Extrinsic versus intrinsic motivation in
continued Twitter usage. J. Comput. Inf. Syst. 53(1), 33–41 (2012)
21. Agrifoglio, R., Metallo, C., Lepore, L.: Success factors for using case management system in
Italian courts. Inf. Syst. Manag. (In Press)
22. Nonaka, I.: A dynamic theory of organizational knowledge creation. Organ. Sci. 5(1), 14–37
(1994)
23. Miller, D., Shamsie, J.: The resource-based view of the firm in two environments: The
Hollywood film studios from 1936 to 1965. Acad. Manag. J. 39(3), 519–543 (1996)
24. Teece, D.J.: Capturing value from knowledge assets: the new economy, markets for
know-how, and intangible assets. Calif. Manag. Rev. 40(3), 55–79 (1998)
25. Alavi, M., Leidner, D.E.: Knowledge management and knowledge management systems:
conceptual foundations and research issues. MIS Q. 25(1), 107–136 (2001)
26. Schultze, U., Leidner, D.E.: Studying knowledge management in information systems
research: discourses and theoretical assumptions. MIS Q. 26(3), 213–242 (2002)
27. Malhotra, Y.: Integrating knowledge management technologies in organizational business
processes: getting real time enterprises to deliver real business performance. J. Knowl. Manag.
9(1), 7–28 (2005)
28. Spagnoletti, P., Resca, A.: The duality of information security management: fighting against
predictable and unpredictable threats. J. Inf. Syst. Secur. 4(3) (2008)
29. Rossignoli, C., Mola, L., Cordella, A.: Reconfiguring interaction through the e-marketplace: a
transaction cost theory based approach. In: Dwivedi, Y., Lal, B., Williams, M., Schneberger,
S., Wade, M. (eds.) Handbook of research on contemporary theoretical models in information
systems, pp. 311–324. Information Science Reference, NY (2009)
30. Zardini, A., Mola, L., Vom Brocke, J., Rossignoli, C.: The role of ECM and its contribution in
decision-making processes. J. Decis. Syst. 19(4) (2010)
31. Lindkvist, L.: Knowledge communities and knowledge collectivities: a typology of knowledge
work in groups. J. Manag. Stud. 42(6), 1189–1210 (2005)
32. Handley, K., Sturdy, A., Fincham, R., Clark, T.: Within and beyond communities of practice:
making sense of learning through participation, identity and practice. J. Manag. Stud. 43(3),
641–653 (2006)
33. Alvino, F., Agrifoglio, R., Metallo, C., Lepore, L.: Learning and knowledge sharing in virtual
communities of practice: a case study. In: D’Atri, A., Ferrara, M., George, J.F., Spagnoletti,
P. (Eds.) Information technology and innovation trends in organizations. Springer, Berlin
(2011)
34. Francesconi, A., Bonazzi, R., Dossena, C.: Solar system: a new way to model online
communities for open innovation. In: Spagnoletti, P. (Ed.), Organizational change and
information systems. Lecture Notes in Information Systems and Organisation, vol. 2,
pp. 205–214 (2013)
35. Schiavone, F., Metallo, C., Agrifoglio, R.: Extending the DART model for social media. Int.
J. Technol. Manag. 66(4), 271–287 (2014)
Part I
ICT, Organizational Innovation and Change
A Methodology for the Impact Assessment of a g-Cloud Strategy for the Italian Ministry of the Economic Development

Francesca Spagnoli, Francesco Bellini and Alessandra Ghi

Abstract The paper aims to provide a methodology for the socio-economic, technological and environmental impact assessment of a Cloud Computing strategy for the Italian Ministry of the Economic Development and, more specifically, for the Department for Communications. In order to develop a detailed and tailored model for implementing the g-Cloud strategy, we
analyse the current services and functions performed by the Department for
Communications of the Italian Ministry of the Economic Development, showing
the current ways of managing information flows within and outside the adminis-
tration. Starting from the available background analysis on the current state of the
art of the adoption of g-Cloud services in Europe and USA, we provide assump-
tions and hypotheses for the definition of the g-Cloud Strategy. We then compare
the requirements provided by the General Directorates of the Department for
Communications of the Italian Ministry of the Economic Development in order to
validate the hypotheses previously defined. By reviewing the approaches for the
impact assessment available from the literature review, we define the most effective methodology for assessing the potential impacts of g-Cloud strategies. The methodology considers four areas of impact: economic, social, legal and environmental
impacts. For each area of impact we identify specific indicators for the assessment
of efficiency and effectiveness of Cloud Computing initiatives in the Italian PA that
have been validated by a set of Cloud Computing experts.

Keywords: g-Cloud computing · Impact assessment · Methodology · Italian Ministry of the Economic Development

F. Spagnoli (&) · F. Bellini · A. Ghi
Università Degli Studi Di Roma La Sapienza, Rome, Italy
e-mail: francesca.spagnoli@uniroma1.it
F. Bellini
e-mail: francesco.bellini@uniroma1.it
A. Ghi
e-mail: alessandra.ghi@uniroma1.it

1 Analysis of g-Cloud State of the Art in Europe, in Italy and in the U.S.A.

1.1 European Cloud Strategies for the Public Administration

The European Economic and Social Committee, on January 20, 2011, decided to
draw up an opinion on the subject “Cloud Computing in Europe” [1], in accordance with art. 29, paragraph 2 of the Rules of Procedure. Based on the Europe 2020
strategy [2], and in particular on the Digital Agenda, the European Economic and
Social Committee (EESC) primarily aimed to gather and share experiences
developed by stakeholders and the market in the Cloud Computing field. The
opinion also had the objective of formulating a series of recommendations to
encourage Europe to position itself at the head of this promising field, with the help
of leading companies. The opinion highlighted potential economic benefits and
weaknesses of the Cloud Computing technologies, which are mainly due to a lack
of maturity. With reference to the economic model of Cloud infrastructures, the
Economic and Social Committee supported the following elements as the most
relevant for the full development of the economic model: a larger number of
potential users, the sharing and optimization of resources, user mobility, the
easy, flexible and transparent integration of the technical components, the distri-
bution of costs throughout the complete life cycle of the technology, the focus on
the core business and the growth opportunities offered by the creation of new fields
of activity. At the European level, instead, the weaknesses of Cloud technologies are
mostly related to the lack of a core governance structure, the multiplicity of reg-
ulations, the lack of reference points to support the users to evaluate the potential
risks, the fragility and the saturation of internet and servers, the risks related to
outsourcing and relocation of data and processes in other countries with a different
legal system, the complexity of the contracts available. However, the European
Union understands the importance of Cloud Computing strategies in order to
operate on a promising and strategic market. With specific reference to the adoption
of Cloud Computing in the Public Administration, the Committee states that these
technologies are fully legitimized in the general austerity context, as they do not
require huge initial capital investments. Furthermore, public investments could
generate a leverage effect by encouraging private national and European telecom-
munications operators to invest in Cloud Computing technologies.

1.2 U.S.A. g-Cloud Strategy

Vivek Kundra, CIO of the US Government, is the creator of the Federal Cloud
turning point [3], a first step for the technological modernization process that will
generate greater efficiency and transparency in the US government. Kundra is the
head of strategic IT investment plans, with a federal IT budget of over $70 billion a year (consistent with the $80 billion figure below). Hence, the US government stands as the largest buyer of technology in the world. The US Government developed a “Federal Cloud Computing Strategy”
on 8 February 2011 with the aim to provide guidance to federal agencies on
complying with the Cloud first strategy. The choice to turn to Cloud Computing
technologies has been strongly supported by Obama, in order to reduce the gov-
ernment operation costs and make it safer, more open and flexible. The expected value of IT services that will migrate to the Cloud is about $20 billion out of $80 billion, broken down by individual agency and mainly based on private Cloud deployments.
The decision framework for the migration of the US Government to Cloud technologies is based on three processes: selection, to analyse and identify the IT services to move and the timing; provision, to aggregate the demand, ensure interoperability and integration with the IT portfolio, provide security contracts, repurpose
legacy assets and redeploy freed resources; management, to shift IT mindset from
assets to services, build new skill sets, monitor the compliance of the provider with
SLAs and re-evaluate vendor and service models.
Within these processes, Kundra has first identified the IT operations that had not
produced relevant results, to redirect $25 million to more profitable activities. The
Cloud transformation has not only affected the technologies, but also the cultural and organizational processes of the US government. The processes started by the US government fit coherently within the broader dematerialization strategy and encourage the creation of new service delivery models. Within this context, the Data.gov site will be developed to gather and make available the information of the US government. Currently, the US government budget for the migration to Cloud Computing technologies is $19 billion. The US Government Cloud Computing strategy is aimed at changing how the institution thinks.

1.3 Italian Cloud Initiatives for the Public Sector

According to the Global Cloud Computing Scorecard [4] developed by the Business
Software Alliance, which drafted a global ranking of countries prepared to deploy
and use Cloud technologies, Italy is third in Europe and sixth in the World. In the
first positions of the Global Cloud Computing Scorecard are Japan, USA, France,
Germany and Australia. The research was based on several indicators, mainly related to the quality of infrastructures and the effectiveness of Italian legislation in terms of Cloud Computing, cybercrime and privacy security. A negative element of the Italian government for the full adoption of Cloud Computing technologies is its slow bureaucracy: for instance, the legislation on the digital signature, while in line with international standards, often encounters problems in its application.
Unfortunately, in terms of adoption of Cloud Computing technologies for the Public Administration, there are no positive data. Indeed, Italy lags behind in adopting infrastructures, platforms and applications residing on the network rather than on corporate servers. In addition to the physiological delay related to decision-making, there
is also the lack of a central governance. Compared with the growing attention that the
US federal government is devoting to the optimization of technological resources,
and the adoption of Cloud Computing technologies in Northern Europe, in Italy we
are far behind. The Italian Cloud and ICT as a Service Observatory of the Polytechnic Institute of Milan [5] has analysed the evolution of Cloud Computing in Italy over the past three years, through empirical ad hoc research involving 35 Public Administrations; in-house companies validated the results of the surveys. According
to the Observatory, the Cloud infrastructure could be very useful for the Italian Government in order to reduce the costs and inefficiencies of the current systems, to move to a new IT paradigm, and to lower the critical mass of investments and skills
required, also allowing the smaller Governments to access and benefit from a
widespread digitisation. However, the analysis of the current technological situation
of the Italian Government shows a fragmented infrastructure that is inefficiently
handled. Looking at the current Data Center scenario, managing the IT infrastructure is an important source of cost and complexity, as the central Government has 1033 Data Centers, plus 3000 Data Centers of local Governments.
The hardware of these Data Centers is managed unevenly and used only at a fraction of its capacity, with virtualisation techniques exploited at only 25 % of their potential. Consequently, IT spending, although not high in absolute terms, is inefficient and hides management costs of approximately 1 billion euro a year in human resources, while energy expenditure is estimated at 270–300 million euro. According to the Italian Observatory, by following a rationalisation scenario
and considering these three main aspects, in five years, the Italian Government could
achieve a saving of 3.7 billion euro. Moreover, if local Governments start to use virtualisation techniques more widely, they will overcome the 1 server–1 application paradigm and benefits could grow to 5.6 billion euro. The process of
rationalisation of resources through the Cloud infrastructure will require a set of actions, the most important being the rationalisation of the infrastructure (Data Centers), to guarantee medium-term returns that are easy to quantify, removing scepticism and pushing Governmental actors to action. However, in Italy several positive initiatives for the adoption of g-Cloud infrastructures were launched during 2012. For instance, one best practice is related to the Health sector, where the debate is more active. Indeed, several Local Health Authorities (ASL) have tested online payment solutions and adopted Cloud technologies (e.g., the ULSS of Asolo).

2 Services and Functionalities of the Department for Communications of the Italian Ministry of the Economic Development

In order to correctly analyse the services and functionalities of the Department for Communications, it is necessary to identify the different organization charts of the Department, which is constituted by 3 General Directorates, the Institute of
Communications and Information Technology (ISCOM), 16 Territorial
Inspectorates of the Ministry of the Economic Development (Abruzzo e Molise,
Calabria, Campania, Emilia Romagna, Friuli-Venezia Giulia, Lazio, Liguria,
Lombardia, Marche e Umbria, Piemonte e Valle D’Aosta, Puglia e Basilicata,
Sardegna, Sicilia, Toscana, Trentino—Alto Adige, Veneto) and the Staff Offices of
the Head Department. A detailed description of activities developed by the 3
General Directorates and the High Institute of Communications and Information
Technology (ISCOM) is required in order to identify the services provided.

2.1 General Directorate for Electronic Communication Services and Broadcasting

The General Directorate for electronic communication services and broadcasting is mainly aimed at granting authorizations for the sound and television system and, more generally, for all electronic communications services; acquiring royalties relating to concessions; providing grants to support publishing; monitoring progress on obligations in the electronic communications sector, and in particular on the RAI service contract; controlling premium rate services; participating in the work of national and international organisations; and formulating legislative and regulatory proposals in the field of communications.

2.2 General Directorate for Planning and Management of the Radio Spectrum

The General Directorate for planning and management of the radio spectrum is
aimed at allocating frequency band to the different radio-electrical companies,
managing the allocation of frequencies to station of different services, protecting
duly authorized services through the monitoring and control of the radio spectrum.
The Directorate manages the radio spectrum through a coordination and technical
assistance process for the resolution of specific problems with the collaboration of
the Regional Inspectorates and the National Center for the control of radio frequency emissions, which is a body set up within the International Telecommunication Union in the field of Communications.

2.3 General Directorate for the Regulation of the Postal Sector

The General Directorate for the regulation of the postal sector establishes the
conditions, prices and tariffs of services, defines the quality level of the postal
service and verifies the compliance of Poste Italiane spa, responsible for the provision of the service, applying penalties for breaches. This Directorate also sets the
“Program Contract” with Poste Italiane spa. in order to regulate the relationship
between the parties, ensures the compliance with the obligations of the service
provision and participates in the work of international and European organisations.

2.4 Institute of Communications and Information Technology (ISCOM)

The Institute of Communications and Information Technology (ISCOM) takes care of experimentation and research; technical support to companies, institutions and citizens through testing activities; data and network security; checks on ICT service quality; and training, specialisation and dissemination in the field of electronic communication, regulation and standardisation. The School of
Specialisation in Telecommunications is also part of the Institute of
Communications and Information Technology.

3 A Model for Assessing the Impact of Cloud in the Italian PA

According to the commonly agreed approach [6], the methodology for impact measurement we are proposing focuses on the inputs, outputs, outcomes and impacts approach, where:
• Inputs are the investments made in, or the resources required to produce a
product or develop/undertake an activity.
• Outputs are the products or services provided (e.g. number of services created,
papers published, events held, etc.).
• Outcomes are the immediate changes resulting from an activity—these can be
intentional or unintentional, positive or negative (e.g. employment, increased
usability and personalisation).
• Impacts are the net difference made by an activity after the outputs interact with society and the economy (e.g. higher and easier access to cloud services in new member countries leading to an increase in local human resources).
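To make the input-output-outcome-impact chain concrete, the sketch below shows one way an assessment entry could be recorded and tagged with its layer; the class, field names and all values are our own illustrative assumptions, not part of the methodology itself:

```python
from dataclasses import dataclass

@dataclass
class AssessmentEntry:
    """One measured item, tagged with the assessment layer it belongs to."""
    layer: str        # "input" | "output" | "outcome" | "impact"
    description: str
    value: float
    unit: str

# Hypothetical entries for a g-Cloud migration, following the definitions above.
entries = [
    AssessmentEntry("input", "Budget invested in the g-Cloud migration", 2.0e6, "EUR"),
    AssessmentEntry("output", "Services migrated to the cloud platform", 12, "services"),
    AssessmentEntry("outcome", "Average time saved per user per month", 1.5, "hours"),
    AssessmentEntry("impact", "Net yearly saving after interaction with society", 3.0e5, "EUR/year"),
]

for e in entries:
    print(f"{e.layer:>8}: {e.description} = {e.value} {e.unit}")
```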
The methodology presented in this chapter is based on a quali-quantitative approach to impact assessment and builds on the principles of the Cost-Benefit analysis [7, 8] and of the Multi-Criteria analysis [9]. These two methods are seen as complementary to one another, as they help framing both impacts that can be represented in monetary form and impacts that are better described in non-monetary terms (such as social or technological impacts). The combination of the two methods enables us not only to consider a wide spectrum of impacts, but also to combine variables that need to be expressed in different ways. The methodology considers four areas of impact: economic, social, legal and environmental. For each area of impact we identify specific indicators, to be validated by the experts in the following sections, for the assessment of Cloud Computing initiatives in the Italian PA (Tables 1, 2 and 3).

Table 1 Economic impact indicators

Economic indicators
Improve service/product/system quality
Reach more users
Improve the access to large amounts of data. Improve the
possibility to exploit large amounts of data (more efficient data
analysis)
More efficient data exchange
Improve scalability
Improve reliability
Improve recoverability
Improve portability
Reduce the time needed to deliver a service
Ability to better target users/beneficiaries’ needs
Reduce hardware costs
Reduce connectivity costs
Reduce maintenance cost
Lower software development costs
Cost reduction due to increment in software reusability
Cost reduction due to improvement of test-deploy-rework cycle
management
Cost reduction due to less process break/system failure
Cost reduction due to energy saving
Indicators of cloud ROI cost ratios
Availability performance compared to current service levels
CAPEX cost on premise ownership versus cloud
OPEX cost for on-premise ownership versus cloud
Cost effective cloud workload utilization
Percentage of IT asset workloads using cloud
Indicators of cloud ROI profitability
Rate of new product market acquisition
Indicators of cloud ROI saving models
Rate of time change of TCO reduction by cloud adoption
Rate of cost change of TCO reduction by cloud adoption
Increase in provisioning speed
License cost reduction from cloud adoption
Indicators of “perceived efficiency”
Content retrieving time-saving
Time savings accessing or using the service
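To illustrate how the cloud ROI cost-ratio indicators in Table 1 (CAPEX and OPEX for on-premise ownership versus cloud) could be operationalised, here is a minimal sketch; every figure and the five-year horizon are invented for the example, not taken from the Ministry's accounts:

```python
# Hypothetical 5-year comparison for the CAPEX/OPEX cloud ROI cost-ratio
# indicators listed in Table 1. All numbers are illustrative assumptions.
YEARS = 5

# On-premise ownership: up-front hardware (CAPEX) plus yearly running costs (OPEX).
on_prem_capex = 500_000            # EUR, servers and storage bought up front
on_prem_opex_per_year = 120_000    # EUR/year: energy, maintenance, staff

# Cloud: no up-front hardware, pay-per-use operating cost only.
cloud_opex_per_year = 180_000      # EUR/year: subscription and usage fees

on_prem_tco = on_prem_capex + on_prem_opex_per_year * YEARS
cloud_tco = cloud_opex_per_year * YEARS

print(f"On-premise TCO over {YEARS} years: {on_prem_tco:,} EUR")
print(f"Cloud TCO over {YEARS} years:      {cloud_tco:,} EUR")
print(f"TCO ratio (on-premise / cloud):    {on_prem_tco / cloud_tco:.2f}")
```

A ratio above 1 would signal, under these invented figures, that cloud provision is cheaper over the chosen horizon.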


Table 2 Legal impact indicators

Indicators of “legal offered efficiency”
Improve transparency level of the conditions for the provision
of the cloud services
Improve fairness of the conditions for the provision of the cloud
services
Ensure security of personal data
Ensure fair collection of personal data
Ensure fair processing of personal data
Ensure fair transfer of personal data
Ensure confidentiality of PA data
Assume liability for loss of data
Assume liability for failure to provide the cloud services
Assume liability for defective provision of the cloud services
Minimize violations of IPRs
Frequency of defective responses (SLA response error rate)
Indicators of legal “perceived efficiency”
Transparency of conditions of use of the cloud services
Fairness of conditions of use of the cloud services
Easy procedures for accessing personal data by data subjects
User friendly procedures for exercising rights by data subjects
Notice-and-take down procedures to notify violations of IPRs

The process for the development of the methodology for assessing the socio-economic, environmental and legal impact of the Cloud Computing model for the Italian Public Administration is based on four steps:
1. Background analysis and literature review.
2. Definition of impact indicators.
3. Validation by a set of experts of the impact indicators.
4. Testing of the methodology on currently available initiatives in the Italian PA.
With reference to efficiency, we identified some indicators to be measured quantitatively and to be further expressed in monetary terms. The assessment of efficiency will be made in two different ways, using two different viewpoints: the first, called “offered efficiency”, will be calculated by analysing the technological advances brought by the Cloud Computing infrastructures; the second, called “perceived efficiency”, will be calculated by asking stakeholders and end-users to describe the benefits they experience by using the service/product offered by the PA through Cloud Computing infrastructures. Starting from the literature review and the previous considerations, we identified the following list of indicators for assessing the social, economic, legal and environmental impacts of Cloud Computing initiatives/projects in the Italian PA. The following tables provide the list of indicators that could be used in assessing output efficiency (Tables 4 and 5).
Table 3 Technical impact indicators

SOA technical indicators
Robustness: any system must be capable of withstanding errors
which should not affect system stability
Security and confidentiality
Extensibility: the system must allow support for a variable
number of users
Integration: the system must have the ability to communicate
with other systems that they supported SOA
Management and provisioning: the system must support the management and monitoring of implemented services
Based on open standards
Interoperability: the ability of a system or product has to work
with other systems or products without special effort
Portability: it is possible that the application may be available
on all machines regardless of the system architecture
Availability: to be freely available or that it is ready for use or
used
Persistent: ensure the ability to store information of the system
to return to the previous or retrieve information
On time: the response of the system should be given within an
appropriate timeframe
Reliable or deterministic: the system should give the same result when operating with the same operators
Transactional: the system should be able to return to its state
before the transaction started
Modifiability: the ease with which a change can be made to the application architecture
Support for extended web services protocols like
decentralization, security, flexibility, ubiquity or extensibility

Once the indicators are defined for measuring the impacts of the identified assessment categories, the third assessment step consists in measuring the related costs and benefits. Consistent with the principles of cost-benefit analysis, the benefits generated by a project/strategy may be evaluated by identifying society’s willingness to pay for obtaining that positive impact. The final assessment of a project/strategy’s efficiency will be made by using the following indicators:
• Economic net present value (ENPV*) perceived: the difference between the discounted total economic benefits and costs. The benefits will be evaluated as (1) the total willingness to pay of the users (i.e. the average willingness to pay multiplied by the total number of users), plus (2) the average time savings (in hours) per user multiplied by the average hourly salary of researchers/workers/users and by the total number of users. The costs correspond to the total budget of the project.

Table 4 Environmental impact indicators
User count: number of provisioned users for a given application
Server count: number of production servers to operate a given
application
Device utilization: computational load that a device (server,
network device or storage array) is handling relative to the
specified peak load
Power consumption per server: average power consumed by a
server
Power consumption for networking and storage: average power
consumed for networking and storage equipment in addition to
server power consumption
Data center power usage effectiveness (PUE): the ratio of the total data center power consumption to the power consumption of the IT equipment
Data center carbon intensity: amount of carbon emitted to
generate the energy consumed by a data center, depending on
the mix of primary energy sources (coal, hydro, nuclear, wind,
etc.) and transmission losses.
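As a reading aid for the PUE row in Table 4, the definition can be written as a simple ratio; the 1.5 MW and 1.0 MW figures below are purely illustrative values, not measurements from the Ministry's data centers:

```latex
\mathrm{PUE} = \frac{P_{\text{total facility}}}{P_{\text{IT equipment}}},
\qquad \text{e.g.} \qquad
\mathrm{PUE} = \frac{1.5\ \text{MW}}{1.0\ \text{MW}} = 1.5
```

A PUE close to 1 therefore indicates that almost all of the data center's energy reaches the IT equipment.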

• B/C* ratio perceived, i.e. the ratio between discounted economic benefits and
costs (as above).
• WTP/C*: the Willingness to Pay is evaluated by the stakeholders and end-users and is compared to the costs of the project. The Willingness to Pay of a user indicates how much that user is willing to pay for the service. If the total Willingness to Pay (WTP, calculated by multiplying the average value declared by the users by the total number of users indicated in the project scenario) is greater than the cost of the project, i.e. the ratio WTP/C* > 1, it means that the service can be commercially sold on the market, or at least that its marketability can be assessed. Otherwise, WTP/C* < 1 means that most likely the project cannot sell such a service, so it would be necessary to investigate alternative business models or at least think about mixed business models (finance and marketing).
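These three indicators can be restated in formula form; the sketch below follows the definitions just given, while the yearly discounting with rate r over a horizon of T years is our own assumption, since the text only states that benefits and costs are discounted:

```latex
% Yearly benefits as defined above: total willingness to pay plus valued time savings
% (N = total number of users, h_avg = average time saving per user in hours,
%  w_avg = average hourly salary, C_t = project costs in year t, C = total budget)
B_t = \mathrm{WTP}_{\mathrm{avg}} \cdot N + h_{\mathrm{avg}} \cdot w_{\mathrm{avg}} \cdot N

\mathrm{ENPV}^{*} = \sum_{t=0}^{T} \frac{B_t - C_t}{(1+r)^{t}}, \qquad
B/C^{*} = \frac{\sum_{t=0}^{T} B_t\,(1+r)^{-t}}{\sum_{t=0}^{T} C_t\,(1+r)^{-t}}, \qquad
\mathrm{WTP}/C^{*} = \frac{\mathrm{WTP}_{\mathrm{avg}} \cdot N}{C}
```

With these definitions, WTP/C* > 1 and WTP/C* < 1 reproduce the marketability reading given in the last bullet point.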

3.1 Results from the Interviews with the Experts of Cloud Computing for the Italian Public Administration

During a first round of interviews with a set of experts, the indicators previously mentioned were validated and consequently reduced, in order to provide a second group of experts with only the indicators that are effectively relevant for the analysis of a g-Cloud strategy for the Italian PA.
Table 5 Social impact indicators
Changes in the volume of digitally available cloud resources
No. of services offering customisable access to content
Composite index of usability
Composite index of personalization
Composite index of expected impacts on improvements in way
citizens experience PA online services
Support knowledge transfer
Make available high-quality knowledge/information to citizens
Support democratic processes/democratization
Enable diversity and individual expression
Make highly innovative services available to citizens
Develop services that will positively impact on citizens’
everyday life
Reduce the digital divide
Flexibility for personalisation on a large scale/high interface
adaptability
Reduce the work of the users (more operations will be
automated)
Improve the way in which users communicate and collaborate
with each other (the quality of the collaboration)/facilitate social
interaction
Improve trust among PA target users
Improve citizens’ trust in public administration
Support network creation/collaboration of enterprises working
for the PA
Support network creation/collaboration among citizens

Interviews were conducted with fourteen
major experts in the Cloud Computing field for the Italian PA, to explore the possible Cloud adoption process and outcomes for the Italian Public Administration. The
open-ended interviews are one of the approaches used among researchers, and an
increasing number of researchers are using multi-methodology approaches to
achieve broader and often better results. Interviewing is currently undergoing not
only a methodological change but a much deeper one, related to self and other [10].
We have structured each interview around six open-ended questions. The experts have been selected according to their experience and knowledge of national and international experiences of Cloud Computing services adoption in both private and public sectors, so that they can effectively provide a real and correct analysis. The interviews were conducted in different ways: on Skype, face-to-face, by phone and by e-mail. In this paragraph we will focus on the analysis of the results of the six
open-ended questions, presented in a single section. The experts were invited to
express an opinion in terms of assessment of the benefits and legal issues, mana-
gerial and operational impacts of Cloud strategies for the Italian PA. With regard to
the first question, about the possibility of Cloud Computing being a winning
resource from an economic, operational and social point of view in the future of
the Italian PA, all the experts answered affirmatively, pointing out a reduction in
operating costs, especially in the short term, on condition that Cloud strategies
are implemented effectively, with a focus on process control and data
security. With reference to the second question, about the real and effective benefits
of Cloud Computing for the Italian PA, experts have pointed out the relevance of
cost reduction (costs are incurred only for effective use), accessibility for all,
platform standardization, increased data security, and the continuous monitoring
and improvement of the overall processes of service delivery to users. The third
question was related to the possibility that a Cloud infrastructure might increase
the legal and personal data management issues of citizens using Public
Administration services, and focused on the measures that may be adopted to reduce
such risks. All the experts highlighted that the Cloud localization issues could be
addressed, and the necessary protective measures taken, through the adoption of
specific legislation on data protection and the use of structured agreements
between the cloud services provider and the Public Administration to protect
privacy. With reference to the
fourth question, about the possibility that Cloud Computing might increase the
operational risks (e.g., disaster recovery) of the Italian PA and what measures
could be adopted, most of the experts answered negatively. However, the experts
noted that any hypothetical operational risks arising from the high levels of
integration can be managed through a distributed cloud architecture, using a
modeling approach to the Cloud for the provision of common services; they also
highlighted the importance of service delivery contracts and of the contractors'
professional skills. With regard to the
fifth question, about the possibility for the Italian PA to adopt a Cloud strategy
in a short time, there is uncertainty among the experts. They answered that,
although the intention of adopting Cloud Computing in the Italian PA exists, the
current regulatory and contractual barriers, the economic barriers (such as the
shift from capital costs to operating costs), and the transition costs incurred on
the expiry of IT service delivery contracts every 3–5 years are slowing adoption
in the PA. With reference to the sixth question, on which typology of Cloud
infrastructure
to implement, whether centralized or not, the experts suggested the possibility for
the government to manage a unified and centralized cloud, based on a distributed
private infrastructure. Other scenarios put forward by the experts concerned the
future convergence of different private Clouds into a single public Cloud, or the
creation of a network of regional data centers on which to consolidate the
municipalities' data centers, also implementing cooperation policies with private
Cloud service providers. The "Community" Cloud model, based on the sharing of
resources and services, could be a model for the future.

3.2 Results from the Validation of the Indicators Through an Online Survey

The survey was developed through the web platform www.surveygizmo.com and was
online from the 2nd of June until the 14th of July. The survey was sent
by email to the major experts in the Cloud Computing field for the Italian Public
Administration. It was difficult to identify experts in Cloud Computing
infrastructures for the Italian PA; for this reason, we decided to focus our
research only on experts who could provide a reliable and accurate analysis of the
indicators. The questionnaire was sent to more than 20 experts in the field and was
included in the ANCITEL web news at the following link: http://portale.ancitel.it/
evidenza.cfm?i=686. The survey was composed of sixteen questions, of which two were
related to general information, six were open questions (the same included in the
interviews) and eight were closed questions proposed on a Likert scale (1–5),
focusing on the validation of the potential indicators for evaluating an Italian
g-Cloud strategy. In this section we focus on the analysis of the results of the
eight closed questions. For this analysis we considered the indicators that scored
4 or more in the survey. According to the experts,
with reference to the economic indicators, the main benefits for the Italian PA were
related to: improving the quality of products/services/systems, reducing the time
needed to develop a product/service, improving scalability, practices for data
exchange, reliability and performance in comparison to the current levels of service.
In terms of cost indicators for the Italian PA, Cloud Computing infrastructures
can reduce costs due to energy savings, maintenance and hardware costs. With
reference to the relevance of efficiency indicators perceived by the users of the PA,
the experts indicated that the most relevant indicators for the Italian g-Cloud
strategy are the reduction of the time required for the storage of digital data and the
time required for data recovery. In terms of the efficiency offered in relation to legal issues,
the experts stated that the most relevant indicators are the ability to ensure the
proper recovery and transfer of personal data and the availability of the error
rate of responses (SLA). Less important, but still valuable, are the improvement
of the fairness of the conditions for the supply of services, the guarantee of the
confidentiality of PA data, and the unequivocal identification of the liable party
in case of data loss. With reference to the technical indicators
related to a potential Italian g-Cloud, the most relevant are: robustness of the
system, extensibility of the system, management and monitoring of implemented
services, availability of the system, ability to provide answers quickly, reliability of
the system and transactional system. The experts maintained that a Cloud Computing
infrastructure for the Italian PA will not have great relevance in terms of social
impacts; the only relevant indicator for this specific analysis is the one that
evaluates the ability of a g-Cloud infrastructure to offer more innovative services
that can positively impact the lives of Italian citizens. In terms of environmental
impacts, the experts maintained that this field is very relevant for an Italian
g-Cloud strategy, as these infrastructures can deliver significant environmental
benefits. In order to evaluate the impact of a g-Cloud infrastructure on the
environment, the experts maintained that the following indicators can effectively
contribute to this analysis: number of users of each application, number of
servers required to run each application, average power consumed by networking and
storage devices, Data Centre Power Usage Effectiveness, and the ratio between the
energy consumption of a traditional infrastructure and that of a Cloud
infrastructure.
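As a concrete illustration of the selection rule used throughout this analysis,
the short Python sketch below retains only the indicators whose rating reaches the
threshold of 4; the indicator names and scores are invented, and the assumption
that the rule is applied to the mean rating is ours, while the 1–5 Likert scale
and the cut-off of 4 come from the survey design described above.

  from statistics import mean

  # Hypothetical expert ratings (Likert 1-5) for candidate indicators
  ratings = {
      "energy savings": [5, 4, 5, 4, 4],
      "maintenance and hardware costs": [4, 4, 5, 3, 4],
      "reduce the digital divide": [2, 3, 3, 2, 3],
  }

  THRESHOLD = 4  # only indicators rated 4 or more are retained

  validated = {name: mean(scores)
               for name, scores in ratings.items()
               if mean(scores) >= THRESHOLD}
  print(validated)  # indicators kept for the g-Cloud assessment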

4 Conclusions

It is necessary to reorganize processes to increase productivity and improve the
performance of public services, while also reducing costs. In addition, Cloud ser-
vices represent an effective and inexpensive way to enable e-Government services
to be efficient, transparent and to improve participation, sharing and interopera-
bility, in order to better meet the needs of Italian citizens. In small Public
Administrations it is difficult to implement IT infrastructures, because of the
long lead times and the complex processes related to the acquisition of the
infrastructural components. Cloud infrastructures can solve these issues, as they
reduce the need to build and manage the IT infrastructure internally and shorten
the time of acquisition of the technology [11]. The transformation process will
not be instantaneous; the results of this technological challenge can only be
achieved through a
strong and consistent long-term Roadmap to be developed in close collaboration
between three major players: the Public Administration, citizens and the IT
industry, which will provide secure and comprehensive services tailored according
to the evolving needs of the Public Administration. The Public Administration will
start by using a private Cloud infrastructure; however, only by implementing a
hybrid model, which allows a homogeneous set of applications to be provided
anywhere, anytime and from any device, can the Italian PA fully benefit from the
advantages of Cloud solutions.

References

1. European Commission: Communication from the Commission to the European Parliament, the
Council, the European Economic and Social Committee and the Committee of the Regions:
Unleashing the Potential of Cloud Computing in Europe. http://eur-lex.europa.eu/legal-
content/EN/TXT/?uri=CELEX:52012DC0529 (2012)
2. European Commission: Europe 2020: A European strategy for smart, sustainable, and
inclusive growth. http://ec.europa.eu/europe2020/europe-2020-in-a-nutshell/index_en.htm
(2010)
3. Kundra, V.: Federal cloud computing strategy. The White House, Washington, DC. https://www.
dhs.gov/sites/default/files/publications/digital-strategy/federal-cloud-computing-strategy.pdf (2011)
4. Business Software Alliance: Global cloud computing scorecard, country report Italy. http://
portal.bsa.org/cloudscorecard2012/assets/pdfs/country_reports/Country_Report_Italy.pdf
(2012)
5. Corso, M., Mainetti, S., Piva, A.: La via del cloud per la rivoluzione digitale nella PA.
www.agendadigitale.eu (2012). Accessed 20 Nov 2012
6. UNDP Evaluation Office: Guidelines for outcome evaluators. http://web.undp.org/evaluation/
documents/HandBook/OC-guidelines/Guidelines-for-OutcomeEvaluators-2002.pdf (2002)
7. Boardman, A.E.: Cost-Benefit Analysis: Concepts and Practice, 3rd edn. Pearson Prentice Hall,
Upper Saddle River (2006)
8. Brent, R.J.: Applied cost-benefit analysis, 2nd edn. Edward Elgar, Cheltenham (2007)
9. Köksalan, M., Wallenius, J., Zionts, S.: Multiple criteria decision making: from early history
to the 21st century. World Scientific, Singapore (2011)
10. Fontana, A., Frey, J.: Interviewing: The art of science. In: Denzin, N.K., Lincoln, Y.S. (Eds.)
Handbook of Qualitative Research, pp. 361–376. Sage Publications, Thousand Oaks (1994)
11. DigitPA: Raccomandazioni e proposte sull’utilizzo del cloud computing nella pubblica
amministrazione. http://archivio.digitpa.gov.it/sites/default/files/forumPA2012/
RaccomandazioniCloud.pdf (2012)
Italy’s One-Stop Shop: A Case
of the Emperor’s New Clothes?

Walter Castelnovo, Maddalena Sorrentino and Marco De Marco

W. Castelnovo (✉)
University of Insubria, Varese, Italy
e-mail: walter.castelnovo@uninsubria.it

M. Sorrentino
University of Milan, Milano, Italy
e-mail: maddalena.sorrentino@unimi.it

M. De Marco
International Telematic University UNINETTUNO, Rome, Italy
e-mail: marco.demarco@uninettunouniversity.net

Abstract The setting up of municipal One-Stop Business Shops (SUAPs) plays a
vital role in the Italian legislator’s work to simplify government relations with
business and industry. The paper analyzes the outcome of the SUAP simplification
programme launched in 1998 and, using secondary data sources, shows that not all
the targets have been achieved. The paper’s findings indicate that the shortfall
can clearly be attributed to the fact that the entire SUAP-centred simplification
process has suffered from legislative overkill while neglecting to address the
all-important practical side of implementation, and the impact of the new laws on
the behaviour of the actors and the decision makers that populate the different levels
of the country’s PA.

Keywords Simplification · e-Government · One-stop government · Administrative reform

1 Introduction

Governments are introducing new e-government services every day and bench-
marking is an important mechanism for keeping track of developments [1]. Italy’s
international ranking on the high-income countries’ e-government ladder has never
left the lower rungs in past years, but this trend was inverted in 2010–2011 when
Italy was promoted by both the UN [2] and the EU, with the latter ranking it
significantly higher in its online services scorecard [3]. That advance was partly
thanks to the virtualization—to comply with Presidential Decree 160 of 2010—of
the One-Stop Business Shops (Sportelli Unici per le Attività Produttive or ‘SUAP’).
In fact, in tandem with the launch of other web-based services, the SUAP was
pivotal to the government’s policies for administrative simplification and to cut the
excessive red tape imposed on businesses, especially the small, medium and micro
enterprises (SMMEs). Both the OECD and the EU recognized that Italy had managed
to reduce the administrative burdens on businesses and improve the quality of
regulation, which they considered essential to the country’s competitive growth,
and the virtualization of the SUAPs completed a process of change aimed at
streamlining the PA’s relations with businesses that had begun in 1998. Yet the
competitiveness indicators used to measure how simple it is to set up and operate
a business in Italy, and the satisfaction of Italian SMMEs with the PA’s delivery
of services, suffered a further decline.
To understand the reasons for this apparent paradox, this qualitative research
uses the Italian government’s attempt to introduce the One-Stop Business Services
and Information Shop as its case study. The aim of the paper is to answer the
following research question:
• Why is it so difficult to deliver the One-Stop Business Shop promise? That is, the
promise that citizens can get all the services they need under one physical or
virtual roof [4].
Bringing together three research strands, i.e., e-Government, Information
Systems (IS), and Public Management (PM) studies, the paper attempts to shed
light on both the mechanisms that regulate the functioning of the One-Stop
Business Shop in Italy and the factors that influence its development. To address the
research question, the paper shows how the entire SUAP-centred simplification
process has suffered from legislative overkill, while the actual implementation
processes and the impact of the new laws on the behaviour of the actors and the
decision makers at the different levels of Italy’s PA have been ignored. The failure
to take account of the organizational aspects has, in turn, prevented a robust
evaluation of e-Government initiatives [5].
The remainder of the paper is organized as follows. After a brief review of the
literature on one-stop government and a description of the approach taken in this
article, Sect. 4 will analyse the implementation status of the One-Stop Business
Shop (or, to use the Italian acronym, SUAP), retracing how the programme was
developed in legislative ‘fits and starts’ from 1998 to 2010. Sections 5 and 6 use
secondary data sources to investigate the seeming contradiction between Italy’s
promotion in the international rankings and the fact that the SUAP laws have done
nothing to either raise the country’s economic competitiveness or reduce SMME
administrative burdens. In addition, the paper pinpoints and discusses several
problems that continue to prevent Italy’s One-Stop Shops from becoming fully
operational that even the latest legislative measures have failed to remedy. The
paper closes with Sect. 7, in which the finger of blame for many of the critical
aspects that prevent the SUAP programme from generating the expected benefits
can be pointed at the “innovation through legislation” approach, the same approach
that has stymied many of Italy’s PA reform programmes in the past 20 years [6].

2 Related Work

Regardless of its many guises and methods of implementation, the idea of one-stop
government is mainly grounded on the bundling and/or integration of public ser-
vices that can be accessed from a single point of contact; the re-design of the
services architecture and the service delivery from a citizen-centred viewpoint; and
the availability of multiple delivery channels, including the online channel.
Over the years, the scramble to define ‘one-stop government’ has been led by the
supranational organizations (i.e., the UN, OECD, World Bank, EU) and the large
consulting firms. Meantime, one-stop government has captured the attention of the
e-Government academic community, which sees it as pivotal to each e-Government
system [7–12].
More recently, the topic of one-stop government was taken up by IS scholars,
above all, to delve into specific conceptual aspects, especially:
• public agency interoperability/integration to support the execution of
inter-organizational workflows, as required by the single-point-of-contact idea
itself [1, 13–18];
• the study of inter-organizational transformation/innovation processes and
reengineering process models from the perspective of inter-organizational
cooperation between different public agencies [19–25];
• the study of business and service delivery models, particularly in terms of the
single point of contact’s delivery of online services [26–29].
One-stop government and the integration of citizen services has been amply
debated in the PM literature since the 1990s [30–32], especially from the
client/user-based perspective [33–36]. More recently, significant interest was
revived by the public administration reform discourse of the post-New Public
Management (NPM) era [37–43], which, in particular, sees the one-stop government
model as an example of politically driven centralization to rectify problems of
service delivery coordination by vertically reintegrating devolved and outsourced
service delivery functions into new centrally controlled service agencies [4].
The three research fields of e-Government, IS and PM have investigated the
one-stop government issue by zooming in on the various aspects that pertain to their
discipline, even though the concept of one-stop government is multidimensional,
traversing, as it does, several domains: from governance and inter-organizational
cooperation to the reengineering of business processes and ICT-based organizational
transformation. This multifaceted issue therefore calls for an interdisciplinary
approach to the study of one-stop government models, which, as far as the authors
can ascertain, the literature has not yet developed. The paper aims to narrow that gap.

3 Research Strategy

The empirical study of organizational change and transformation requires that the
analysis of the content and process of change should not be abstracted from the
context that gives that change form, meaning and dynamic [44]. As a result, to
interpret the true state of play it is necessary to take a dual approach that marries
attention to agency with the recognition that organizations are contextually
embedded phenomena with ‘deep structures’ that are frequently reproduced [44].
To shed some light on what is happening in Italy’s One-Stop Business Shop
domain and why the endeavour has produced disappointing results to date, the
article assumes that the process to implement the PA’s complex reform programme,
which involves various constituencies, has been strongly conditioned by the tension
between the typically ideal model (i.e., the online One-Stop Shop [11]) and the
constraints imposed by the structural and cultural features of the national admin-
istrative system. Hence, institutions are assigned a focal analytic position as an
explanatory variable of the observed outcomes [44, 45].
A complete evaluation effort would have meant conducting an in-depth and
rigorous analysis in terms of scope and methods: “evaluations of comprehensive
reforms are likely to require both quantitative and qualitative evidence” [3, 46].
Therefore, in line with the explorative design adopted, the selected evidence used
here focuses on specific features of the reform package, its temporal evolution and
the perceptions of just one category of stakeholders, i.e., the Italian SMMEs. The
evidence includes some authoritative secondary sources of information that sys-
tematically photograph the country’s SMME system and its business relations with
the bureaucratic machine. A historical data set is used to make a diachronic
interpretation of the phenomena in question.
The next sections document how Italy’s One-Stop Business Shop programme
has veered off the “ideal path” charted by the model proposed by Hogrebe et al.
[11].

4 Setting the Scene

The aim of the initial model used to implement the One-stop Business Shop (in
Italian, Sportello Unico per le Attività Produttive—SUAP) was to simplify the
Italian PA’s business authorization process [47, 48]. Law 447/1998 was the first
attempt to introduce the SUAP and called for each municipality to set up a one-stop
business services and information shop, either independently or through
inter-municipal cooperation. To streamline the business authorization procedures
and to give the entrepreneurs a single point of contact for expediting the require-
ments for the start-up, change of activity or closure of a business, the SUAP was
tasked with coordinating all the public agencies involved in the box-ticking process
(e.g., local healthcare authorities, fire brigade, provincial and regional governments,
regional environment authorities and other local agencies). However, the initiative
immediately came up against hurdles that prevented it from achieving its goal to
simplify and reduce the bureaucratic burden on businesses. This triggered a spate of
legislative interventions to raze those barriers.
The objective of Law 340/2000, which deregulated and abrogated specific laws
on matters that now came under the jurisdiction of the SUAP, was to compress the
business authorization timeframe. Law 229/2003 introduced the standard practice of
“tacit consent” or what is called the ‘Statement of Business Start-up’ (in Italian ‘DIA’
or ‘Denuncia d’Inizio Attività’), which Law 122/2010 then replaced with the
Certified Notification of Business Start-up (in Italian SCIA or Segnalazione
Certificata di Inizio Attività). These two interventions eliminated the lengthy wait for
authorizations, permits or licenses by introducing a system that enabled the business
owner to commence activity right away on submission of the DIA or SCIA.
The regulatory framework was further pruned in 2007, when Law 40 introduced
the Single Statement (‘Comunicazione Unica’ or ‘CU’) to enable a new company to
be set up in just one day. Moreover, Law 40/2007 made electronic transmission
mandatory for both company listing in the Register of Companies and for the
exchange of information and documents between the relevant public agencies.
Although this had an indirect impact on the SUAP, it played a major role in
embedding the principle that the PA and business should interact and communicate
using exclusively electronic means. That principle was fully incorporated into the
SUAP framework by Law 133/2008 to further refine the SUAP model by man-
dating both the online delivery of the full range of business services/information
and the electronic transmission of business applications. Law 133/2008 also
allowed for the transposition of European Directive 2006/123/EC (the “Services
Directive”), which led to the enactment of Law 160 in 2010 and the launch of the
website www.impresainungiorno.gov as the national Single Point of Contact
(SPC) to give business users online access to information and to enable them to
complete their administrative procedures online.
Law 160/2010 was the last in the SUAP series and was enacted in 2010 to
impose SPC accreditation. This law set out the basic technological requirements
that the SUAPs had to comply with to qualify as full-fledged operators and gave
them a deadline of 1 January 2011. Law 160/2010 also forestalled any further
delays in the government’s SUAP mission by mandating that municipalities unable
to satisfy these requirements must delegate the running of the One-Stop Business
Shop to the local Chamber of Commerce, thus overriding the previous requirement
for the parties to enter a formal voluntary agreement.
Spurred by Law 160/2010, approximately 94.5 % of Italian municipalities had a
SUAP up and running in one of the three prescribed forms by June 2013, i.e., as a
directly managed municipal One-Stop Business Shop; as an inter-municipal
cooperation effort; or fronted by the local Chamber of Commerce. Decisive impetus
came from two of the law’s provisions: the obligation for the municipal SUAPs to
obtain national SPC accreditation; and the automatic transfer of the management of
the SUAP to the local Chamber of Commerce should the municipality fail to
comply with the 1 January 2011 deadline.

The current SUAP landscape thus offers two vistas [49, p. 164]: on the one side,
the Chamber of Commerce SUAPs, for which both the service levels (the same for the
whole of Italy) and the operational levels (how many and which electronic practices
are managed, by local area, economic activity, type of practice, etc.) are known;
on the other, the SPC-accredited municipalities, which differ significantly on both
counts, given that each player adopts different technical and organizational
solutions.
In fact, despite the clearly defined basic requirements, the SPC-accredited
municipalities have equipped their front- and back-end functions with the ICT
solutions deemed most appropriate for their particular organizational structure.
Clearly, this has created nationwide divergences in organizational geometry and the
use of non-standardized forms to comply with the same requirement.

5 Framing the SUAP Programme

The legislative trail left by the SUAP since 1998 was necessary to both introduce
further regulatory and procedural simplification and to set the SUAP on a more
technology-driven course, the idea being to ultimately transform it into a virtual
service centre that delivers information and services to business users via the new
digital technologies, the internet and the new media.
Italian Law 160/2010 was the catalyst needed to turn the SUAP into a fully
connected One-Stop Business Shop that uses exclusively ICT to deal with business
applications, statements, reports and communications.
The online One-Stop Business Shop can be considered an advanced
e-Government service to all effects and purposes and, hence, a basic pillar of Italy’s
digitization policies that aim to implement the directives issued by the supranational
EU. In fact, by the end of 2010 the European Commission’s DG Information
Society’s annual e-Government benchmark [50] had promoted Italy in its European
ranking of online business services. In particular, the full online availability of the
Italian online business services surveyed by the report spurred the country to pole
position with 100 % availability versus 88 % in 2009. Moreover, Italy’s online
business services sophistication indicator (according to the parameters of the
European Commission’s 5-stage maturity model: (i) information, (ii) one-way
interaction, (iii) two-way interaction, (iv) transaction, and (v) targetisation/
automation) rose from 86 % in 2009 to 99 % in 2010. So it would seem that Law
160/2010
has effectively produced a positive result, at least for what concerns Italy’s drive to
establish e-Government.
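A simplified illustration of how a sophistication score of this kind could be
derived from the five-stage model follows; the services and the stages reached are
hypothetical, and the benchmark’s actual weighting scheme is more elaborate than
this plain average.

  # EU 5-stage maturity model: 1 information ... 5 targetisation/automation
  STAGES = 5

  # Hypothetical maturity stage reached by each surveyed business service
  services = {"VAT registration": 5, "company registration": 5,
              "social contributions": 4, "environmental permits": 5}

  sophistication = sum(s / STAGES for s in services.values()) / len(services)
  fully_online = sum(s >= 4 for s in services.values()) / len(services)

  print(f"sophistication: {sophistication:.0%}, "
        f"full online availability: {fully_online:.0%}")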
However, those bright results were marred by the further decline of Italy’s
competitiveness indicators, i.e., those used to measure how simple it is to set up and
operate a business in Italy and the level of PA service delivery satisfaction of the
country’s Small, Medium and Micro-sized Enterprises (SMMEs), which make up
more than 90 % of Italy’s business landscape.

In the World Economic Forum Global Competitiveness Index, Italy’s ranking
slipped from 48th in 2009 to 49th in 2013, while its Institutional Pillar
indicator, which is a more direct measurement of the PA’s role, fell from 97th in
2009 to 102nd in 2013. Conversely, Italy’s ranking in the World Bank Ease of Doing
Business
Index (EDBI) has improved, up from 74th in 2009 to 65th in 2014, even though its
“Starting a Business” and “Dealing with Construction Permits” indicators, which
are more closely aligned with the Italian SUAP’s activities, have deteriorated
significantly, falling, respectively, from 53rd and 83rd in 2009 to 90th and 112th
in
2014. That decline shows that the SUAP programme has not yet had the desired
effect of persuading Italy’s local governments to provide more effective services to
support competitiveness and economic growth [51]. Despite the continual tweaks to
the SUAP legislation, the PA service satisfaction level of the SMMEs has remained
consistently low over past years. A recent report [52] reveals that the satisfaction
level was 4.0 in 2013 on a scale of 1–10, versus 4.4 in 2009. The SMMEs’ opinion
of the PA services delivered in the past 3 years was equally negative, rated on a
scale of –5 to 5 at 0.6 in 2009 and –0.5 in 2013. Lastly, despite the
simplification achieved
by the online SUAP, it has failed to generate any administrative cost-saving benefits
for the SMMEs in their dealings with government. Indeed, these costs dented
SMME revenue significantly, accounting for 6.9 % in 2009 and 7.5 % in 2013 [52].
The only conclusion to be drawn from these results is that the SUAP programme
has failed completely in its mission to reduce the bureaucratic burdens on busi-
nesses. Moreover, the Italian SMMEs still consider bureaucracy and the weight of
the administrative burdens a major risk to survival, ranking it 8.5 on a scale of 1–10
[52].

6 Discussion

The One-Stop e-Government reference framework shown in Fig. 1 [11] can be used
to map the ideal ‘path’ to the status of full-fledged virtual service centre.
By making it mandatory for the municipalities to set up a SUAP, Law 447/1998
created the conditions for the transition from Administrative Organization
(AO) (Quadrant 1) to Service Center (SC) (Quadrant 2), while Law 133/2008 and
Law 160/2010 enabled its transition to Virtual Service Center (VSC) (Quadrant 4).
Nevertheless, as shown in Sect. 5 above, the efficacy of the SUAP programme
remains negligible if not zero, which raises the question of how and why Italy’s
actual route to a VSC has veered off the “ideal path” described by
Hogrebe, Kruse and Nüttgens [11].
It is only possible to make the transition from AO to SC by taking a user-centred
approach to the bundling of services and the simplification of procedures. In a
highly fragmented administrative context such as the Italian PA [53] it is necessary
to closely integrate/coordinate the bundling of services at the intra-organizational
level, i.e., all the offices involved in the delivery of a service, and at the
inter-organizational level, i.e., all the local agencies involved in the business
authorization process.

Fig. 1 One-stop e-government reference framework [11]

Integration and coordination of the various offices and
agencies is a further basic condition for simplifying procedures, given that the
SUAP deals with processes that are essentially inter-organizational workflows.
But, while Law 447/1998 clearly set out the requirements for intra- and
inter-organizational integration/coordination, the functioning of the PA as rated by
the SMMEs indicates that the basic organizational prerequisites of the SC model
were never fully complied with.
PromoPa [52] uses the efficiency gap indicator to measure changes in the level of
SMME satisfaction with a specific aspect and the degree of importance attributed to
it. It therefore identifies those aspects that the SMMEs rate the highest in terms of
expected efficiency gains and, therefore, those that need to be prioritized in terms of
corrective actions. The efficiency gap in the simplification of procedures and in the
synergic organization of the various offices and agencies in 2009 (a good 11 years
after the enactment of Law 447/1998) was, respectively, 66.9 and 63.7 %, and had
still failed to make much headway in 2013, when the same indicators stood at,
respectively, 60 and 59 % [52].
This disappointing result can be attributed to two related aspects. On the one
side, the fact that Law 447/1998 was not accompanied by any kind of reorgani-
zation support measure, which especially affected the small municipalities that
account for 75 % of the total. On the other, the fact that responsibility for
streamlining the procedures and guaranteeing the coordination of the various local
agencies involved was given to the SUAP but without giving it the authority needed
to ensure compliance.
The transition of the SUAP from SC to VSC called for by Law 133/2008 and
Law 160/2010 did nothing to change the state of affairs as it merely made it
obligatory to provide online access to the One-Stop Shop business services and for
the public agencies involved to use ICT to expedite their information flows.

The launch of the www.impresainungiorno.gov website unified the SUAP
front-office, thus complying with the requirement to establish the Single Point of
Contact (SPC) called for by European Directive 2006/123/EC. The website enables
the business owner to access any of the SUAPs registered on the online portal and
to submit their SCIA and all the annexed documents. Nevertheless, front-office
standardization left the organization of each SUAP’s back-office activities out in the
cold, failing to define either the inter-organizational workflows or the technological
solutions to use to manage the intra- and inter-organizational information flows. So
transforming the legally instituted SUAP into a fully connected digital unit has been
far less efficient and efficacious than it should have been. Obviously, this has
significantly curtailed the virtual SUAP’s role in helping to significantly simplify
business and government relations.
That situation is confirmed by the efficiency gap in the PA’s online services
(inclusive of the services delivered by the online SUAP) as rated by the SMMEs in
2013, which at 42.7 % remains high, improving only slightly from 2009 (44.1 %). An
even more negative result is the SMMEs’ rating of the efficacy of the latest SUAP
interventions (especially those aimed at the transition to a full VSC) that were
meant to simplify government-business relations. While the rating was fairly high
(6.2 on a scale
of 1–10) in 2011 (the SUAP’s launch year after the reform introduced by Law
160/2010) it had already retreated to 5.5 in 2012, falling even further in 2013, when
it sank to 4.3.

7 Conclusions and Implications

The administrative simplification issue has again come to the fore of Italy’s cultural
and political debate, mostly as an effect of the ongoing global financial crisis.
Companies are subject to extreme regulation and unrelenting controls that weigh
heavily on their costs and, thus, their income statements, stopping them from
investing in strategic and growth initiatives. The economic effects of better
regulation policies are similar to those of measures that reduce the fiscal drag,
but without creating the same public finance sustainability headaches. That said,
the desired result is equally dependent on the quality of the simplification
policies.
The article has mapped the journey of Italy’s municipal One-Stop Business
Services and Information Shops. The findings have built on the relevant literature to
demonstrate how the hurdles continue to thwart the delivery of services from under
one physical or virtual roof, despite the fact that well over a decade has passed since
the first law was enacted.
The One-Stop Shop can only succeed if it is built on the solid foundation of the
PA’s capacity to cultivate a culture of internal and inter-institutional cooperation
with all the external public agencies and offices involved and, thus, to guarantee the
user simpler administrative procedures, timely decisions and the ability to manage
the ‘checks and box-ticking’ side. This could easily have been done with a bit of
forward-thinking on the various coordination actions to put in place, including a
review and reengineering of the tasks carried out by each administrative branch and
government level, a review of the information systems, a redesign of the proce-
dures, and a rethinking of the methods used to connect and interact with the private
sector.
Paradoxically, but hardly surprisingly considering the approach to change
management that predominates in Italy [47, 54], the heftiest chunk of the funding
needed to drive change has been poured into producing legislation, with only a
small part invested in the other areas, such as the governance of the simplification
effort led by the SUAP, i.e., the system of coordination and control of the
inter-organizational processes. In other words, despite the highly fragmented,
grid-locked system, the focus was not on the deep causal roots of the problem but
on the more easy-to-tackle superficial aspects [55]. In fact, changing a sector’s
regulatory framework is the easiest part, whereas it is common knowledge that
digital government projects and initiatives are complex endeavours [56].
The authors are not denying the importance of the legislator’s role in change
management and, in fact, believe that legislation is the bedrock of change. No, what
they are saying is that legislation is only one side of the coin, and that it takes more
than just issuing laws to ensure the actual implementation of the desired change.
Which brings us to the question: What is the difference between regulatory change
and organizational change? Well, the first can be planned and is fairly immediate,
while the second can only be partially planned and, most of all, is often a long and
winding road [54, 57]. Basically, the crux of the government-One-Stop Shop issue
is its implementation, i.e., the strategies to pursue and the levers to press in order to
prime the system to make a significant change in its relational approach to SMMEs.
The evidence examined highlights a sometimes tortuous unravelling of decisions
and objectives, which partly changed along the way to accommodate the reactions
of the various stakeholders. It also reveals the constraints and opportunities that
emerged during implementation [58].
From the theoretical standpoint, the study confirms, first, the usefulness of the
framework developed by Hogrebe, Kruse and Nüttgens [11] for interpreting and
comparing the Italian scenario with the four ideal scenarios found in the extensive
international literature. Second, it helps to increase the body of common knowledge
on public organisations and their dealings with the environments in which they
operate, by shedding light on the processes associated with the delivery of One-Stop
Shops.
Support for the reflections developed here should be considered only tentative,
given the exploratory nature of this research. In essence, the route taken confirms
that the assessment of public reforms is worth exploring by the academic com-
munity of organization studies.
The paper is not without limitations. First, the fact that the evidence comes
entirely from Italy, which means that caution should be exercised before the
arguments presented here are generalized to other countries or contexts. A second
limitation is the article’s macro perspective, which does not document the virtuous
situations of the many municipalities that have fully complied with the law and set
up a virtual SUAP.

References

1. Bekkers, V.: The governance of back-office integration. Public Manag. Rev. 9, 377–400
(2007)
2. United Nations: E-Government Survey 2010. Department of Economic and Social Affairs,
New York (2010)
3. OECD: Italy: Reviving Growth and Productivity. OECD, Paris (2012)
4. Howard, C.: Rethinking Post-NPM Governance: The Bureaucratic struggle to implement
one-stop-shopping for government services in Alberta. Public Organ. Rev. 1–18 (2014)
5. Irani, Z., Love, P.E.D., Elliman, T., Jones, S., Themistocleous, M.: Evaluating e-government:
learning from the experiences of two UK local authorities. Inf. Syst. J. 15, 61–82 (2005)
6. Suppa, A., Zardini, A.: The Implementation of a performance management system in the
Italian army. In: Zhou, M. (ed.) Education and Management, Communications in Computer
and Information Science, vol. 210, pp. 139–146. Springer, New York (2011)
7. Wimmer, M.A.: A European perspective towards online one-stop government: the eGOV
project. Electron. Commer. Res. Appl. 1, 92–103 (2002)
8. Glassey, O.: Developing a one-stop government data model. Gov. Inf. Q. 21, 156–169 (2004)
9. Bannister, F.: E-government and administrative power: the one-stop-shop meets the turf war.
Electron. Gov., Int. J. 2, 160–176 (2005)
10. Gouscos, D., Kalikakis, M., Legal, M., Papadopoulou, S.: A general model of performance
and quality for one-stop e-Government service offerings. Gov. Inf. Q. 24, 860–885 (2007)
11. Hogrebe, F., Kruse, W., Nüttgens, M.: One-stop e-Government for small and medium-sized
enterprises (SME): A strategic approach and case study to implement the EU services
directive. Bled 2008 Conference. Bled, Slovenia (2008)
12. Dameri, R.P.: Defining an evaluation framework for digital city implementation. In: The
International Conference on Information Society (i-Society). London (2012)
13. Charih, M., Robert, J.: Government on-line in the federal government of Canada: The
organizational issues. Int. Rev. Admin. Sci. 70, 373–384 (2004)
14. West, D.M.: e-Government and the transformation of service delivery and citizen attitudes.
Publ. Adm. Rev. 64, 15–27 (2004)
15. Guijarro, L.: Interoperability frameworks and enterprise architectures in e-government
initiatives in Europe and the United States. Gov. Inf. Q. 24, 89–101 (2007)
16. Colarullo, F., Di Mascio, R., Virili, F.: Meccanismi di coordinamento nei SUAP (Sportelli
Unici per le Attività Produttive): il caso Enterprise. VII Workshop dei Docenti e dei
Ricercatori di Organizzazione Aziendale, Salerno (2006)
17. Vaast, E., Binz-Scharf, M.C.: Bringing change in government organizations: evolution
towards post-bureaucracy with web-based IT projects. In: International Conference on
Information Systems (ICIS). Paris (2008)
18. Spagnoletti, P., Za, S.: A design theory for e-Service Environments: The interoperability
challenge. In: Snene, M. (ed.) IESS 2012. Springer, New York (2012)
19. Ongaro, E.: Process management in the public sector: The experience of one-stop shops in
Italy. Int. J. Publ. Sect. Manag. 17, 81–107 (2004)
20. Kraemer, K., King, J.L.: Information technology and administrative reform: will e-government
be different? Int. J. Electron. Gov. Res. 2, 1–20 (2006)
21. Mele, V.: Explaining programmes for change: Electronic government policy in Italy
(1993-2003). Publ. Manag. Rev. 10, 21–49 (2008)
22. Leeuw, F.L., Leeuw, B.: Cyber society and digital policies: Challenges to evaluation?
Evaluation 18, 111–127 (2012)
23. Hansson, F., Norn, M.T., Vad, T.B.: Modernize the public sector through innovation? A
challenge for the role of applied social science and evaluation. Evaluation 20, 244–260 (2014)
24. Ricciardi, F., Rossignoli, C., De Marco, M.: Participatory networks for place safety and
livability: organisational success factors. Int. J. Networking Virtual Organ 13, 42–65 (2013)
25. Spagnoletti, P., Federici, T.: Exploring the Interplay between FLOSS adoption and
organizational innovation. Commun. Assoc. Inf. Syst. 29, 279–298 (2011)
26. Janssen, M., Kuk, G., Wagenaar, R.W.: A survey of Web-based business models for
e-government in the Netherlands. Gov. Inf. Q. 25, 202–220 (2008)
27. Kohlborn, T., Weiss, S., Poeppelbuss, J., Korthaus, A., Fielt, E.: Online service delivery
models—an international comparison in the public sector. In: Proceedings of the 21st
Australasian Conference on Information Systems (ACIS). Brisbane, Australia (2010)
28. Peters, C., Kohlborn, T., Korthaus, A., Fielt, E., Ramsden, A.: Service delivery in one-stop
government portals–observations based on a market research study in Queensland. In:
Proceedings of the 22nd Australasian Conference on Information Systems (ACIS). Brisbane,
Australia (2011)
29. Braccini, A.M., Spagnoletti, P.: Defining cooperative business models for inter-organizational
cooperation. Int. J. Electron. Commer. Stud. 3, 229–249 (2012)
30. Agranoff, R.: Human services integration: Past and present challenges in public
administration. Publ. Adm. Rev. 51, 533–542 (1991)
31. Milward, H.B., Provan, K.G.: Governing the hollow state. J. Publ. Adm. Res. Theor. 10, 359–
379 (2000)
32. Ho, A.T.K.: Reinventing local governments and the e-Government initiative. Publ. Adm. Rev.
62, 434–444 (2002)
33. Bellamy, C.: Transforming social security benefits administration for the twenty-first century:
Towards one-stop services and the client group principle? Publ. Adm. 74, 159–179 (1996)
34. Peters, B.G.: Managing horizontal government: The politics of co-ordination. Publ. Adm. 76,
295–311 (1998)
35. Wilkins, P.: Accountability and Joined-up government. Aust. J. Publ. Adm. 61, 114–119
(2002)
36. Pollitt, C.: Joined-up government: A survey. Polit. Stud. Rev. 1, 34–49 (2003)
37. Pollitt, C., Bouckaert, G.: Public Management Reform: A Comparative Analysis: New
Public Management, Governance, and the Neo-Weberian State, 3rd edn. Oxford University
Press, Oxford (2011)
38. Dunleavy, P., Margetts, H., Bastow, S., Tinkler, J.: New public management is dead. Long
live digital-era governance. J. Publ. Adm. Res. Theor. 16, 467–494 (2006)
39. Christensen, T., Lægreid, P.: Complexity and hybrid public administration: theoretical and
empirical challenges. Publ. Organ. Rev. 11, 1–17 (2010)
40. Christensen, T., Lægreid, P.: The whole-of-government approach to public sector reform.
Publ. Adm. Rev. 67, 1059–1066 (2007)
41. Howard, C., Langford, J.: The service state: Rhetoric, reality and promise, vol. 25. University
of Ottawa Press, Ottawa (2010)
42. Bouckaert, G., Peters, B.G., Verhoest, K.: The coordination of public sector organizations:
Shifting patterns of public management. Palgrave Macmillan, Basingstoke (2010)
43. Christensen, T.: Post-NPM and changing public governance. Meiji J. Polit. Sci. Econ. 1, 1–11
(2012)
44. McNulty, T., Ferlie, E.: Process transformation: Limitations to radical organizational change
within public service organizations. Organ. Stud. 25, 1389–1412 (2004)
45. Kuhlmann, S., Wollmann, H.: Introduction to Comparative Public Administration:
Administrative Systems and Reforms in Europe. Edward Elgar, Cheltenham (2014)
46. Yin, R.K., Davis, D.: Adding new dimensions to case study evaluations: The case of
evaluating comprehensive reforms. New Dir. Eval. 113, 75–94 (2006)
47. Zardini, A., Rossignoli, C., Mola, L., De Marco, M.: Developing municipal e-Government in
Italy: The city alfa case. In: Fifth International Conference on Exploring Services Science
(IESS 2014). Geneva (2014)
48. Caporarello, L., Viachka, A.: Individual readiness for change in the context of enterprise
resource planning system implementation. In: D’Atri, A., De Marco, M., Braccini, A.M.,
Cabiddu, F. (eds.) Management of the Interconnected World, pp. 89–96. Springer, New York
(2010)
49. Mattarella, B.G., Natalini, A. (eds.): La regolazione intelligente. Un bilancio critico delle
liberalizzazioni italiane. Passigli Editore, Bagno a Ripoli (2013)
50. Capgemini, IDC, Rand Europe, Sogeti, DTi: Digitizing public services in Europe: Putting
ambition into action. 9th benchmark measurement. Technical Report for the European
Commission (2010)
51. Castelnovo, W.: A country level evaluation of the impact of the e-government: the case of
Italy. In: Gil-Garcia, J.R. (ed.) E-Government success factors and measures: concepts,
theories, experiences, and practical recommendations. IGI Global, Hershey (2013)
52. PromoPA: Imprese e burocrazia. Come le piccole e micro imprese giudicano la Pubblica
amministrazione. Franco Angeli, Milano (2013)
53. Mola, L., Carugati, A.: Escaping ‘localisms’ in IT sourcing: tracing changes in institutional
logics in an Italian firm. Eur. J. Inf. Syst. 21, 388–403 (2010)
54. Sorrentino, M., De Marco, M.: Implementing e-government in hard times: When the past is
wildly at variance with the future. Inf. Polity 18, 331–342 (2013)
55. Battistelli, F.: Managerializzazione e retorica. In: Battistelli, F. (ed.): La cultura delle
amministrazioni pubbliche fra retorica e innovazione, pp. 23–45. Franco Angeli, Milano
(2002)
56. Luna-Reyes, L.F., Melloulib, S., Bertot, J.C.: Key factors and processes for digital government
success. Inf. Polity 18, 101–105 (2013)
57. Pennarola, F., Caporarello, L.: Enhanced class replay: Will this turn into better learning? In:
Wankel, C., Blessinger, P. (eds.) Increasing Student Engagement and Retention Using
Classroom Technologies: Classroom Response Systems and Mediated Discourse
Technologies, pp. 143–162. Emerald Group Publishing, Bingley (2013)
58. Sorrentino, M., Passerini, K.: Evaluating e-government initiatives: the role of formative
assessment during implementation. Electron. Gov. 9, 128–141 (2012)
The Determinants of IT Adoption
by SMEs: An Agenda for Research

Riccardo Spinelli

R. Spinelli (✉)
Department of Economics and Business Studies, University of Genoa, Genoa, Italy
e-mail: riccardo.spinelli@economia.unige.it

Abstract The determinants of IT adoption by small and medium-sized firms have
been widely investigated in literature. In this theoretical paper, we aim to collate the
vast list of IT adoption barriers and incentives which have been identified, to
explore areas which are well established and equally to highlight areas which are
underdeveloped or ignored in literature and could provide directions for future
research. The value of our work is that it combines perspectives from various
literature streams on the many determinants of the process of IT adoption in SMEs.
Furthermore, this process of combination yields a conceptual basis for further
research into IT adoption by SMEs, through the identification of under-scrutinized
research areas which could be addressed in further studies.

Keywords SMEs · IT adoption · Drivers · Inhibitors

1 Introduction

The study of the determinants of IT adoption by small and medium-sized enterprises
(SMEs) is a major topic in both the IT and small business literatures, as
confirmed by the large number of articles focused on that issue [1]. SMEs’
differences with respect to large companies—especially in terms of resource
constraints and the impact of the individual owner/manager exerting a high degree
of control in decision making [2]—suggest a specific approach to research, as these
influencing factors may be uncharacteristic of large firms and the extent of their
influence may be strongly correlated with firm size. The vast research which has
been done on the topic results in a «seemingly infinite list of e-commerce adoption
barriers [and, we add, ‘incentives’] in SMEs» [3, p. 9], that are not always fully
consistent with each other. In this paper we aim to encompass these factors into a
holistic framework, to explore areas which are well established and equally to
highlight areas which are underdeveloped or ignored in literature and which could
provide directions for future research.
In the next section, drivers and inhibitors are reported, organized into a
multi-dimensional framework. Then a discussion section follows, where reflections
about possible research developments are proposed. Finally, some conclusions
recap the main results of the study.

2 An Overall Framework for Adoption Factors

The drivers and inhibitors (hence ‘factors’) discussed in this paper emerge from a
broad analysis of a wide set of papers which address the issue of IT adoption in
SMEs. With respect to the body of literature considered, some limitations must be
noted. First, we only paid attention to factors whose relevance is corroborated by
empirical analysis. Second, IT is variably defined—in a wider or narrower way—by
different authors: we decided, as in [1], to adopt an inclusive approach in defining
IT as including Internet-based solutions (e-business, e-commerce, etc.), functional
(CAD, CAM, etc.) and integrated (EDI, ERP, CRM, etc.) applications, together
with hardware, software and communication devices. Third, we approached drivers
and inhibitors from a neutral point of view—calling them “factors” and avoiding an
a priori classification; as reported by [4], this can help address inconsistent results
regarding a given factor [5], which is subject to variability in the specific
setting of each study in terms of data collection methodology, country, type of
firm, interviewee, IT development level, etc. Finally, the terminological
discrepancy among
different authors has been overcome by gathering together factors which were
differently named in spite of a common meaning.
Several alternative solutions have been proposed in literature to organize the set
of adoption factors [1–4, 6–8]. We opted for an adapted TOE [9] framework,
which, in our view, allows us to keep the focus on the firm as the unit of analysis
and includes both internal and external determinants. Other widely accepted models
(such as the TAM/TPB [10, 11] or the UTAUT [12]) pay far greater attention to the
individual/user level of analysis, with a consequently narrower focus. Nevertheless,
we do not fully discard the user-based approach: we partially encompass it within
the organizational environment of the TOE framework, by paying specific attention
to the characteristics of the decision makers (SME’s owner manager or top man-
agement). Indeed, the role they directly play in IT decisions is greater in SMEs than
in larger firms [2]; in major companies, the IT function is usually more structured
and formalized within the organization and the impact of individuals (even of the
top hierarchical level) is far more mediated by organizational structures, formalized
routines and procedures [13].

The structure of the framework—adapted for our purposes—is the following:
• technological environment;
• organizational environment
– decision maker’s features
– firm structural features
• external environment
The different sections of the framework are now introduced.
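Purely as an organizing device, the taxonomy can be sketched as a nested structure;
the factor lists shown are a partial, illustrative subset anticipating the sections
that follow.

  # Adapted TOE framework used to organize the adoption factors
  FRAMEWORK = {
      "technological environment": [
          "relative advantage", "compatibility", "complexity",
          "trialability", "observability", "cost", "perceived reliability",
      ],
      "organizational environment": {
          "decision maker's features": [
              "age", "gender/provenance", "education",
              "IT knowledge", "commitment to IT investment",
          ],
          "firm structural features": [],  # introduced later in the paper
      },
      "external environment": [],  # introduced later in the paper
  }

  def factors(node):
      """Flatten the framework into a single list of factors."""
      if isinstance(node, dict):
          return [f for child in node.values() for f in factors(child)]
      return list(node)

  print(len(factors(FRAMEWORK)))  # -> 12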

2.1 Technological Environment

With respect to the technological environment, we first make reference to the
perceptual features of technology identified by [14] in the DOI framework: relative
advantage, compatibility, complexity, trialability and observability.
Relative advantage over alternative solutions is one of the most relevant factors
[4, 5, 15–17] and is usually intended as the perceived extent of the strategic, tactical
and monetary benefits derived by the firm through IT [18]. As expected, relative
advantage has the highest impact on adoption decisions in a vast set of studies in
different countries [19–23]. Compatibility too plays an important role in IT adoption
decisions at the firm level [24], especially if intended in the wider meaning of
consistency between the to-be-adopted technology, the present IT infrastructure and
the established processes and routines; when missing, it is often reported as one of
the strongest inhibitors of IT adoption [25, 26]. The perception of IT complexity is
associated with IT knowledge and skills in both top management and staff and is
often highlighted as a strong [27–29], but not necessarily the strongest, inhibitor of
IT adoption [30]. Trialability, in turn, is particularly critical in SMEs, where
uncertainty about the potential benefits, together with resource scarcity, tends to
restrain the push to innovation because of the fear of failure and of low-return
investments [31]. Observability, finally, has been widely analyzed in the literature
[21, 22], as the visibility of the benefits associated with new technologies heavily
drives their adoption.
Cost is another technology-related factor (not included in the DOI model)
which plays a major role in IT adoption in spite of the general fall in the price of
hardware, software and IT services. This is true in both "rich" and "poor" countries
[32], with a major stress in developing countries [33, 34], and both for firms
already investing in IT and for firms new to IT [5]. Besides the absolute cost level,
its perception by top management [19] is a relevant factor too; moreover, other
cost-related factors may restrain IT investment, such as the difficulty of estimating
the indirect costs associated with IT [18, 35], with staff training [7] and with the
temporary loss of productivity caused by changes in process design [36].
This heavy focus on costs negatively influences IT innovation in SMEs, as it puts
excessive pressure on the search for short-term returns [37].
A last technology-related factor influencing IT adoption is perceived reliability
in terms of security and privacy. Such concerns can inhibit IT adoption in SMEs:
they have particular influence on non-users or new users and tend to gradually
disappear as the firm gains experience with IT [5, 38].

2.2 Organizational Environment

Within the organizational environment, the characteristics of the decision-maker
can strongly influence extent and modes of IT adoption [13]. In SMEs, this role is
usually played by the owner-manager, «the prime information user and the key
decision-maker» [39, p. 44], or in the most structured cases by the top management.
Hereinafter we make reference to the top management, assuming that in many cases
it coincides with the owner-manager. With respect to the demographic features of
the top management, its average age is usually negatively correlated with IT
adoption [40, 41]; similarly, a heterogeneous top management in terms of gender
or geographical provenance is often associated with lower adoption rates, due to a
lower inclination towards IT adoption by female [42] and "ethnic" [43–45]
managers. Another set of top-management factors is related to education and professional
experience, both general and IT-focused. General education has a positive impact
[41], as does long work experience in the industry [46]. The positive impact of an
IT-specific education, however, is much stronger; several studies confirm its
positive influence on IT adoption [5, 20, 47–49], all finding that top management's
IT knowledge speeds up IT adoption, as it reduces the uncertainty, in terms
of both risks and benefits, associated with technological innovation. This consideration
leads us to the most cited factor, that is, top management commitment to IT
investment, expressed, for instance, in the priority given to IT projects [47].
Caldeira and Ward [50], in particular, find a significant positive influence of top
management's support on both the launch and the success of the IT integration process,
in line with the results, among others, of [47, 48, 51]. Symmetrically, lack of
commitment turns out to be among the most critical issues for SMEs aiming to
adopt IT [52, 53], together with other traits of the decision-makers' personality, such as
the attitude towards new technologies [54–56] and the propensity towards innovation
[16, 57].
Moving to firm-level organizational factors, a set of demographic features can be
introduced. The most important is certainly size, whose influence in IT adoption has
been widely investigated [17, 18, 55, 58–60]. The sign of the correlation is quite
ambiguous: on the one side, the endowment of (human, managerial, financial, etc.)
resources and the sophistication of needs, which both tend to grow with size,
should stimulate and support technological innovation [61, 62]; on the
other side, attributes positively connected with innovation, such
as organizational flexibility, speed of response to environmental changes, and the
capability of adaptation and reconfiguration, tend to be greater in smaller firms, as
bigger size also increases organizational rigidity [63]. As a result, the majority of
the analyzed papers find a positive correlation between size and IT adoption (see
among others [17, 59, 64–66]); nevertheless, other studies return a statistically
non-significant relationship [18, 38].
As for the industry of activity, Porter and Millar [67] found a significant
variability in the importance and role of IT, which is confirmed in our analysis even if
the literature does not always return consistent results. According to [68], British
SMEs in high-tech and knowledge-intensive industries show higher IT adoption
rates than other manufacturing or service firms, which in turn do not differ
significantly from each other, as also found in [59]. Other contributions, such as [42],
contradict this point and find higher adoption among service firms than among
manufacturing ones. Overall, the correlation between IT adoption and industry remains
under-investigated and poorly verified.
As regards the firm’s strategic orientation, an aggressive growth- [13] and
innovation-oriented [65] strategy is another strong driver for IT adoption: in a
hostile and complex environment, most active SMEs react by entering new markets,
creating new product/market combinations and pursuing technological leadership
thanks to IT [69]. The study by Raymond et al. [46], for instance,
finds a more intense adoption of e-business in firms that are innovative in terms of
market, product and technology. Wymer and Regan [5], in turn, find that propensity
to innovation is one of the three most important drivers of e-commerce adoption,
for both already-adopters and would-be-adopters. An innovation-focused strategy, in
general, is often associated with past experiences of adopting new technologies (not
necessarily IT), and several studies confirm that these experiences significantly
support IT adoption too [15, 38, 70]. Much less clear is the relationship between
competitive strategy and IT adoption: Bayo-Moriones and Lera-López [66] report
contrasting results of several studies, which seem to find evidence of a strategic
interest towards IT in both cost leadership and differentiation approaches.
With respect to the firm endowment of resources and competences [71], both [4]
and [19] note that the perception of its own resources and capabilities by the firm’s
management is a stronger driver for IT adoption than their absolute value: quite
often (see among others [19]) IT projects are abandoned due to a “perceived” and
prejudicial incompatibility with, for instance, the IT skills of the staff, without any
real test proving it. Among firm resources, available funds play an important role in
driving or inhibiting IT investment [5, 51]: SMEs usually fight against capital
shortage, which puts pressure on the investment selection process due to the
potential consequences on the firm’s overall financial stability of wrong or sub-
optimal investments [72]. IT investments, in particular, usually have medium- to
long-term return periods [7], which tend to discourage the top management of SMEs.
Human resources are the other resources which most influence the IT adoption process.
Great importance, in fact, is given in the literature to the firm's staff, that is, the
individuals who are asked to use the new systems, in shaping IT investment
policies. Igbaria et al. [73], for instance, relate better results in IT adoption with a
higher staff involvement, which in turn makes them feel part of the innovation
process and increases their motivation; moreover, human resources can be a major
source of suggestions about system improvements or the choice of the applications
to implement [48]. Active inclusion of staff in technology innovation processes
increases their acceptance of new systems, which is a main driver of success for
many IT-related projects [20, 54]; on the contrary, as confirmed among others by
[7] and [30], a negative approach is a powerful inhibitor for new systems imple-
mentation. The employees' propensity towards new technologies is also a direct
consequence of their IT-related competences: it is commonly accepted in the literature
that a lack of internal IT knowledge in managers, supervisors and final users alike
can slow down and even stop IT adoption in SMEs [20, 50, 52]. Many studies
support this view. Merthens et al. [15], in particular, highlight the importance of IT
skills for the whole staff, not limited to the more "technical" positions, as the
"non-technical" positions are those who use IT daily in their jobs.

2.3 External Environment

The external environment is firstly made up of the competitive environment, which
includes several subjects (competitors, providers, customers, etc.) capable of
heavily influencing IT adoption in SMEs.
Competitors can surely play a major role as the need to “keep up with them”
forces the firm to invest in technology, in order to be able to offer similar levels of
performance in comparable conditions of efficiency and effectiveness [20, 21, 74].
A positive correlation between perceived competitive pressure and propensity
towards IT investment is found, among others, in [5, 17].
Customers and suppliers too are important factors [75] because, when techno-
logically advanced, they can force SMEs to adopt specific technology to integrate
the supply chain [76]; on the contrary, when they are limited or late adopters they
can discourage any investment in IT, due to the fear of incompatibility or of limited
possibilities of use in a non-digitalized industry [26]. From this point of view, as
found by [4], the adoption of industry standards (for example, in data sharing and
transmission along the supply chain), usually driven or even imposed by major
firms [15], forces SMEs to conform and, at the same time, guarantees more certain
returns on their investment than in industries without a dominant standard.
IT suppliers and consultants also play an important role in the “IT education” of
SMEs which, as aforementioned, usually lack specific internal resources and also
the funds to adequately train them [7, 52]. Suppliers and consultants consequently
take part in the firm's learning process more actively and deeply than they do with
larger firms; this is particularly true for consultants, who support the top management
in overcoming the cognitive barriers which prevent them from investing in IT [47];
nevertheless, SMEs often avoid contact with these potential partners, due to concerns
about affordability [52], lack of the internal expertise needed to make an informed
choice, or lack of trust in outside sources of advice [77].
The very last element of the framework is the general environment, which
includes extra-competitive drivers that can heavily impact on SMEs’ propensity
towards IT adoption. We first consider the availability of a technological
infrastructure able to guarantee fast, safe and cheap access to the Internet; this is a major
issue in both “poor” and “rich” countries, but it is in developing countries where the
lack or the limitations of the infrastructure have the strongest impact on firms’
choices [78, 79]. Several papers return consistent results with respect to SMEs in
Northern [80] and Sub-Saharan Africa [30, 34, 81], the Middle East [51] and India
[82]; nevertheless, the scarcity of broadband connections outside major urban centers
may be a problem in developed countries too, as in Australia [48] or the United
States [83], where large distances limit the capillarity and performance of the
telecommunication infrastructure. Similarly, the regulatory framework for electronic
transactions is an important factor [79, 84]: uncertainty about the rules inhibits IT
adoption in developing [81] and developed countries [5] alike. IT adoption by
SMEs may sometimes be stimulated by governments in a "coercive" way, by
forcing firms to adopt standards or IT-based procedures if they want to transact
with the public administration [19, 85]. More generally, a strong
impulse is given by public incentive programs for IT adoption by SMEs, which
provide not only financial support but also information and education on new
technologies; however, while in some circumstances these programs are decisive in
guiding firms' choices [30, 86, 87], in others they do not have any significant
impact [20, 21].

3 Discussion

The literature review just presented offers several suggestions about those aspects of
IT adoption by SMEs which seem to have already been widely investigated and
those which deserve more study.
First of all, past research seems to have been strongly influenced by an IT-based
approach, both in methodology and object of analysis. Many studies, in fact, have
adopted models grounded in the IT literature (TRA, TAM/TPB) and applied them
to the relationship between information technology and users in SMEs; this
explains the large number of studies which, for instance, apply regression or
structural equation modelling to find significant correlations between a set of
technology- or user-related variables and the actual adoption of IT. These analyses are
certainly valuable and cast light on the adoption process at the individual level but,
in our opinion, may fail to fully explain IT adoption by SMEs when the unit of
analysis is the firm as a whole rather than the single user. As a consequence, this
research stream may be less promising if the analyst aims to trace IT adoption back
to a wide set of implementation determinants encompassing technological,
organizational and environmental factors.
We are also quite critical of those studies which try to correlate strategic
orientation and IT adoption. In this case, we identify a conceptual issue which, in
our opinion, undermines the approach: it assumes that IT adoption is a dependent
variable influenced by independent variables which are measurable items connoting
a specific strategy. From our point of view, IT adoption is part, not a consequence,
of any strategic choice and, consequently, a cause-effect correlation analysis may
not be appropriate.
On the contrary, an interesting area which in our view deserves attention lies
with the effect of industry-related factors. The analysis of the correlation between
industry and adoption returns vague results, but this may be due to a wrong
approach to the issue: in our opinion, the industry variable should not be entered
as a direct input in the regression, but rather as a moderator variable. In other words, it is
not so relevant to find differences in the adoption rate according to the industry of
the firms; it could be more interesting to study how the sectoral environment
(possibly) changes the sign and extent of the influence of the other technological,
organizational and environmental factors. We expect significant results from such a
study, which could also contribute to the creation of more tailored support programs
by public and private agencies for IT adoption in SMEs. Many SMEs, in fact, remain
dissatisfied with government business advice services, perceiving them as lacking in
value and as failing to understand their specific needs [88, 89].
Finally, and strictly connected with the above considerations, a field of
study we consider potentially fruitful concerns the results of the support
interventions intended to give impulse to IT adoption by SMEs. In fact, a pervasive
skepticism towards public support seems to emerge, due to the misalignment between
firms' needs and the implemented actions [72], which are often accused of being too
generic and not tailored enough to specific requests [90]. A proper investigation of
the actual effects of those programs is consequently needed; the critical issue, in our
opinion, is measuring their effectiveness not only in terms of the "quantity" of IT
adopted, but also in terms of actual effects on the performance and operational
routines of the firms which have benefited from the support.

4 Conclusions

In this paper we have tried to organize into an original framework the vast literature
which addresses the determinants of IT adoption by SMEs. The main
objective was to identify well-established research areas and, equally, to highlight
areas which are underdeveloped or ignored in the literature and which could provide
directions for future research.
Our results return a very composite set of factors which influence IT
adoption, and which can be traced back to three main areas: the technological,
organizational and institutional environment. As assumed, many of these factors have
already been widely explored in the literature and offer limited perspectives for further
research. On the contrary, other areas, especially those related to industry-based
factors and support programs, seem to be more promising, in particular if
addressed in novel ways from a conceptual and methodological point of view.
This final consideration represents, in our view, a stimulus for scholars who are
interested in the determinants of IT adoption by SMEs, as wide fields of study are
still waiting to be properly explored and could potentially lead to results important
for both researchers and practitioners.

References

1. Ghobakhloo, M., Sabouri, M.S., Hong, T.S., Zulkifli, N.: Information technology adoption in
small and medium-sized enterprises; an appraisal of two decades literature. Interdiscip. J. Res.
Bus. 1(7), 53–80 (2011)
2. Fillis, I., Johannson, U., Wagner, B.: Factors impacting on e-business adoption and
development in the smaller firm. Int. J. Entrep. Behav. Res. 10(3), 178–191 (2004)
3. Chitura, T., Mupemhi, S., Dube, T., Bolongkikit, J.: Barriers to electronic commerce adoption
in small and medium enterprises: a critical literature review. J. Internet Bank. Commer. 13(2),
1–14 (2008)
4. Alzougool, B., Kurnia, S.: Towards a better understanding of SMEs perception of electronic
commerce technology adoption. Interdiscip. J. Contemp. Res. Bus. 2(3), 9–37 (2010)
5. Wymer, S., Regan, E.: Factors influencing e-commerce adoption and use by small and
medium businesses. Electr. Mark. 15(4), 438–453 (2005)
6. Barba-Sánchez, V., Martínez-Ruiz, M., Jiménez-Zarco, A.-I.: Drivers, benefits and challenges
of ICT adoption by small and medium sized enterprises (SMEs): a literature review. Probl.
Perspect. Manag. 5(1), 103–114 (2007)
7. Nguyen, T.H.: Information technology adoption in SMEs: an integrated framework. Int.
J. Entrep. Behav. Res. 15(2), 162–186 (2009)
8. Awa, H.O., Nwibere, B.M., Inyang, B.J.: The uptake of electronic commerce by SMEs: a meta
theoretical framework expanding the determining constructs of TAM and TOE frameworks.
J. Global Bus. Technol. 6(1), 1–27 (2010)
9. Tornatzky, L.G., Fleischer, M.: The processes of technological innovation. Lexington Books,
Lexington (1990)
10. Davis, F.D.: Perceived usefulness, perceived ease of use, and user acceptance of information
technology. MIS Q. 13(3), 319–340 (1989)
11. Ajzen, I.: The theory of planned behavior. Organ. Behav. Hum. Decis. Process. 50(2), 179–
211 (1991)
12. Venkatesh, V., Morris, M.G., Davis, G.B., Davis, F.D.: User acceptance of information
technology: toward a unified view. MIS Q. 27(3), 425–478 (2003)
13. Bruque, S., Moyano, J.: Organisational determinants of information technology adoption and
implementation in SMEs: the case of family and cooperative firms. Technovation 27(5), 241–
253 (2007)
14. Rogers, E.M.: Diffusion of innovations. The Free Press, New York (1983)
15. Merthens, J., Cragg, P.B., Mills, A.M.: A model of internet adoption by SMEs. Inf. Manag.
39, 165–176 (2001)
16. Al-Qirim, N.A.: E-commerce adoption in small businesses: cases from New Zealand. J. Inf.
Technol. Case Appl. Res. 9(2), 28–57 (2007)
17. Kannabiran, G.: Enablers and inhibitors of advanced information technologies adoption by
SMEs. An empirical study of auto ancillaries in India. J. Enterp. Inf. Manag. 25(2), 186–209
(2012)
18. Love, P.E.D., Irani, Z., Standing, C., Lin, C., Burn, J.M.: The enigma of evaluation: benefits,
costs and risks of IT in Australian small-medium-sized enterprises. Inf. Manag. 42(7), 947–
964 (2005)
19. Kuan, K.K.Y., Chau, P.Y.K.: A perception-based model for EDI adoption in small businesses
using a technology-organization-environment framework. Inf. Manag. 38, 507–521 (2001)
20. Fink, D.: Guidelines for the successful adoption of information technology in small and
medium enterprises. Int. J. Inf. Manag. 18(4), 243–253 (1998)
21. Chong, S., Pervan, G.: Factors influencing the extent of deployment of electronic commerce
for small-and medium-sized enterprises. J. Electr. Commerce Organ. 5(1), 1–29 (2007)
22. Tan, K.S., Chong, S.C., Lin, B., Eze, U.C.: Internet-based ICT adoption: evidence from
Malaysian SMEs. Ind. Manag. Data Syst. 109(2), 224–244 (2009)
23. Ifinedo, P.: Internet/E-business technologies acceptance in Canada’s SMEs: an exploratory
investigation. Internet Res. 21(3), 255–281 (2011)
24. Gibbs, J.L., Kraemer, K.L.: A cross-country investigation of the determinants of scope of
e-commerce use: an institutional approach. Electr. Mark. 12(2), 124–137 (2004)
25. Chiarvesio, M., Di Maria, E., Micelli, S.: From local networks of SMEs to virtual districts?
Evidence from recent trends in Italy. Res. Policy 33(10), 1509–1528 (2004)
26. Kartiwi, M., MacGregor, R.C.: Electronic commerce adoption barriers in small to
medium-sized enterprises (SMEs) in developed and developing countries: a cross-country
comparison. J. Electr. Commerce Organ. 5(3), 35–51 (2007)
27. Riemenschneider, C.K., Harrison, D.A., Mykytyn, P.P.: Understanding IT adoption decisions
in small business: integrating current theories. Inf. Manag. 40(4), 269–285 (2003)
28. Jeon, B.N., Han, K.S., Lee, M.J.: Determining factors for the adoption of ebusiness: the case
of SMEs in Korea. Appl. Econ. 38(16), 1905–1916 (2006)
29. Chatzoglou, P.D., Vraimaki, E., Diamantidis, A., Sarigiannidis, L.: Computer acceptance in
Greek SMEs. J. Small Bus. Enterp. Dev. 17(1), 78–101 (2010)
30. Chiliya, N., Chikandiwa, C.K., Afolabi, B.: Factors affecting small micro medium enterprises’:
(SMMEs) adoption of e-commerce in the Eastern Cape Province of South Africa. Int. J. Bus.
Manag. 6(10), 28–36 (2011)
31. Kendall, J.D., Tung, L.L., Chua, K.H., Ng, C.H.D., Tan, S.M.: Receptivity of Singapore’s
SME to electronic commerce adoption. J. Strateg. Inf. Syst. 10(3), 223–242 (2001)
32. Matthews, P.: ICT assimilation and SME expansion. J. Int. Dev. 19, 817–827 (2007)
33. Thulani, D., Tofara, C., Langton, R.: Electronic commerce benefits and adoption barriers in
small and medium enterprises in Gweru, Zimbabwe. J. Internet Bank. Commerce 15(1), 1–17
(2010)
34. Olatokun, W., Kebonye, M.: E-commerce technology adoption by SMEs in Botswana. Int.
J. Emerg. Technol. Soc. 8(1), 42–56 (2010)
35. Cohen, S., Kallirroi, G.: E-commerce investments from an SME perspective: costs, benefits
and processes. Electr. J. Inf. Syst. Eval. 9(2), 45–56 (2006)
36. Love, P.E.D., Irani, Z.: An exploratory study of information technology evaluation and
benefits management practices of SMEs in the construction industry. Inf. Manag. 42(1), 227–
242 (2004)
37. Van Akkeren, J., Cavaye, A.: Factors affecting entry-level internet technology adoption by
small firms in Australia. J. Syst. Inf. Technol. 3(2), 33–47 (2000)
38. Dholakia, R.R., Kshetri, N.: Factors impacting the adoption of the internet among SMEs.
Small Bus. Econ. 23, 311–322 (2004)
39. Levy, M., Powell, P.: Strategies for growth in SMEs. The role of information and information
systems. Elsevier, Oxford (2005)
40. Hunter, K., Kemp, S.: The personality of e-commerce investors. J. Econ. Psychol. 25(4), 529–
537 (2004)
41. Chuang, T.-T., Nakatani, K., Zhou, D.: An exploratory study of the extent of information
technology adoption in SMEs: an application of upper Echelon theory. J. Enterp. Inf. Manag.
22(1/2), 183–196 (2009)
42. Hua, S.C., Rajesh, M.J., Theng, L.B.: Determinants of e-commerce adoption among small and
medium-sized enterprises in Malaysia. In: Thomas, B., Simmons, G. (eds.) E-commerce
adoption and small business in the global marketplace: tools for optimization, pp. 67–85.
Business Science Reference, Hershey (2010)
43. Foley, P., Ram, M.: The use of online technology by ethnic minority businesses: a
comparative study of the west midlands and UK. De Montfort University, Leicester (2002)
44. Beckinsale, M., Ram, M., Thedorakopoulos, N.: ICT adoption and ebusiness development:
understanding ICT adoption amongst ethnic minority business. Int. Small Bus. J. 29(3), 193–
219 (2011)
45. Middleton, K.L., Byus, K.: Information and communications technology adoption and use in
small and medium businesses. The influence of Hispanic ethnicity. Manag. Res. Rev. 34(1),
98–110 (2011)
46. Raymond, L., Bergeron, F., Blili, S.: The assimilation of e-business in manufacturing SMEs:
determinants and effects on growth and internationalization. Electr. Mark. 15(2), 106–118
(2005)
47. Wilson, H., Daniel, E., Davies, I.A.: The diffusion of e-commerce in UK SMEs. J. Mark.
Manag. 24(5–6), 489–516 (2008)
48. Scupola, A.: SMEs’ e-commerce adoption: perspectives from Denmark and Australia.
J. Enterp. Inf. Manag. 22(1/2), 152–166 (2009)
49. Chao, C.-A., Chandra, A.: Impact of owner’s knowledge of information technology: (IT) on
strategic alignment and IT adoption in US small firms. J. Small Bus. Enterp. Dev. 19(1), 114–
131 (2012)
50. Caldeira, M.M., Ward, J.M.: Using resource-based theory to interpret the successful adoption
and use of information systems and technology in manufacturing small and medium-sized
enterprises. Eur. J. Inf. Syst. 12(2), 127–141 (2003)
51. Elahi, S., Hassanzadeh, A.: A framework for evaluating electronic commerce adoption in
Iranian companies. Int. J. Inf. Manag. 29, 27–36 (2009)
52. Cragg, P., Zinatelli, N.: The evolution of information systems in small firms. Inf. Manag. 29
(1), 1–8 (1995)
53. Levy, M., Powell, P., Worral, L.: Strategic intent and e-business in SMEs: enablers and
inhibitors. Inf. Resour. Manag. J. 18(4), 1–20 (2005)
54. Davis, F.D.: User acceptance of information technology: system characteristics, user
perceptions and behavioral impacts. Int. J. Man Mach. Stud. 38(3), 475–487 (1993)
55. Premkumar, G.: A meta-analysis of research on information technology implementation in
small business. J. Organ. Comput. Electr. Commerce 13(2), 91–121 (2003)
56. To, M.L., Ngai, E.W.T.: The role of managerial attitudes in the adoption of technological
innovations: an application to B2C e-commerce. Int. J. Enterp. Inf. Syst. 3(2), 23–33 (2007)
57. Ghobakhloo, M., Arias-Aranda, D., Benitez-Amado, J.: Adoption of e-commerce applications
in SMEs. Ind. Manag. Data Syst. 111(8), 1238–1269 (2011)
58. Burke, K.: The impact of internet and ICT use among SME agribusiness growers and
producers. J. Small Bus. Entrep. 23(3), 173–194 (2010)
59. Bordonaba-Juste, V., Lucia-Palacios, L., Polo-Redondo, Y.: The influence of organizational
factors on e-business use: analysis of firm size. Market. Intell. Plan. 30(2), 212–229 (2012)
60. Higón, D.A.: The impact of ICT on innovation activities: evidence for UK SMEs. Int. Small
Bus. J. 30(6), 684–699 (2012)
61. Teo, T., Pian, Y.A.: Contingency perspective on internet adoption and competitive advantage.
Eur. J. Inf. Syst. 12(2), 78–92 (2003)
62. Hwang, H.S., Ku, C.Y., Yen, D.C., Cheng, C.C.: Critical factors influencing the adoption of
data warehouse technology: a study of the banking industry in Taiwan. Decis. Support Syst. 37
(1), 1–21 (2004)
63. Goode, S., Stevens, K.: An analysis of the business characteristics of adopters and
non-adopters of world wide web technology. Inf. Technol. Manag. 1(1), 129–154 (2000)
64. Buonanno, G., Faverio, P., Pigni, F., Ravarini, A., Sciuto, D., Tagliavini, M.: Factors affecting
ERP system adoption. A comparative analysis between SMEs and large companies.
J. Enterp. Inf. Manag. 18(4), 384–426 (2005)
65. Levenburg, N.M., Schwarz, T.V., Motwani, J.: Understanding adoption of internet
technologies among SMEs. J. Small Bus. Strateg. 16(1), 51–69 (2005)
66. Bayo-Moriones, A., Lera-López, F.: A firm-level analysis of determinants of ICT adoption in
Spain. Technovation 27(6), 352–366 (2007)
67. Porter, M.E., Millar, V.E.: How information gives you competitive advantage. Harvard Bus.
Rev. 63(4), 149–160 (1985)
68. Drew, S.: Strategic uses of e-commerce by SMEs in the East of England. Eur. Manag. J. 21(1),
79–88 (2003)
69. Özsomer, A., Calantone, R.J., Di Benedetto, A.: What makes firms more innovative? A look at
organizational and environmental factors. J. Bus. Ind. Market. 12(6), 400–416 (1997)
70. Oh, K.Y., Cruickshank, D., Anderson, A.R.: The adoption of e-trade innovations by Korean
small and medium sized firms. Technovation 29(2), 110–121 (2009)
71. Cragg, P., Caldeira, M., Ward, J.: Organizational information systems competences in small
and medium-sized enterprises. Inf. Manag. 48(8), 353–363 (2011)
72. Sarosa, S., Zowghi, D.: Strategy for adopting information technology for SMEs: experience in
adopting email within an indonesian furniture company. Electr. J. Inf. Syst. Eval. 6(2), 165–
176 (2003)
73. Igbaria, M., Zinatelli, N., Cragg, P., Cavaye, A.: Personal computing acceptance factors in
small firms: a structural equation model. MIS Q. 21(3), 279–305 (1997)
74. Pearson, J.M., Grandon, E.E.: An empirical study of factors that influence e-commerce
adoption/non-adoption in small and medium sized businesses. J. Internet Commerce 4(4), 1–
21 (2006)
75. Shih, H.: Contagion effects of electronic commerce diffusion: perspective from network
analysis of industrial structure. Technol. Forecast. Soc. Chang. 75(1), 78–90 (2008)
76. Oliveira, T., Martins, M.F.: Understanding e-business adoption across industries in european
countries. Ind. Manag. Data Syst. 110(9), 1337–1354 (2010)
77. Chapman, P., James-Moor, M., Szczygiel, M., Thompson, D.: Building internet capabilities in
SMEs. Logist. Inf. Manag. 13(6), 353–360 (2000)
78. Kshetri, N.: Barriers to e-commerce and competitive business models in developing countries:
a case study. Electron. Commer. Res. Appl. 6(4), 443–452 (2007)
79. OECD/ECLAC: Latin American Economic Outlook 2013: SME Policies for Structural
Change. OECD Publishing, Paris (2012)
80. Bentahar, Y., Namaci, L.: Identifying factors for the successful adoption of e-business by
SMEs in developing economies: the case of SMEs in Morocco. In: Proceedings of the 2010
World Conference of the International Council for Small Business, pp. 1–14 (2010)
81. Manuere, F., Gwangwava, E., Gutu, K.: Barriers to the adoption of ICT by SMEs in
Zimbabwe: an exploratory study in Chinhoyi District. Interdiscip. J. Contemp. Res. Bus. 4(6),
1142–1156 (2012)
82. Lal, K.: Determinants of the Adoption of E-Business Technologies. Telematics Inform. 22,
181–199 (2005)
83. Passerini, K., El Tarabishy, A., Patten, K.: Information technology for small business.
Managing the digital enterprise. Springer, New York (2012)
84. OECD: ICT, E-Business and SMEs, OECD Digital Economy Papers, 88. OECD Publishing,
Paris (2005)
85. Arduini, D., Nascia, L., Zanfei, A.: La diffusione delle ICT in Italia: determinanti a livello di
impresa e di sistema industriale. Economia e Politica Industriale 3, 177–192 (2006)
86. Chong, S.: Success in electronic commerce implementation. A cross-country study of small
and medium-sized enterprises. J. Enterp. Inf. Manag. 21(5), 468–492 (2008)
87. Al-Hudhaif, S., Alkubeyyer, A.: E-commerce adoption factors in Saudi Arabia. Int. J. Bus.
Manag. 6(9), 122–133 (2011)
88. Dyer, L.M., Ross, C.A.: Advising the small business client. Int. Small Bus. J. 25(2), 130–151
(2007)
89. Spinelli, R., Dyerson, R., Harindranath, G.: IT readiness in small firms. J. Small Bus.
Enterp. Dev. 20(4), 807–823 (2013)
90. Stockdale, R., Standing, C.: A classification model to support SME e-commerce adoption
initiatives. J. Small Bus. Enterp. Dev. 13(3), 381–394 (2006)
Technology Applied to the Cultural
Heritage Sector has not (yet) Exceeded
Our Humanity

Lucia Marchegiani and Gloria Rossi

Abstract Changes in traditional sectors, such as cultural heritage, have stemmed
from technological innovations, which have brought new opportunities for the
valorization of cultural heritage, as well as new competence requirements. With a
specific focus on cultural organizations, technology can provide greater efficiency in
the coordination of processes and facilitate the development of new activities that can
generate economic returns. Touristic guides have a prominent role in cultural
heritage valorization, as they contribute to delivering a full and comprehensive
experience to the visitors. Therefore, the guides' adoption and usage of technology
should have a direct impact on the valorization of cultural heritage through ICTs. In
this paper, we aim at identifying the sensemaking that each actor confers on the
technological innovations, and its impact on cultural heritage valorization.

Keywords Cultural heritage · Touristic guides · Technology acceptance ·
Sociomateriality

1 Introduction

Recent technological innovations and applications have impacted traditional
sectors, such as cultural heritage. They have brought about not only new
opportunities for the valorization of cultural heritage, but also unprecedented challenges
for the human resources working in the sector. Technological resources have raised
particular interest because they are considered capable of attracting a wider

L. Marchegiani (&) · G. Rossi
Roma Tre University, Rome, Italy
e-mail: lucia.marchegiani@uniroma3.it
G. Rossi
e-mail: gloria.rossi@uniroma3.it

audience. Multimedia applications are becoming increasingly prominent and
popular among the communication tools used in museums to help visitors know and
understand the exhibits. These applications include: screens of different sizes (often
touch screens); interactive installations and portable devices; virtual and augmented
reality, whose costs have become more affordable. Compared to traditional com-
munication tools (e.g. captions and text panels, guided tours), new technologies
offer the possibility of extending the methods of access and information for the
visitors, in terms of both quantity and quality. In addition to
text and images, video, sound, and interactive tools can be offered in simple, fast,
personalized and effective ways. Hence, the use of new technologies as an integral
element in the production of cultural offerings has gradually gained attention in the
literature on the economics of culture and cultural heritage management [1, 2].
Nevertheless, the old quote by A. Einstein, "It has become appallingly obvious
that our technology has exceeded our humanity", may not (yet) apply to cultural
heritage. In fact, this study presents the viewpoint of those who work in the cultural
heritage sector and who might have a prominent role in mediating the effect of
technology on the cultural experience of visitors.
The technological evolution has led to significant changes in contemporary
society and, consequently, the cultural policies have gradually promoted the use of
innovative technologies. This requires workers in this sector to develop techno-
logical competencies. Among the actors that work in the cultural heritage sector, we
focus on touristic guides. In fact, they constitute a bridge between the cultural
organizations, which own and manage a given cultural asset, and the audience. Thus,
they have a prominent role in cultural heritage valorization, as they contribute to
delivering a full and comprehensive experience to the visitors. Although they are not
integrated in the cultural organization, they belong to a network of relations, insti-
tutions, and actors that constitute a proper organizational ecosystem [3]. Their
competences are certified and they are engaged by individuals or groups of visitors
to enhance their cultural experience. Therefore, the technology adoption and usage
of the guides should have a direct impact on the valorization of cultural heritage
through ICTs.
With this piece of research we aim at contributing to this stream of research by
focusing on the process of role redefinition triggered by technological innovations
in the cultural field, and on the sensemaking that each actor confers on those
innovations. This paper presents a study and an evaluation of how
ICTs can encourage the enjoyment and enhancement of cultural heritage, in par-
ticular by analyzing the role that the guide takes with respect to the visitors’
experience.
The analysis of the results shows very polarized clusters of actors, as perceptions
of technologies vary from being very enthusiastic to very skeptical. Within this
scenario, our conclusions contribute both theoretically to the IS stream of research
related to the social and human implications of technology, and to the cultural
heritage studies focused on innovative cultural managerial practices.
2 Theoretical Background

From an Information Systems perspective, many scholars have dealt with the
impact of Information Technologies on museums, producing an extensive literature
[4–6]. Nevertheless, the extant body of knowledge appears focused on technical
issues, such as the design and usage of the information technologies in museums
and their functionalities. Indeed, a complete analysis should not neglect the social,
organizational, and behavioral aspects that affect the cultural workers as well as the
audience. Both theories and practices show that the success of ICT implementation
relies upon a synergy between the technical factors and other factors that require an
in-depth understanding of the organizational context and human behavior [7, 8].
With a specific focus on museums, technology can provide greater efficiency
in the coordination of processes and facilitate the development of new activities that
can generate economic returns [9]. New technologies, therefore, promote innova-
tive managerial practices, organization structures and activities, and most impor-
tantly allow the development of new forms of communication and interaction with
users. Nevertheless, the organizational and IS literature is still immature and some
research questions related to museums and new technology from a managerial and
organizational point of view remain unanswered. One path of research appears to
be particularly little trodden: ICT-enabled change management, and in particular
the redefinition of roles and capabilities within the system of cultural organizations.
Human actors adopt and use technologies in
multiple ways, and cultural actors may shape the implications of technologies as
they integrate them into everyday practice. Prolific streams of research have
flourished in the organizational and IS literature, dealing with the dual nature of
technology (e.g. [10]). Embracing a structuration approach, several authors have
emphasized that the usage and adoption of technology are linked to the context in
which such technologies are immersed, as well as to the social processes. In par-
ticular, IT is central in the structuration process [11], as IT is seen both as the result
of human actions in a specific social context and as a bundle of rules and resources
embedded in the human actions. Based on the same epistemology, the concept of
sociomateriality [12] has been developed to address the interconnections between
social and technical components and the so-called relational ontology [13, 14].
Materiality identifies the structural characteristics of technology that do not change
over space and time; users react to this materiality when they adopt the technology,
turning it from a mere artifact into a social object.
Although several streams of research have focused on the adoption and usage of
technology in any given cultural organization [15–17], to the best of our knowledge
little has been written on these issues from a sociomateriality perspective.
Moreover, as the development of inter-organizational networks can deploy inno-
vative ways to valorize the cultural heritage and to pursue sustainable managerial
models in the cultural fields [3, 18], we believe that technology adoption and usage
should be investigated from a network perspective. This implies including in the
analysis not only a focal cultural organization, but all the actors that have ties with
it. Following this approach, we focus on a group of actors that operate as bridges
between the cultural artifacts and the audience [19]. The research focus is
maintained on the guides' opinions and perceptions. Our research questions are the
following: how does each guide perceive changes in her role in the overall cultural
heritage system, with respect to a given technological innovation? And how does
she sense her contribution to the visitors' cultural experience?

3 Methodology

In order to answer such questions, we designed and conducted an online survey.
Before administering the survey, we carried out extensive exploratory
research. Several in-depth semi-structured interviews were completed with
relevant actors in the cultural heritage field, such as: Museums' Directors
(Sovraintendenti); Touristic Guides Associations; managers of museums located in
Rome (Italy); and managers from the Ministry of Cultural Heritage and Tourism. The
interviews confirmed our preliminary hypothesis that experts do not see the
technologies applied to cultural heritage as a limit for tourists. Rather, they think
that these technologies could emphasize the role of guides, making the cultural
experience more innovative and interactive. They tend to perceive them as means
through which museums can attract young people in particular.
Subsequently, we developed a questionnaire consisting of 28 questions, divided
into six blocks, each exploring a particular aspect of the research. The
target of this research is the entire population of qualified tourist guides
nationwide, which accounts for 17,000 units. The sample consists of 404
questionnaires; 22 % of the respondents are men (89 units) and 78 % are women (315
units). The majority of respondents are aged between 31 and 40 years (38 %) or
between 41 and 50 years (31 %). The average age is around 42.5 years, with a
minimum respondent age of 21 and a maximum of 77 years.
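A quick arithmetic check (ours, not part of the original chapter) confirms that the reported sample composition is internally consistent:

```python
# Consistency check of the sample figures reported above.
n, men, women = 404, 89, 315
assert men + women == n
print(f"men: {men / n:.0%}, women: {women / n:.0%}")  # men: 22%, women: 78%
```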

4 Discussion of the Results

68 % of the tour guides who responded to the questionnaire carry out their activities
in Rome, while 20 % work in other regions, the majority coming
from Tuscany, Sicily and Campania. Tourist guides pointed out that the most
requested periods are between May and June (36 %) and between September
and October (29 %), although for many it was difficult to identify only two
options, as the period actually highlighted by respondents is more extensive,
running from March until October. The two targets that most often request a guided
tour are young people (30 %) and foreign tourists (28 %).
After identifying the main characteristics of the reference sample and analyzing
the trend of demand for cultural tourism and the technological capabilities of the tour
guides, the analysis of the subsequent blocks allows us to answer the first
question of our research. Specifically, the third block of questions aimed to
investigate the relationship that the touristic guides have with technology and their
expectations regarding its possible use during visits. In particular, the question
"What is your relationship with technology?" investigated the guides' opinion on a
Likert scale from 1 to 5, in order to classify the respondents' general familiarity
with technology. The average response was around 3: respondents did not have a
bad relationship with technology, but neither are they early users of technological
innovations, probably because of the great age variance of the target reached by
this research. The next questions aimed at analyzing the specific expectations that
different guides have about the use of technology within a cultural visit. In
particular, respondents were asked whether the new technologies make the visit more
interactive, fascinating, exciting, educational, boring, superfluous, unreal or
distracting; the radar chart in Fig. 1 summarizes this aspect. An analysis of the
responses showed that only a minority of respondents consider the technology
applied to cultural visits boring, unnecessary, unrealistic or distracting (low
averages), while, on the other side, positive adjectives such as educational, fascinating
and exciting do not exceed the midpoint of the scale.
This is probably due to the fact that the phenomenon is still poorly understood
and not yet effectively developed in museums, and most guides have some concerns
about the use of technology applied to cultural heritage. Thus, although the
phenomenon may be attractive or considered interesting, the vast majority
of respondents also expressed positive feedback. The last question of this block
analyzed their opinion about the use of a technological device during a guided tour.
Even in this case, as in the previous one, the average valuations do not extend
towards extremely positive ratings, but the relatively low values on the negative
items indicate that the device does not overpower the art. Among the options
suggested, the one with the highest average rating describes the technological device
as a useful support to art.

Fig. 1 Expectations on the application of new technologies on cultural visits (radar chart of average ratings per adjective)
Fig. 2 Reasons why a tourist makes a multimedia tour of museums

The next block of questions investigates the knowledge and experience that
touristic guides have in the field of technology applied to cultural heritage, focusing
on augmented reality (Fig. 2). According to the touristic guides in the sample,
curiosity is the most cited (32 %) among the different motivations that drive a tourist
to use technological support during a visit, followed by the availability of an on-site
media device (22 %) and novelty (21 %).
Nowadays, the use of technology applied to cultural heritage is not seen as an
incentive tool which could substantially increase the demand for cultural tourism.
The following block of questions focuses more specifically on the different
expectations that tourist guides have about a possible relationship between technology,
art and knowledge.
The results (Fig. 3) show that although there is still a lot of skepticism about the
use of technology, people do not believe that it can actually have negative effects
on cultural visits. With respect to the effect that technology has on the professional
role of a tourist guide, in fact, different views emerged from

Fig. 3 Use of a technological device during a visit (average ratings per item)


Fig. 4 Position of the guides after the introduction of technology (bar chart of average ratings for: expanded skills; downsized role; confined role in supporting figure; enhanced professional role; rendered superfluous; reinvented role)

the analysis (Fig. 4). The majority of respondents (about 78 %) say that the
technology has not confined the role of the tour guide to that of a mere support
(average about 1.95).
At the same time, the results do not strongly indicate positive judgments about
the rise of a new professional tourist guide thanks to the use of augmented
reality. This is probably due to the difficulty of seeing an opportunity for
professional growth in the technology itself, and to the fact that technology applied to
cultural heritage is still little developed in Italy today, which can affect, positively
or negatively, the opinion of the respondents.
After studying the opinions and expectations of touristic guides about the use of
technology within cultural organizations, the goal of this research was to analyze
their degree of satisfaction with it. To do this, we ran a regression model with the
guides' overall satisfaction as the dependent variable. The independent variables
are: the guide's ability to capture attention; the ease of use of the technological
device/technology used; the cultural preparation of the visitors; the cultural,
professional and personal preparation of the guide; the museum's prestige; the
guide's skill in the use of technology; the technological competence of the visitors;
and the effective media sponsorship of the visit by the museum.
The regression analysis (Fig. 5) highlighted that the elements actually impacting
on satisfaction mainly concern the capacities and skills of the touristic guide, and
only one aspect of the technology, namely the ease of its use. This confirms what
emerged from the interviews: almost every guide had in fact indicated that the
cultural and professional preparation of the guide is the basic element on which to
build a good and efficient visit, since users perceive the use of technology as
"viable" only at marginal levels.
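The chapter does not report its estimation code. What follows is a minimal sketch, assuming survey responses in a pandas data frame with one column per independent variable and the overall satisfaction score as the response (all column and variable names are hypothetical), of the kind of forward stepwise OLS selection whose output is summarized in Fig. 5.

```python
# Hypothetical sketch of a forward stepwise OLS selection, loosely mirroring
# the analysis summarized in Fig. 5; data and column names are invented.
import pandas as pd
import statsmodels.api as sm

def forward_stepwise(df: pd.DataFrame, response: str, alpha: float = 0.05):
    """At each step, add the candidate predictor with the lowest p-value
    below alpha; stop when no remaining candidate qualifies."""
    remaining = [c for c in df.columns if c != response]
    selected = []
    while remaining:
        pvals = {}
        for cand in remaining:
            X = sm.add_constant(df[selected + [cand]])
            pvals[cand] = sm.OLS(df[response], X).fit().pvalues[cand]
        best = min(pvals, key=pvals.get)
        if pvals[best] >= alpha:
            break
        selected.append(best)
        remaining.remove(best)
    # Refit and return the final model on the selected predictors only.
    return sm.OLS(df[response], sm.add_constant(df[selected])).fit()

# Usage (hypothetical column names):
# model = forward_stepwise(survey_df, response="overall_satisfaction")
# print(model.summary())
```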
Response is SODD.OVERALL (overall satisfaction) on 8 predictors, with N = 404

Step                      1        2        3        4        5
Constant              2.0027   1.4108   1.1254   1.0494   0.9000

Ability of guide       0.513    0.385    0.285    0.255    0.245
  T-Value              14.66    10.75     7.49     6.75     6.50
  P-Value              0.000    0.000    0.000    0.000    0.000

Cult. prep. guide               0.267    0.223    0.163    0.155
  T-Value                        8.34     7.06     4.84     4.62
  P-Value                       0.000    0.000    0.000    0.000

Ease of use tech.                        0.222    0.196    0.159
  T-Value                                 6.10     5.44     3.99
  P-Value                                0.000    0.000    0.000

Cult. prep. visitors                              0.142    0.137
  T-Value                                          4.48     4.33
  P-Value                                         0.000    0.000

Museum's prestige                                          0.090
  T-Value                                                   2.20
  P-Value                                                  0.029

S                      0.573    0.530    0.508    0.496    0.494
R-Sq                   34.83    44.46    49.18    51.61    52.19
R-Sq(adj)              34.66    44.19    48.80    51.13    51.59
Mallows Cp             140.8     62.8     25.6      7.5      4.7

Fig. 5 Stepwise method, regression model (each predictor's coefficients are shown from the step at which it enters)
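For readers unfamiliar with the selection criterion in the last row of the table, Mallows' statistic for a sub-model estimating p coefficients (constant included) is, in its standard textbook form (the chapter itself does not state it):

\[ C_p = \frac{\mathrm{SSE}_p}{\hat{\sigma}^2} - n + 2p \]

where SSE_p is the residual sum of squares of the sub-model, \(\hat{\sigma}^2\) is the error variance estimated from the full model, and n is the sample size. Sub-models with C_p close to p are considered approximately unbiased; in Fig. 5, C_p falls from 140.8 to 4.7 as predictors enter, and at step 5 the model estimates p = 6 coefficients (five predictors plus the constant), so C_p = 4.7, being close to and below 6, signals a reasonable stopping point.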

5 Conclusions

It is possible to draw several conclusions from our results. Many figures are in
line with what had already been expressed by some guides or by industry experts
in the exploratory phase of our analysis, whilst others, which we might call
counter-intuitive, can be used to infer innovative insights and contribute to this field
of research. Our sample was rather heterogeneous, covering a wide age
range of 21–77 years. Respondents also have different working experience: some have
worked in this field for only a year, others for more than 45 years. This makes
the analysis more representative of the Italian tourism market. A first difficulty
in the use and appreciation of technology applied to art probably lies in the
generation gap, with experienced guides refusing to give up the traditional
methods they have used for a long period of time. At the same time, a large part of the
sample showed adequate familiarity with technology, probably because the
composition of the sample included a majority of young guides.
The use of innovative technology applied to cultural heritage is still a poorly
developed phenomenon in Italy, so most of the guides still report many doubts
about its actual use in the field. Technology is not seen as a negative, unreal, or
distracting element which may affect art in its essence and beauty. On the
contrary, if properly exploited in a museum, it can be considered a useful support
for audience engagement. It may actually help guides connect with youngsters,
who tend to see the museum as synonymous with "boredom" and "inaccessibility".
70 % of respondents said they were aware of augmented reality, but only 34 %
of the guides had already had the chance to experience a visit with this instrument or
had really taken advantage of this technology. This confirms that the phenomenon
is not yet widespread in Italy, with the exception of some museums in large cities
like Rome, Florence and Venice. Unfortunately, museum technology is not
considered an element that could attract tourists to more numerous and more frequent
cultural visits. Other information emerged from the exploratory phase with respect
to the possible change in the professional role of the touristic guides following the
introduction of technology in cultural organizations [9, 20]. The majority of
respondents did not express very positive opinions about the creation of a new
professional role, as had instead been claimed by industry experts, but they
definitely confirmed that the technology has neither rendered the role superfluous nor
confined it to a mere supporting figure [21, 22]. Many respondents claim that
tourists favor a cultural experience in which the guide, interacting with technology,
can create a mix of knowledge, experience and emotions that make the visit
“memorable”. A very useful result is provided by the relation between the overall
satisfaction of the guides and the elements that make a visit effective and efficient.
The elements that impact on the effectiveness of the visit, making it memorable,
are mainly the ability of the guides to capture the attention of the tourist, their
cultural background, and the ease of use of the technological device or the
technology itself. Technology could truly be a means of enhancing cultural heritage:
although little exploited so far, it could and should primarily be seen as a strategic
asset that will create development [7].

References

1. Kalay, Y., Kvan, T., Affleck, J. (eds.): New heritage: New media and cultural heritage.
Routledge (2007)
2. Corradini, E., Campanella, L.: The multimedia technologies and the new realities for
knowledge networking and valorisation of scientific cultural heritage. The role of the Italian
University Museums network. In: Marchegiani, L. (ed.): Proceedings of the International
Conference on Sustainable Cultural Heritage Management. Societies, Institutions, and
Networks, pp. 283–297. ROMA: Aracne (2013)
3. Salvemini, S., Soda, G.: Artwork and Network. Reti Organizzative e Alleanze per lo Sviluppo
dell’industria Culturale, Egea (2001)
4. Keene, S.: Becoming digital. Museum Management and Curatorship, vol. 15, no. 3, pp. 299–
313, Taylor & Francis, Singapore (1996)
5. Ippoliti E., Meschini A.: Media digitali per il godimento dei beni culturali, in Disegnarecon,
vol. 4, no. 8 (2011)
6. Morrissey, K., Worts, D.: A place for the muses? Negotiating the role of technology in
museums. In: Thomas, S., Mintz, A. (eds.) The Virtual and the Real: Media in the Museum
(1998)
7. Markus, M.L., Robey, D.: Information Technology and Organizational Change: Causal
Structure in Theory and Research, Management Science (1988)
8. Marty, P. F.: The changing nature of information work in museums. J. Am. Soc. Inform. Sci.
Technol. 58(1) (2007)
9. Marchegiani, L. (ed.): Proceedings of the International Conference on Sustainable Cultural
Heritage Management. Societies, Institutions, and Networks. ROMA: Aracne (2013)
10. Orlikowski, W.J.: Using technology and constituting structures: a practice lens for studying
technology in organizations. Organ. Sci. 11(4) (2000)
11. Orlikowski, W.J., Robey, D.: Information technology and the structuring of organizations. Inf.
syst. Res. 2(2), 143–169 (1991)
12. Leonardi, P.: Theoretical foundations for the study of sociomateriality. Inf. Organ. 23(2), 59–
76 (2013)
13. Leonardi, P.M., Barley, S.R.: What’s under construction here? social action, materiality and
power in constructivist studies of technology and organizing. Acad. Manag. Ann. (2010)
14. Orlikowski, W.J., Scott, S.V.: Sociomateriality: challenging the separation of technology, work and organization. Acad. Manag. Ann. (2008)
15. Sepe, M., Di Trapani, G.: Cultural tourism and creative re-generation: two case studies. Int. J. Culture Tourism Hospitality Res. 4(3), 214–227 (2010)
16. Sher, P.J., Lee, V.C.: Information Technology as a facilitator for enhancing dynamic
capabilities through knowledge management. Inf. Manag. 41(8), 933–945 (2004)
17. Sparacino, F.: The Museum Wearable: real-time sensor-driven understanding of visitors’
interests for personalized visually-augmented museum experiences. In: Proceedings of:
Museums and the Web (2002)
18. Dubini P., De Carlo M.: Integrating Heritage Management and Tourism at Italian Cultural
Destinations, Int. J. Arts Manag. 12(2), (2010)
19. Bagdadli S., Dubini P., Sillano M.T., Landini R., Mazza C., Tortoriello M.: Nuove
professionalità: progettisti per lo sviluppo di sistemi culturali integrati, Rapporto di Ricerca,
CRORA – Università Bocconi (2000)
20. Venkatesh, V., Davis, F.: A theoretical extension of the technology acceptance model: four longitudinal field studies. Manag. Sci. 46(2) (2000)
21. Child J., Mcgrath R.G.: Organizations unfettered: organizational form in an
information-intensive economy. Acad. Manag. J. 44(6) (2001)
22. Fahy A.: Musei d’arte e tecnologie dell’informazione e della comunicazione. In: Bodo S. (ed.)
Il museo relazionale. Riflessioni ed esperienze europee, Torino, Fondazione Giovanni Agnelli
(2003)
The Impact of the Implementation
of the Electronic Medical Record
in an Italian University Hospital

Alessandro Zardini, Cecilia Rossignoli and Bettina Campedelli

Abstract In recent years the use of information and communication technology (ICT) has become a leading driver of managerial reform in the public sector [1], and in particular in the healthcare system [2]. The Electronic Medical Record (EMR) is one of the most studied ICT systems in the healthcare management literature. Using the Zakaria et al. model [3], in this study we investigate the implementation of a core element of the EMR in a university hospital, whose deployment is expected to spur internal efficiency and pave the way for extending its principles to other departments and/or hospitals. The paper then analyses the organizational impacts of EMRs on the healthcare provider's structure.

Keywords: Electronic medical records · Case study · Electronic health records · EMR impact · EHR impact

1 Introduction

In recent years the use of information and communication technology has become a leading driver of managerial reform in the public sector [1], and in particular in the healthcare system [2]. Over the last three years, the Electronic Medical Record (EMR) has been one of the most studied ICT systems in the healthcare management literature. However, there is no unique definition of EMR: it depends on the healthcare system, and it is thus quite different from country to country. Several researchers [4–7] highlight the negative impact of the EMR on the American healthcare system. Sinsky et al. [8, p. 728] emphasized
these concerns when they wrote that: “after a decade of growth in the use of EHRs (Electronic Health Records) that has been both promising and painful, we believe it is time to step back and develop principles for their design, implementation, and regulation that support higher value primary care”. Unfortunately, the authors identified only general principles, which are not very useful because US hospitals are competitors and do not want to share patient information. Hence, in the USA it is not easy to develop a shared EMR.
In Italy the situation is completely different because there is a public healthcare system: hospitals are public and are not in competition, but there are other issues. Nowadays every region defines its own EMR principles, so (in theory) there are 21 different EMR systems. Moreover, only a few hospitals have implemented or are implementing the Electronic Medical Record.
In order to understand the main principles at stake, in this paper we used the Zakaria et al. [3] model, re-elaborated by Buntin et al. [2], and we identified and categorized the positive impacts and the critical factors generated by the implementation of the Electronic Medical Record in a general medicine department of an Italian university hospital.
Hence, the paper aims to respond to the following research questions: What are the positive impacts and the critical factors of introducing an EMR in a general medicine department? What factors influence the implementation process?
In the first part we present the literature review; we then illustrate the research methodology and approach. The paper goes on to analyse the introduction of the EMR in an Italian university hospital and to evaluate its impact on the hospital's organisation, and closes with the authors' conclusions.

2 The Theoretical Background

Over the past few years, Information Technology (IT) has become a leading driver
of managerial reform in the public sector [1] and in particular in the healthcare
system [2]. Technology is reshaping organizations by blending their Information
Systems with rapidly advancing information and communication technology [9,
10], and it is becoming a catalyst for economic growth [2].
Hence, private-sector companies deploy ICT solutions to optimise organisational
performance precisely because of their potential to reduce transaction and agency
costs (principal–agent issues), but also to rationalise their business processes [11,
12]. The introduction of ICT to the public sector is expected to produce similar
results [10]. These are highlighted by Smith et al. [13, p. 491], who write that “the
impact of Electronic Medical Records sophistication on financial performance
indicate that EMR sophistication is associated with improved revenue cycle man-
agement, and increased ‘Days Cash on Hand’ (DCOH)”.
On the other hand, some academics [1, 4] identified that for the majority of
practices, the return on investment of the EMR was negative, particularly for
smaller practices. Dey et al. [6, p. 90] reinforce the previous thesis, saying that:
“Simply incentivising health care service providers to move up the stages of EMR
capability may not lead to the realization of the potential benefits of the higher
stages of EMR capability. The practical implication of this finding is that health
care service providers need to assess whether their choice of a stage of EMR
capability is commensurate with their idiosyncratic technological, organizational,
and environmental contexts characteristics before committing to a stage of EMR
capability”. Hyman [7] emphasizes these concerns in a paper titled: “The Day the
EHR Died”.
Unlike the previous authors, Bardhan and Thouin [14, p. 442] argue that
‘spending on health IT does matter … and it is important to measure quality
outcomes at the process level, and not only at an aggregate institutional level’. The
authors conclude by saying that the adoption of EMR within US hospitals generates
benefits for both patients and clinics.
As underscored by Hannan [15], the medical record should be the main
‘repository’ of the patient’s medical information, as it not only supports clinical
decisions, but is also a useful tool for other healthcare-related services (adminis-
trative, insurance, quality, epidemiology and so forth). As a result of the close
relationship between medical decisional processes, data accumulation, healthcare
costs and the quality of the health service [16], the quality of clinical treatment, the
efficiency of the health service and the health of citizens call for a medical record
that is an effective decisional-support tool [15, 17]. The EMR is such a tool [18]
because it enables immediate access to encoded and standardised patient infor-
mation and ‘more active decision support’ [19, p. 3] through the alerting, inter-
pretation, assisting, critiquing, diagnosing and management functions [15, 18].
All these benefits are summarized by Shaw [16, p. 200], who re-elaborated the Schoen et al. [20] model and defines the EMR core features as: “the electronic
ordering of tests, electronic access to patients’ test results, electronic prescribing of
medication, electronic alerts for drug interaction, and the electronic entry of clinical
notes. Beyond these core capabilities, physicians may extend features by per-
forming searches on their patient population, creating templates to speed their entry
of notes, set reminders for medical tests, and ensure that non-electronic data are
scanned and linked electronically to the patient record”.
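To make these core features concrete, the following minimal sketch shows how a single EMR entry might bundle them. It is purely illustrative: the class and field names are our own assumptions rather than part of any specific EMR product, and the drug-interaction table is a toy stand-in for a real pharmacological knowledge base.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class EMRRecord:
    """Illustrative patient record bundling the core EMR features listed above."""
    patient_id: str
    test_orders: List[str] = field(default_factory=list)        # electronic ordering of tests
    test_results: Dict[str, str] = field(default_factory=dict)  # electronic access to test results
    prescriptions: List[str] = field(default_factory=list)      # electronic prescribing of medication
    clinical_notes: List[str] = field(default_factory=list)     # electronic entry of clinical notes

    # Toy interaction table (hypothetical) standing in for the electronic drug-alert knowledge base.
    INTERACTIONS = {frozenset({"warfarin", "aspirin"})}

    def prescribe(self, drug: str) -> None:
        """Add a prescription, emitting an electronic alert on known interactions."""
        for other in self.prescriptions:
            if frozenset({drug, other}) in self.INTERACTIONS:
                print(f"ALERT: possible interaction between {drug} and {other}")
        self.prescriptions.append(drug)

record = EMRRecord("patient-001")
record.prescribe("warfarin")
record.prescribe("aspirin")  # triggers the interaction alert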
Another important point is that in the literature there is no unique definition of Electronic Medical Records: it depends on the national healthcare system model. A further issue is that the EMR and the EHR are sometimes considered interchangeable terms [21] comprising all the previous conceptualizations [22]; in fact, “other similar interpretations exist, albeit with a sometimes slightly restricted focus” [23, p. 1]. In this paper, however, we cannot interchange these two terms, because in the Italian healthcare system they are different.
In this way, we can define EMR as ‘computerized medical information systems
that collect, store and display patient information [24]. They are a means to create
legible and organized recordings and to access clinical information about individual
patients’ [21, p. 129]. They provide an effective, active decisional-support system, whether the decisions regard healthcare or management [15, 18, 19, 25]. A hospital
organisation can expect EMRs to generate key benefits, including enhanced quality
of healthcare, reduction in clinical errors and gains in organisational efficiency,
thanks to lower management costs [15, 19, 25]. Hunt et al.’s [26, p. 1339] review of
the main studies on the information systems that support clinical decisions indicates
that EMRs have increased the clinical performance of ‘drug dosing, preventive care,
and other aspects of medical care’. Further, in their study of the cost/benefits of
EMR for primary healthcare providers, Wang et al. [24, p. 397] note that EMR
adoption has ‘a positive financial return on investment to the health care
organization’.
McDonald [18] reports many cases in which the EMR has enabled healthcare organisations to reap significant rewards as a result of its positive impact on both physician behaviour and healthcare processes. The two main effects of the EMR
identified by the literature review carried out by Hayrinen et al. [22] are, first,
personal—that is, changes in clinical procedures and document management,
improved decisional processes (although the timing remains the same) and the
potential access of patients to their personal records—and, second, organisational—
that is, the effects of an IT system on the communication and cooperation of the
various stakeholders, in particular, document accessibility and the possibility to
re-examine clinical information [27]. The enhanced quality of patient healthcare is a
further important organisational effect.
According to Zakaria et al. [3] and Buntin et al. [2], the success or failure of projects that introduce the EMR and decisional-support systems depends on many factors [28]. These key factors can be divided into three categories: the organizational challenge, the human/people challenge, and the technical/technological challenge. In the first category, the authors consider the organizational costs associated with planning, specifying requirements, customizing and re-customizing systems, training providers, and reengineering the delivery of healthcare systems to accommodate hospitals; they also include the concepts of organizational culture and resistance towards the usage of ICT. The second category covers the skills and expertise of the employees who use the new technology, because organizations that fail to manage their present staff stand little chance of obtaining and retaining outstanding individuals [3]. In the last category, ICT and in particular the EMR can enhance healthcare services electronically where barriers like time, distance and space no longer matter [3]; moreover, it helps the physician community to share patient information and supports them in making the right decisions.

3 Case Study

The Alfa university hospital is one of the largest healthcare providers and is
composed of two facilities. The two facilities combined treat an average of 60,000
inpatients per year, 10,000 of whom come from other Italian regions. Daily
admittances total 1,300 for ordinary stays and approximately 400 for day hospitals.
The goal is to automate and computerise the most important organisational
processes, the number and complexity of which are far higher than in most other healthcare providers [5].
The EMR is one of the projects currently being developed and implemented by Alfa. The EMR is one of the main components of the Electronic Health Record (EHR): it is the repository for all the internal information generated by the hospital's individual organisational units. Thanks to the Gekos system, hospital physicians are able to view a wide range of data, such as laboratory test values, X-ray images, CT scans, old documents and other patient data. However, they are not able to insert, modify or delete data.
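The view-only policy just described can be pictured with a minimal sketch. The class below is a hypothetical illustration of the behaviour reported above, not the actual Gekos interface, whose internals are not documented here.

class ReadOnlyEMRView:
    """Hypothetical read-only view over patient records: consultation is
    allowed, while insert/modify/delete operations are rejected."""

    def __init__(self, records: dict):
        self._records = records

    def view(self, patient_id: str) -> dict:
        # Consultation (lab values, imaging, old documents) is permitted.
        return self._records[patient_id]

    def update(self, patient_id: str, data: dict) -> None:
        # Any write operation is refused by design.
        raise PermissionError("this view is read-only: no insert, modify or delete")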

4 Methodology and Method

The study uses a qualitative approach to respond to the research questions. In particular, the case study method [29, 30] enables the object of analysis to be
investigated in its natural state by taking into account multiple dimensions that are
difficult to analyse using a quantitative approach [31]. According to Darke et al.
[32, p. 274] ‘case study in research is useful in newer less well-developed research
areas particularly where examination of the context and the dynamics of a situation
are important’.
The case addressed in this paper began with an analysis of the Alfa hospital
during the EMR analysis and implementation phase. Two main reasons led the
authors to select Alfa as their case study. First, this hospital case is particularly
insightful for research into EMR adoption and use because it involves an
e-government tool used by highly complex public healthcare providers [33].
Further, the Alfa hospital has two different, highly structured organisational identities (university and healthcare) that, while integrated, have specific, composite natures. Second, the authors were given direct access to the data.
The case study was conducted according to the methods and instructions sug-
gested by Yin [31]. This entailed gathering data through semi-structured interviews,
direct observance and document research. The interviews and the internal docu-
mentation were used as the testing sources. Privileged access to the relevant
information enabled the authors to collect data from several sources, increasing the
quality of the information obtained [34].
The case was analysed using the results of the 11 semi-structured interviews
(each of approximately 40 min duration) held with the hospital staff and designed to
enable the respondents to answer freely, in their own words. Each interview was
attended by two researchers, used the protocol presented by Arksey and Knight [35,
pp. 74–75] and was tape-recorded. The respondents consisted of two managers
from the Alfa healthcare management, five medical physicians, one practicing
doctor, two ward nurses, and one nurse coordinator, all of whom work in the two
hospital facilities.
The data and results obtained were presented to the main organisational actors
and the board of directors of Alfa hospital through the interview transcriptions and
the interim results of the data-collection phase. The authors used Atlas.ti Computer
Assisted Qualitative Data Analysis Software (CAQDAS) to analyse the data
because it enables organisation and summarisation by concept (for example,
improved collaboration, system adequacy and error reduction). Data collection
commenced in November 2013 and continued for approximately four months. The
analysis and integration of the existing data began in April 2014.
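As an illustration of the concept-based summarisation performed with the CAQDAS tool, the sketch below shows how code frequencies and respondent counts of the kind reported in Table 1 can be tabulated; the coded segments are hypothetical, and in the study itself this step was carried out within Atlas.ti.

from collections import Counter

# Hypothetical (respondent_id, code) pairs, as coded quotations might be
# exported from a CAQDAS tool such as Atlas.ti.
codings = [
    (1, "reduction of errors"), (1, "system slowness"),
    (2, "reduction of errors"), (2, "lack of leadership"),
    (3, "reduction of errors"), (3, "system slowness"),
]

code_frequency = Counter(code for _, code in codings)
respondent_count = {code: len({r for r, c in codings if c == code})
                    for code in code_frequency}

for code, freq in code_frequency.most_common():
    print(f"{code:25s} frequency={freq:2d} respondents={respondent_count[code]}")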

5 Data Analysis and Discussion

As mentioned earlier, in this paper we analysed the impact of the EMR using the model presented by Zakaria et al. [3], as re-elaborated and improved by Buntin et al. [2]. In Table 1 we summarize the main factors (nine codes) that we found during the data analysis, categorized into the three categories, or challenge types, proposed by the previous authors [2, 3]. Some of these codes are reported in the literature, and they influence the impact of the introduction of a new Electronic Medical Record system.
In particular, in the organizational challenge category there are five codes, two of which had a positive impact on the organization (reduction of errors and knowledge sharing), whereas the other three had a negative impact on it.
An important aspect identified by the analysis is the perception among the respondents (10 of 11) of a significant reduction in errors compared with the past. The interviewees recounted how the former paper-based procedure was more prone to errors (imprecise requests, imprecise or unreadable medical reports, potential misunderstandings and the illegibility of handwritten notes). Today, the higher level of uniformity and integration of procedures enabled by the standardisation introduced by the computerised routines has resulted in efficiency gains and reduced organisational errors and redundancies.

Table 1 The main codes categorized with the Zakaria et al. model

Challenge type             Code                                   Code frequency   Number of respondents
Organizational             Reduction of errors                    25               10
                           Increase of low value-added work       19                9
                           Increasing size of bureaucracy         16               10
                           Limited capacity to manage processes   15                8
                           Knowledge sharing                      12                7
Technical/Technological    System inadequacy (ineffectiveness)    34               11
                           System slowness                        23                9
People                     Better cooperation and coordination    19                9
                           Lack of leadership                     13                8

This was attested to by physician no. 3 (internist medical doctor): “These systems are useful because they reduce a lot of the main potential errors, such as prescribing faults, prescription errors and misinterpretation of handwriting… and they allow better management of the medicines procurement process, because we buy the medicines that we actually use”.
Moreover, according to Bardhan and Thouin [14], the EMR improves knowledge sharing. Informant no. 1 (internist) explained: “The system is certainly efficient and useful. It allows us [physicians] to do much of our work at the bedside, in real time, and to share information/data with nurses (diagnosis and therapy)… We can also request the advice of other medical specialists (e.g. diabetologists, gastroenterologists) and we can see all patient data anywhere and anytime”. Conversely, the other three codes are in contrast with the literature [15, 16, 18, 24]: the introduction of the EMR in internal medicine had a negative impact on the organization because it increased low value-added work and the size of bureaucracy. These concepts can be summarized in the following quotes:
“Nowadays the EMR is a real waste of time but, as usual, this is a period of adjustment to fine-tune the processes. There is a phase where users waste time finding data (about patients) and learning to properly use the system, but I hope that in a few months we will see some benefits” (practicing doctor). “I noticed an increase in the size of bureaucracy, because I surely waste more time entering the diagnosis and the appropriate therapy into the system, and in many cases, with patients in an emergency, I do not have enough time to carry out (insert and save) all the operations required by the system” (two internist medical doctors). “The programs (some software applications within the EMR) that we use are not interfaced. Often when we switch from one program to another the documents (inserted) are not visible; indeed, very often they are cancelled by the system, so we have to repeat the input. We know that it is a temporary situation and it should be resolved shortly, but in daily situations, especially in large departments like ours, it is a huge limit” (ward nurse and nurse coordinator).
In the second category (technical challenge), according to Moore et al. [5] and Dey et al. [6], we identified two codes (system inadequacy and system slowness) that had a negative impact on EMR acceptance. It is normal to encounter a certain amount of ‘diffidence’ in the use and/or evaluation of a system during its start-up or initial phase, given its complexity and the mixed bag of actors involved [36]. The EMR came on stream only a few months ago, and a period of settling in and comprehension of the potential and criticalities of the new artefact is required [37]. However, all respondents mentioned the lack of adequate planning to define the technological infrastructure requirements for EMR operation. Informant no. 5 (internist) explained: “80–90 % of our laptops are too old (more than eight years), in some areas Wi-Fi internet access is not available, and the LAN is undersized, so at some hours of the day it is too slow”.
As well highlighted by Zakaria et al. [3] and Buntin et al. [2], the use of inappropriate technologies can decrease the quality and reach of both information and communication, and can cause the failure of projects that introduce the EMR in a hospital [38, 39].
Finally, in the people challenge category, we found the last two codes. Eight of the eleven informants made specific mention of the leadership adequacy aspect, underscoring the lack of a clear and established organisational leadership in the implementation process adopted by this hospital. According to informant no. 4 (physician): “there was no leadership, everything was left to the initiative of a few people. Nobody asked us what our needs are and how we could customize the EMR in order to make it useful, and so on. Moreover, we do not have a trained project manager, someone who has goals to pursue”.
However, the new system has also generated a benefit: the enhanced collabo-
ration between the various organisational actors involved in the process. The
computerisation and standardisation of the procedures have improved the level of
interaction and collaboration, which translates into an activity of comparison and
discussion that can optimise the organisational and work practices of the various
units. Interviewee no. 3 (physician) explained: “I think that thanks to the EMR I can better collaborate with my colleagues and I can share more data with them (other specialists). Moreover, teamwork is better, because we can better define our tasks, thereby improving the coordination process. Now we have to implement an EHR, in order to share data/information with the other hospitals”.

6 Conclusions

In this paper we analysed the impacts and the critical factors involved in implementing a new Electronic Medical Record in the general medicine department of an Italian university hospital, which represents a particularly complex healthcare structure. In order to highlight positive and negative factors, we used the model of Zakaria et al. [3], as re-elaborated by Buntin et al. [2]. According to this model, we subdivided the main codes into three categories (organizational, technological, and people).
The following codes represent the positive impacts that we noted:
• A reduction in the number of flaws and errors (imprecise requests, imprecise or unreadable medical reports, potential misunderstandings and the illegibility of handwritten notes).
• Faster access to clearer and more specific information, enabling physicians to diagnose patients more promptly.
• Knowledge sharing, which helps physicians, nurses and medical specialists to better analyse patient information and to find the most appropriate treatment.
• An improved cooperation and coordination process: the EMR improves inter- and intra-team collaboration and helps physicians and nurses to schedule medical examinations, prescriptions and treatments.
However, we also identified some negative impacts, which are quite normal in the first phase of an EMR implementation. In fact, according to Kucukyazici et al. [36], during the start-up or initial phase it is normal to encounter a certain amount of ‘diffidence’ in the use and/or evaluation of a system, given its complexity and the mixed bag of actors involved. The main critical factors identified by this paper were:
• Many interviewees (eight employees) underscored the lack of a clear and established organisational leadership in the implementation process adopted for the EMR.
• Almost all interviewees stated that the EMR increased low value-added work and the size of bureaucracy. Ten of them said that they waste a lot of time finding patient data and learning to properly use all the systems of the EMR.
• The most critical factors mentioned were the slowness and inadequacy of the network (LAN and Wi-Fi) and of the peripheral devices (laptops, desktops and so on), because the personal computers are dated (on average more than eight years old) and the network is undersized.
The decision to analyse the EMR and, specifically, the general medicine area has generated system-specific results; however, these can be extended, with due caution, to the other IT models and systems of this hospital's various operating units, as well as to those of similar organisations. In fact, the critical factors of the case need to be taken into account each time a similar project is addressed [36, 40], as useful references both to improve the systems already in use and to progressively develop and adopt projects to create an effective EMR.

References

1. Moon, M.J.: The evolution of e-government among municipalities: rhetoric or reality? Public
Adm. Rev. 62(4), 424–433 (2002)
2. Buntin, M.B., Burke, M.F., Hoaglin, M.C., Blumenthal, D.: The benefits of health information
technology: a review of the recent literature shows predominantly positive results. Health Aff.
30(3), 464–471 (2011)
3. Zakaria, N., Affendi, M., Yusof, S., Zakaria, N.: Managing ICT in healthcare organization:
culture, challenges, and issues of technology adoption and implementation. In: Zakaria N.,
Affendi, S., Zakaria N. (eds.) Managing ICT in Healthcare Organization: Culture, Challenges,
and Issues of Technology Adoption and Implementation. pp. 153–168, IGI Global (2010)
4. Adler-Milstein, J., Green, C.E., Bates, D.W.: A survey analysis suggests that electronic health
records will yield revenue gains for some practices and losses for many. Health Aff. 32(3),
562–570 (2013)
5. Moore, K.D., Eyestone, K., Coddington, D.C.: Costs and benefits of EHRs: a broader view.
J. Healthc. Financ. Manage. Assoc. 67(4), 126–128 (2013)
6. Dey, A., Sinha, K.K., Thirumalai, S.: IT capability for health care delivery: is more better?
J. Serv. Res. 16(3), 326–340 (2013)
7. Hyman, P.: The day the EHR died. Annu. Intern. Med. 160(8), 576–577 (2014)
8. Sinsky, C.A., Beasley, J.W., Simmons, G.E., Baron, R.J.: Electronic health records: design,
implementation, and policy for higher-value primary care. Ann. Intern. Med. 160(10), 727–
728 (2014)
9. Frenzel, C., Frenzel, J.: Management of information technology (4th edn), Cengage Learning,
Boston, USA (2004)
10. Bekkers, V.: Reinventing government in the information age: international practice in
IT-enabled public sector reform. Public Manag. Rev. 5(1), 133–139 (2003)
11. Braccini, A.M., Federici, T.: IT value in public administrations: a model proposal for
E-Procurement. In: D’Atri A., Saccà D. (eds.) Information Systems: People, Organizations,
Institutions and Technologies, pp. 121–129. Springer, Berlin (2009)
12. Depaoli, P., Za, S.: Towards the redesign of e-Business maturity models for SMEs. In:
Baskerville, R., De Marco, M., Spagnoletti, P. (eds.) Designing Organizational Systems,
pp. 285–300. Springer, Berlin (2013)
13. Smith, A.L., Bradley, R.V., Bichescu, B.C., Tremblay, M.C.: IT governance characteristics,
electronic medical records sophistication, and financial performance in U.S. hospitals: an
empirical investigation. Decis. Sci. 44(3), 483–516 (2013)
14. Bardhan, I.R., Thouin, M.F.: Health information technology and its impact on the quality and
cost of healthcare delivery. Decis. Support Syst. 55(2), 438–449 (2013)
15. Hannan, T.J.: Electronic medical records. Health informatics: an overview, Churchill
Livingstone, Australia (1996)
16. Shaw, N.: The role of the professional association: a grounded theory study of electronic
medical records usage in Ontario, Canada. Int. J. Inf. Manage. 34(2), 200–209 (2014)
17. Lakshminarayan, K., Rostambeigi, N., Fuller, C.C., Peacock, J.M., Tsai, A.W.: Impact of an
electronic medical record-based clinical decision support tool for Dysphagia screening on care
quality. Stroke 43(12), 3399–3401 (2012)
18. McDonald, C.J.: The barriers to electronic medical record systems and how to overcome them.
J. Am. Med. Inf. Assoc. 4(3), 213–221 (1997)
19. Berner, E.S., Detmer, D.E., Simborg, D.: Will the wave finally break? A brief view of the
adoption of electronic medical records in the United States. J. Am. Med. Inf. Assoc. 12(1), 3–7
(2005)
20. Schoen, C., Osborn, R., Doty, M.M., Squires, D., Peugh, J., Applebaum, S.: A survey of
primary care physicians in eleven countries, 2009: perspectives on care, costs, and
experiences. Health Aff. 28(6), 1171–1183 (2009)
21. Ajami, S., Bagheri-Tadi, T.: Barriers for adopting electronic health records (EHRs) by
physicians. Acta Informatica Med. 21(2), 129–134 (2013)
22. Häyrinen, K., Saranto, K., Nykänen, P.: Definition, structure, content, use and impacts of
electronic health records: a review of the research literature. Int. J. Med. Inf. 77(5), 291–304
(2008)
23. Boonstra, A., Broekhuis, M.: Barriers to the acceptance of electronic medical records by
physicians from systematic review to taxonomy and interventions. BMC Health Serv. Res. 10
(231) (2010)
24. Wang, S.J., Middleton, B., Prosser, L.A., Bardon, C.G., Spurr, C.D., Carchidi, P.J., Kittler, A.
F., Goldszer, R.C., Fairchild, D.G., Sussman, A.J., Kuperman, G.J., Bates, D.W.: A cost–
benefit analysis of electronic medical records in primary care. Am. J. Med. 114(5), 397–403
(2003)
25. D’Urso, P., De Giovanni, L., Spagnoletti, P.: A fuzzy taxonomy for e-Health projects. Int.
J. Mach. Learn. Cybern. 4(6), 487–504 (2013)
26. Hunt, D.L., Haynes, R., Hanna, S.E., Smith, K.: Effects of computer-based clinical decision
support systems on physician performance and patient outcomes: a systematic review. J. Am.
Med. Assoc. 280(15), 1339–1346 (1998)
27. Basaglia, S., Caporarello, L., Magni, M., Pennarola, F.: Individual adoption of convergent
mobile technologies in Italy. In: D’Atri, A., De Marco, M., Casalino, N. (eds.)
Interdisciplinary aspects of Information systems studies: the Italian Association for
Information systems, pp. 63–69. Physica-Verlag, Heidelberg (2008)
28. Caporarello, L., Viachka, A.: Individual readiness for change in the context of enterprise
resource planning system implementation. In: Proceedings of the 6th Conference of the Italian
Chapter for the Association for Information Systems, pp. 89–96 (2010)
29. Cavaye, A.L.M.: Case study research: a multi-faceted research approach for IS. Inform. Syst.
J. 6(3), 227–242 (1996)
30. Creswell, J.W.: Qualitative Inquiry & Research Design: Choosing Among Five Approaches.
Sage Publications, Thousand Oaks (2007)
31. Yin, R.K.: Case Study Research: Design and Methods, 3rd edn. Sage Publications, Los
Angeles (2009)
32. Darke, P., Shanks, G., Broadbent, M.: Successfully completing case study research:
combining rigour, relevance and pragmatism. Inf. Syst. J. 8(4), 273–289 (1998)
33. Sorrentino, M.: Interpreting e-government: implementation as the moment of truth. In:
Wimmer, M.A., Scholl, J., Grönlund, A. (eds.) Electronic Government, pp. 281–292. Springer,
Berlin (2007)
34. Benbasat, I.: An analysis of research methodologies. In: Warren, F. (ed.) The Information
Systems Research Challenge, pp. 47–85. Harward Business School Press, Boston (1984)
35. Arksey, P., Knight, T.: Interviewing for Social Scientists. Sage Publications, London (1999)
36. Kucukyazici, B., Keshavjee, K., Bosomworth, J., Copen, J., and Lai, J.: Best practices for
implementing electronic health records and information systems. In: Kushniruk, A.W.,
Borycki, E.M. (eds.) Human and social aspects of health information systems, IGI Global,
Hershey, PA (USA), pp. 120–138 (2008)
37. Heeks, R.: Health information systems: failure, success and improvisation. Int. J. Med. Inf. 75
(2), 125–137 (2006)
38. Castillo, V., Martinez-Garcia, A., Pulido, J.: A knowledge-based taxonomy of critical factors
for adopting electronic health record systems by physicians: a systematic literature review.
BMC Med. Inf. Decis. Making 10(1), 60 (2010)
39. Pennarola, F., and Caporarello, L.: Enhanced Class Replay: Will this turn into better learning?,
In: Wankel, C., Blessinger, P. (eds.) Increasing Student Engagement and Retention Using
Classroom Technologies: Classroom Response Systems and Mediated Discourse
Technologies, pp. 143–162. Emerald Group Publishing Limited, Bingley (2013)
40. Scott, J.T., Rundall, T.G., Vogt, T.M., Hsu, J.: Kaiser Permanente’s experience of
implementing an electronic medical record: a qualitative study. Brit. Med. J. 331, 1313–
1316 (2005)
Technological Cycle and S-Curve:
A Nonconventional Trend
in the Microprocessor Market

G. Ennas, F. Marras and M.C. Di Guardo

Abstract In the literature there is agreement on the fact that battles between rival technologies sooner or later end with the dominance of one over the others or, under certain conditions, with their coexistence. The aim of this paper is to understand whether competition between rival technologies can be reopened after one technology dominates the market. We argue that, if a technology has prevailed, this may not be a static situation, but rather a dynamic one. To this end, we have analyzed the microprocessor market, finding a nonconventional S-curve trend.

Keywords: Technology life cycle · S-curve · Dominant paradigm · Coexistence

1 Introduction

In 1942 Schumpeter coined the term creative destruction to denote a “process of industrial mutation that incessantly revolutionizes the economic structure from within, incessantly destroying the old one, incessantly creating a new one” [1]. A literature has grown up following this revolutionary intuition, and some scholars have focused on the determinants that permit the emergence of one technology over the others, defining the technological cycle, which consists of three phases: technological discontinuity, era of ferment and establishment of a dominant design [2].
discontinuity, in product or process, can disrupt an existing technological regime,
eventually leading to a new one. The period between the discontinuity and the
establishment of the new regime is a period of technological ferment, with high
uncertainty as both new and existing firms seek to identify which technologies,
markets and capabilities will be most valuable in the new regime. This is the period of
most rapid improvement in product performance, as technologists discover and
advance the capabilities of the new regime, and also the period where even incumbents
are unlikely to achieve economies of scale due to rapidly changing designs and
technologies [3]. Several versions of breakthrough technology appear, because the
technology is not well understood and each pioneering firm has an incentive to dif-
ferentiate its variant from rivals. The era of ferment may persist for up to 20 years
before a technology prevails, and several standards may compete for years, even
decades, without one technology being locked in as a dominant design [2, 4]. Thus, two
or more technologies may coexist under certain conditions, for instance some stay in
their niche, while others go on to penetrate mainstream segments and compete with
incumbent technologies [5]. There are not examples of technologies initially beaten
that subsequently subvert the dominant paradigm. Hence, this paper has the ambition
to explore if the technological adoption follows the same trend as we know from the
literature, or if in some markets it modifies its trajectories. Thus, the aim of this paper is
to understand if the battle for dominance between two rival technologies can be
reopened with a new era of ferment. In other words, we argue that if a technology has
prevailed over the others, this could not be a static situation but rather a dynamic one.
Answer this question is a great challenge, because if the answer is yes, we will have to
rethink if the technological cycle ever follows the same trend. In doing so, we analyze
the microprocessor market where it appears that our assumption would be confirmed.
While factors of dominance have been explored by a great amount of literature,
nothing has been said on this question. So, we think that investigate on this point could
open new ways to better understand determinants of innovation and open new
implications. The paper is structured as follows: the second paragraph presents a
literature review about the technology life cycle. The third paragraph is devoted to the
study of the microprocessor market. The fourth part explores evidence from the smartphone and tablet markets, followed by the discussion paragraph, which identifies the management implications and main limitations. Finally, the conclusions indicate possible developments for further research.

2 The Technology Life Cycle: A Literature Review

Firms need to be able to position technologies within their life cycle, and to
understand the specific implications of this for managerial decisions [6]. Even if a clear conceptualization of the life cycle of a technology is difficult, Anderson and Tushman's technology evolution model (1990) is a central perspective and represents the foundation of the “macro view” of the technology life cycle. The macro view considers individual technology cycles, each of which begins with a period of technological discontinuity, characterized by advances in a process or in a product that immediately leads to a second phase, the period of ferment. This era sees
the competition among different variations of the original technology, and it is
divided into two phases, substitution and design competition [7]: once the superi-
ority of the new technologies has been demonstrated, they rapidly substitute the older ones and the design competition begins. Then, when a technology is widely
adopted and associated with changes in the nature of competition within the
corresponding industry, the design competition ends with the emergence of the
dominant design. It usually involves a synthesis of available technologies, resolu-
tion of competing technological standards, and perceptions of closure by user
groups [8]. This period could be followed by an era of incremental evolution of the
selected technology, characterized by evolutionary, continuous and incremental
changes, until a further technological discontinuity, when a new cycle begins. This
cyclical process of technological change is what Schumpeter (1934) named “cre-
ative destruction” [9]. Although there is general agreement that Anderson and Tushman's model concerns innovations in both products and processes, the emphasis shifts between these during the cycle. Indeed, during the era of ferment
the focus is on the product technology with the emergence of a dominant standard,
while in the era of incremental change greater emphasis is placed on the devel-
opment of processes that will improve the product technology [6]. The dominant design need not be the best available; it needs only to gain widespread acceptance. An inferior one can win, and scholars have appealed to a variety of factors to explain why a particular design rather than another emerges as dominant. In reviewing the dominant design literature, five groups of causal
mechanisms have been classified [10]: the technological predominance among
different functional characteristics of a technology; the economies of scale that can
be realized with standardized products; network externalities and their effects
(path-dependent processes); firms strategies; combination of historical, sociological,
political and organizational dynamics. Among these, economies of scale and net-
work externalities are the two conditions that create dynamic increasing returns and
even the design with a small lead will inexorably win a dominant position if higher returns can be achieved with it. In particular, network externalities arise
when the utility that a user derives from consumption of the good increases with the
number of other agents consuming the good, who are in the same “network”. The
possible sources of network externalities could be direct physical effects, indirect
effects (e.g. the hardware-software paradigm) and post-purchase services [11].
Studying the process by which a technology achieves dominance when battling
against other technological designs, two broad groups of factors influencing the
outcome have been classified [13]: firm-level factors and environmental factors.
There are a number of examples regarding the emergence of one technology over another; among these, the most meaningful and most cited are VHS versus Betamax [14] and QWERTY versus other keyboard layouts [15]. In the first case, better format usability, the additional recording time available and the widespread diffusion of movie shops adopting the format increased the preference for VHS, despite the better quality that characterized the Betamax format. In the second case, the first
product available with a new technology dominated most of the market; this is a
good example of lock-in and path-dependence caused by dynamics that go beyond
the behaviors of individuals, and show that, when a new technology is introduced
and spread so largely and quickly, it is quite impossible to come back to the old
one. The market diffusion of a technology is plotted by the S-curve [16], whose
common interpretation considers the cumulative adoption of the technology over
time, envisioning a number of phases such as embryonic, growth, maturity and
ageing. There are also alternative interpretations but, however plotted, the S-curve reaches saturation at maturity, when a new disruptive technology may emerge to replace the old one. This period of technological discontinuity is characterized by competing technologies, each with its own S-curve, which could be connected or disconnected from each other, in relation to the higher rate of performance. The resulting situation is a technology progression characterized by multiple S-curves or technology cycles occurring over time [6]. Some scholars pointed out that the period of
ferment may extend indefinitely and not resolve with the dominance of one standard among others, as rival technologies may coexist under certain conditions [17, 18]. The coexistence of technologies changes the linear and systematic course of the technology life cycle, and it is generated when different competing technologies occur simultaneously in the same market without excluding each other. According to the literature, technology complexity, regulatory regimes and factors connected with intermediate and final market demand [18] influence the interaction among competing technologies, preventing the emergence of a clear winner or the exit of losers [17]. When such dynamics exist, the distinct features create product
niches and consumer communities, gateway technologies, multi-channel end sys-
tems, appropriability regime and persistency. In particular, a niche is defined as
containing one consumer group or “class”: since each class has a distinct preference
set (e.g. a particular point in quality/price space), the number of potential market
niches is determined by the number of consumer classes that are initialized by the
modeler. It has been observed that the survival of the new technology requires the
establishment of a protected space in which further development can be achieved
[19]. This can take the form of distinct niche or sub-niche in the market, which may
be complementary to the established technology, or else take the form of public
sector support, where users are often also contributors to the R&D process. The
protection afforded by its niche has enabled the technology to be further developed
and improved [20]. A practical case is given by different types of flash memory card
[21]. Coexistence is thus highly probable in any case of similarity between technologies. While the manifestation and duration of coexistence obviously differ depending on the type of technology and on the intervening factors, each of these factors can, individually or simultaneously, affect the duration of the competition between technologies and determine their presence within the same market. In such situations, creative destruction does not seem to be the rule. It is
possible to assume a kind of “creative persistence” and a coexistence of different
technological solutions [18]. Another situation that moves away from the linearity
of the technology cycle is the re-emergence case, which occurs when a technology
fails at one time period, exits the market, but later returns. Following Raffaelli [22],
factors concerned with the re-emergence of a technology are: institutional shaping,
competing alternatives, rate of learning, market characteristics, firm strategic
positioning, key firm networks, and firm age and size. Although new or discontinuous technologies tend to displace older ones, technologies can re-emerge, coexist with, and even come to dominate newer technologies. This process seems to rest on the creation, and re-creation, of product, organization, and community identities [22].
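To recap the conventional S-curve against which the remainder of the paper argues, one common (though not the only) formalisation is logistic diffusion. The sketch below, with purely illustrative parameter values, traces cumulative adoption through the embryonic, growth, maturity and ageing phases.

import math

def logistic_adoption(t: float, K: float = 1.0, r: float = 0.6, t0: float = 10.0) -> float:
    """Cumulative adoption at time t under logistic diffusion.

    K  -- saturation level (total market potential)
    r  -- growth rate of adoption
    t0 -- inflection point, where adoption is fastest
    """
    return K / (1.0 + math.exp(-r * (t - t0)))

# Illustrative trajectory: slow embryonic start, rapid growth around t0,
# then saturation at maturity and ageing.
for t in range(0, 21, 4):
    print(f"t={t:2d}  cumulative adoption = {logistic_adoption(t):.2f}")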

3 The Microprocessor Market

The microprocessor (CPU) is an essential part of any device running an operating system (personal computer (PC), tablet, smartphone, server and so on). This industry presents several advantages for studying technological cycles, in particular [23]: (i) it supports many designs; (ii) there are high switching costs between rival and incompatible designs, due to hardware/software incompatibilities; (iii) there are strong network externalities; (iv) there has been high growth in both customers and the number of competitors; and (v) the introduction of the International Business Machines (IBM) PC effectively changed the nature of competition in the personal computer market by introducing a clear standard architecture. Looking at the evolution of the
market structure, it is inevitable to note how competition evolved, since many prior
competitors were already eliminated by competition [29]. We can say that between
operating system (OS) and CPU there is reciprocal interdependence, that is, the
evolution of one of them influences the evolution of the other(s) [24]. In fact, since
the beginning of PCs diffusion, combination between CPU architecture and OS
played a central role. A practical example can be found in the middle 1970s, when
Zilog Z80 processor and CP/M OS became the dominant CPU and OS combination
of the period circa 1976–1983, and despite the great commercial success of the
Apple II and its OS, Apple was forced to produce a compatible card that allow to
install CP/M OS also in its computer. Simplifying, we can say there are funda-
mentally two architecture designs in microprocessor: RISC (reduced instruction set
computer) and CISC (complex instruction set). The question between them is
longstanding, and there was an important concern in the 1980s and 1990s, when
chip area and processor design were the primary constraints [25]. In the past
decades, the Intel and Advanced Micro Devices Inc (AMD) x86 (CISC CPU) has
dominated desktops and servers markets, while the ARM (RISC CPU) was in the
low-power embedded computing niche [25]. The companies have two different
strategies: while ARM designs and just sells licenses to producers (Mediatek,
NVIDIA, Qualcomm and so on), INTEL and AMD design and produce their own
products. Today, the x86 architecture is arguably the only chip which retains the CISC architecture, though newer Intel processors are in some ways hybrid, so-called “CRISC”. RISC CPUs were considered superior on many technical points [26].
The emergence of a superior but incompatible technology often exacerbates the
dilemma for incumbents, because the adoption of it can increase the chance of
enhancing the performance of their products, but the incompatibility sharply
reduces customer benefits due to network effects. Intel faced this sort of dilemma in
the early 1990s, when the RISC architecture challenged the CISC technology [27].
The main reason why RISC architecture did not win was the alliance between IBM,
Intel and Microsoft. In 1981 IBM launched the Personal Computer, with Intel
supplying the microprocessor and Microsoft the OS. As a group, this triad created
the microcomputer format that, within a few years, drove both the Apple II and the
previously dominant CP/M OS to the periphery of the market. Later, this IBM PC
constellation slowly fell apart, but Microsoft and Intel went on to develop the
powerful “Wintel” alliance, which established the dominant industry standard [28].
IBM would not purchase a device unless it was made by at least two companies, so suppliers would contract with other manufacturers to make their design. Having other companies manufacture the design, or compatible parts, also increased the market share of that architecture. In 1976 AMD and Intel signed a cross-license agreement, and for years AMD made and licensed almost everything Intel made; AMD also licensed various peripheral chips to Intel. By 1985, the Intel micro-
processor was embodied in the majority of personal computers shipped (55 % or
175 out of 277 firms shipping personal computers used an Intel microprocessor)
[23]. Notwithstanding this, in 1987 the cross-licensing agreement between AMD and Intel terminated, a standard was established and the rival architecture was cut off from the PC and server markets. History and literature teach us that, when
industries are characterized by network externalities, the installed base technology
and the availability of complementary goods will play major roles in user adoption.
An insufficient installed base or lack of complementary goods may result in tech-
nology lockout [29]. As we have seen above, the CISC processor won not because of technical supremacy over RISC but, as in the previous examples (VHS vs. Betamax and the QWERTY keyboard), because of a series of factors. In particular, the agreement between Intel, Microsoft and IBM, with its commercial capacity, drove the RISC architecture to the periphery, especially into embedded systems. Again, in ICT industries network externalities are more pervasive than in
others [30]. Network externalities are “the value or effect that users obtain from a
product or service will bring about more values to consumers with the increase of
users, complementary product, or service’’ [11]; in particular, indirect network
externalities exist “when the utility of a product increases with the greater avail-
ability of compatible complementary products” [12]. For instance, the value of a PC is influenced by the level and variety of the applications that can be used with it. From this statement we can easily understand why, once a combination between OS and CPU architecture is established, it generates high switching costs and then lock-in, because semiconductor manufacturers tend to
produce unique and incompatible designs. Both PC software and drivers for
peripherals must be designed around the microprocessor, and switching to another
one can be extremely costly; it might involve extensive redesign of the product, or a
total washout of costs incurred in the development of customized software [31].
Switching costs also go well beyond the product changes to include the costs
associated with coordinating a product component change within the organization
as well as between suppliers and customers. A firm attempting to modify a design
will face costs due to modifying documentation, increased communication between
marketing, engineering and production, obsolete inventory, and the lost time of key
personnel who need to deal with the unknowns associated with quality and
performance variations in their product [23]. In addition, the manufacturer must
undertake search costs (both money and time, involving in some cases both sup-
pliers and buyers), set up new external relationships, and face uncertainties in input
quality [32].
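One simple way to formalise the indirect network externality described in this section, under the illustrative assumption of a linear network effect (our assumption, not a model advanced in the sources cited above), is

u_i(n) = v_i + \beta \, n, \qquad \beta > 0,

where v_i is the stand-alone value of the design to user i, n is the installed base of compatible products, and \beta captures the value contributed by complementary goods. Switching to an incompatible design resets n to the rival's installed base, which is one way to read the lock-in and switching costs discussed above.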

4 Smartphone and Tablet Markets

As seen above, the CISC and RISC architectures have coexisted for decades, the first in the PC and server markets, the second in embedded systems. In this paragraph we investigate whether the advent and rise of new products can change technology adoption in the CPU market. Over the last years, the mobile
phone has evolved from a device for making calls into the central point of access to our digital lives. It offers more advanced computing abilities and connectivity and allows users to install and run various applications based on a specific platform, as it runs a complete OS, providing a platform for application developers. These advanced mobile devices possess powerful processors, abundant memory, larger multi-touch screens and a virtual keyboard, with e-mail, web browsing and Wi-Fi connectivity (source: PC Magazine). The tablet, or tablet PC, is a portable computer that uses a touch screen as its primary input device. It is slightly smaller and weighs less than the average laptop, and it integrates the benefits of a PC with the convenience of a mobile device. It had its rise with the launch of Apple's iPad in 2010, and the sudden rush of devices now flooding the market is proof of their increasing popularity. According to Gartner, in 2013 about 195 million tablets were sold. There is a symmetry between the PC and tablet/smartphone indus-
tries: there are low buyer switching costs between models that embody a similar product design (e.g. different brands with the same OS), but high buyer switching costs between rival product designs (e.g. a different OS or CPU architecture, or both). Therefore, the presence of network externalities is clear: the benefit of owning a device also depends on its diffusion and installed user base, and on the amount of complementary goods, in particular the software available. The point
is that, as regards the dominant design, a clear one is emerging: ARM-based CPUs have achieved more than 95 % penetration of mobile handsets [33]. Given these premises, to try to answer our research question we have analyzed the ARM annual reports and accounts (2012–2013) [33] and the Intel [34] and AMD [35] Form 10-Ks (2012–2013). The US 10-K form requires business information to be indicated in Item 1, in particular to “include recent events, competition, regulations, and labor issues”.
We specifically checked:
(1) whether the incumbents in the desktop and server markets (Intel and AMD) recognize ARM as a challenge to their core business;
(2) whether the new entrant (ARM) recognizes the opportunity to enter other markets.
Findings:
(1) Intel states that “new competitors are joining traditional competitors in their core PC and server business areas, where they are the leading provider, while they face incumbent competitors in adjacent market segments they are pursuing, such as smartphones and tablets”. Intel’s competitors include AMD, IBM, Oracle Corporation, as well as ARM architecture licensees from ARM Limited, such as QUALCOMM Incorporated, NVIDIA Corporation, Samsung Electronics Co., Ltd., and Texas Instruments Incorporated. AMD argues that Intel’s dominant position in
the microprocessor market and integrated graphics chipset market, its existing
relationships with top-tier original equipment manufacturers (OEMs) and its
aggressive marketing and pricing strategies could result in lower unit sales and a
lower average selling price for its products, which could have a material adverse
effect on them. Other AMD competitors “include companies providing or devel-
oping ARM-based designs as relatively low cost and low power processors for the
computing market including netbooks, tablets and thin client form factors, as well
as dense servers, set-top boxes and gaming consoles”. ARM Holdings designs and licenses its ARM architecture and offers supporting software and services. AMD’s ability to compete with companies that use ARM-based solutions depends on its ability to design energy-efficient, high-performing products at an attractive price point. In
addition, Nvidia builds custom CPU cores based on ARM architecture to support
tablets and small form factor PCs, servers, workstations and super computers. AMD
also declares its willingness “to transform the business to reach approximately 50 %
of revenue from high-growth markets by the end of 2015. AMD also states that they
will sample their first ARM technology-based processor for servers in the first
quarter of 2014”.
(2) ARM confirms that it holds over 95 % of the market share in the smartphone and tablet markets, with year-on-year growth of more than 100 %. ARM reported that its customers shipped more than 10 billion ARM-based chips into everything from phones and tablets to smart sensors and servers. “ARM faces competition both from large semiconductor companies and from smaller companies”. Among the big competitors, Intel is developing processors for use in PCs and servers and is looking to deploy these chips in markets such as tablets, mobile phones, and embedded markets, including the Internet of Things. Any success by its competition would result in a reduction in royalty revenue to ARM. As for the future opportunity, ARM expects that its customers will continue to re-equip their R&D teams with the latest processors for existing product lines. In addition, ARM’s technology is becoming increasingly relevant to growing markets such as sensors, computers, and servers, leading more new customers to acquire their first ARM license. Finally, we checked whether ARM-based products are entering the consumer PC market and found that of the 17 different Chromebook models (notebooks shipped with Google’s OS), 4 ship with an ARM CPU, 3 made by Samsung and 1 by HP. As of May 2013, the Samsung ARM Chromebook led Amazon’s list of best-selling laptops.

5 Discussion

In order to understand the trend of the CPU technological cycle, it is crucial to investigate corporate strategies for both CPUs and OSes, which we argue are facing the innovator’s dilemma [36]. In order, we first look at the market leader’s strategy, then the other incumbent’s, and finally the OS maker’s. As seen
above, Intel is the leader in the desktop and server industries, and to keep its supremacy it has decided to exploit its technology: it “is innovating around energy-efficient performance” and is “accelerating the process technology development for its Intel Atom processor product line to deliver increased battery life, performance, and feature integration”. Intel recognizes that it is a relatively new entrant in the tablet market segment and is trying to offer optimized architecture solutions for multiple operating systems and application ecosystems. It also recognizes that the boundaries between the various segments are changing as the industry evolves and new segments emerge. Conversely, AMD has always had a smaller market share in the desktop and server markets; thus, it has decided to adopt an ambidextrous strategy, with which it tries both to explore new spaces and to exploit its existing capabilities [37]. In fact, AMD is differentiating its strategy by licensing ARM designs in addition to its x86 processors.
Software makers must also be able to manage this innovation. Microsoft, as the leader in the desktop and notebook OS markets, has recognized the threat of new devices: in its Form 10-K 2013 [38] it declares that its system faces competition from various commercial software products and from alternative platforms and devices, mainly from Apple and Google. Consequently, it has adapted its strategy, releasing Windows 8, the first version of the Windows operating system that supports both the x86 (CISC) and ARM (RISC) chip architectures. Conversely, software developed for the Android OS can run on any architecture because, simplifying, just like Java, it uses a virtual machine to run software [39].
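A minimal sketch of why this is so (our illustration of the general mechanism, not code taken from [39]): Java source is compiled once into architecture-neutral bytecode, and the virtual machine installed on each device executes that bytecode, so the same compiled program runs unchanged on an x86 laptop or an ARM handset; Android’s Dalvik applies the same principle to its own bytecode format.

```java
// Hello.java -- compiled once with `javac Hello.java` into Hello.class,
// a file of architecture-neutral bytecode. The JVM on each device
// (x86, ARM, ...) interprets or JIT-compiles that bytecode, so no
// per-architecture rebuild of the program is needed.
public class Hello {
    public static void main(String[] args) {
        // Report the CPU architecture the VM happens to be running on.
        System.out.println("Running on: " + System.getProperty("os.arch"));
    }
}
```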
Considering these premises, we believe there is reasonable evidence that the S-curve follows a different trend in this market. Almost three decades after the alliance between Intel and Microsoft drove RISC processors out of the PC and server markets and marked the emergence of the dominant paradigm, the challenge is reopened: the first phase was the affirmation of CISC technology, followed by a long period of incremental improvement; meanwhile, RISC technology saw lower adoption until the advent of smartphones and tablets caused a rapid rise of the RISC architecture. We can therefore assume that the S-curve might follow the trend proposed in figure B, which differs from the common interpretation in figure A, whereby once a technology prevails it keeps its supremacy until a new disruptive technology enters and conquers the market. Indeed, in the CPU industry two technologies have coexisted, with CISC dominating the market and RISC relegated to the embedded segment, but with the advent of new devices (tablets and smartphones) the adoption of RISC systems is experiencing rapid growth, with a sudden change in the curve’s concavity. According to the analysis presented above, the two technologies are currently facing a “new era of ferment”, and basically three future scenarios can be envisaged:
(1) The CISC technology maintains its supremacy and follows the trend described by the yellow curve, while the RISC one follows a lower trend, described by the green curve.
(2) The RISC technology imposes its own standard in the market segments currently dominated by CISC and follows the trend described by the blue curve, while the CISC one proceeds along the lower trend described by the red curve.
(3) Both technologies coexist in different market segments, without excluding each other.

Regardless of how things actually unfold, it is clear that this trend of S-curves is very different from the one we know.
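For readers who prefer a formal picture, a hedged sketch (our notation, not the authors’): the classical S-curve of figure A is commonly modeled as a logistic function, while the coexistence-then-resurgence trend of figure B can be captured by letting the adoption ceiling of the lagging technology jump when a new device class opens its market, producing a second inflection in the curve.

```latex
% Classical logistic S-curve: adoption A(t) saturates at ceiling K.
\[
  A(t) = \frac{K}{1 + e^{-r\,(t - t_0)}}
\]
% Re-opened era of ferment: the ceiling of the lagging technology (here
% RISC) is piecewise, jumping from the embedded niche K_1 to the much
% larger mobile market K_2 at time T (the advent of smartphones and
% tablets), which bends the adoption curve upward again.
\[
  A_{\mathrm{RISC}}(t) = \frac{K(t)}{1 + e^{-r\,(t - t_0)}},
  \qquad
  K(t) =
  \begin{cases}
    K_1, & t < T,\\
    K_2, & t \ge T,
  \end{cases}
  \qquad K_2 \gg K_1 .
\]
```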

6 Implications

The findings of this study have several implications for managerial practice and for technology, organization, and strategy. Although the analysis of these implications is crucial from a strategic point of view, it goes beyond the aim of this paper, hence we indicate them only briefly. First of all, we have to consider that the processor market generates a turnover of around 300 billion dollars, and this type of trend moves earnings from one technology to another. Secondly, devices equipped with a CPU are complex systems, so the implications will also affect the software and, in particular, operating systems; hence, the implications stated above also hold for software. Thirdly, firms making technology investment decisions need to fully understand the dynamics of competing technologies, because the emergence of an alternative and potentially superior technology does not necessarily mean the failure of the incumbent: different scenarios may unfold. Fourthly, firms also need to look inward to identify the competencies required to ensure they have the absorptive capacity to adopt new technologies and respond quickly to technological change. Fifthly, strategic alliances between hardware and software makers may prove decisive, as has happened before (i.e., the Wintel alliance). Finally, indirect network externalities may play a crucial role, because the amount of complementary products and services available can strongly contribute to the affirmation of one technology over another.

7 Conclusions

In this paper we have analyzed the technological cycle with the goal of understanding whether the battle for dominance between two rival technologies can be reopened with a new era of ferment. We have explored the CPU market, finding that the era of ferment may restart between different technologies even after a long period of time, and that technologies competing in distinct segments can come to race each other. These results suggest that the S-curve may follow a different trend, and they propose a non-conventional view of the technology adoption process. This paper has several limitations: in particular, it deals with events that are still unfolding, and the amount of available data may not be enough to delineate a clear scenario. However, we believe that, beyond these preliminary considerations, this research has thrown up many questions regarding technology diffusion that need further investigation. Although we have evidence from the microprocessor market, the insights of this study should be confirmed in other contexts to extend, generalize, and eventually improve the technological cycle literature. If it is true that not even the best technology always wins, we have shown that dominance can be a dynamic position and that the era of ferment may be re-opened.

Acknowledgement Fabiana Marras gratefully acknowledges the Sardinia Regional Government for the financial support of her PhD scholarship (P.O.R. Sardegna F.S.E. Operational Programme of the Autonomous Region of Sardinia, European Social Fund 2007–2013—Axis IV Human Resources, Objective l.3, Line of Activity l.3.1.).

References

1. Schumpeter, J.A.: Socialism, capitalism and democracy. Harper and Brothers (1942)
2. Tushman, M.L., Rosenkopf, L.: Organizational determinants of technological change: towards
a sociology of technological evolution. Res. Organ. Behav. 14, 311–347 (1992)
3. Utterback, J.M.: Mastering the dynamics of innovation: how companies can seize opportunities in the face of technological change. Harvard Business School Press, Boston (1994)
4. Schilling, M.: Technology success and failure in winner-take-all markets: the impact of
learning orientation, timing, and network externalities. Acad. Manag. J. 45(2), 387–398 (2002)
5. Adner, R., Zemsky, P.: Disruptive technologies and the emergence of competition. Soc. Sci.
Res. Netw. (2003)
6. Taylor, M., Taylor, A.: The technology life cycle: conceptualization and managerial
implications. Int. J. Prod. Econ. 140(1), 541–553 (2012)
7. Anderson, P., Tushman, M.L.: Technological discontinuities and dominant designs: a cyclical model of technological change. Adm. Sci. Q. 35(4), 604–633 (1990)
8. Pinch, T.J., Bijker, W.: The social construction of facts and artifacts. Technol. Soc. 107 (1987)
9. Schumpeter, J.A.: The theory of economic development: an inquiry into profits, capital, credit,
interest, and the business cycle, vol. 55. Transaction Publishers (1934)
10. Murmann, J.P., Frenken, K.: Toward a systematic framework for research on dominant
designs, technological innovations, and industrial change. Res. Policy 35(7), 925–952 (2006)
11. Katz, M.L., Shapiro, C.: Network externalities, competition and compatibility. Am. Econ.
Rev. 75, 424–440 (1985)
12. Basu, A., Mazumdar, T., Raj, S.P.: Indirect network externality effects on product attribute.
Market. Sci. 22–2, 209–221 (2003)
13. Suarez, F.F.: Battles for technological dominance: an integrative framework. Res. Policy 33,
271–286 (2004)
14. Besen S.M., Farrell J.: Choosing how to compete: Strategies and tactics in standardization.
J. Econ. Perspect. 8(2), 117–131 (1994)
15. David P.A.: Clio and the economics of QWERTY. Am. Econ. Rev. 75, 332–337 (1985)
16. Foster, R.N.: Innovation: the attacker’s advantage, vol. 152. Summit Books, New York (1986)
17. Nair, A., Ahlstrom, D.: Delayed creative destruction and the coexistence of technologies.
J. Eng. Tech. Manage. 20(4), 345–365 (2003)
18. Galvagno, M., Faraci, R.: La coesistenza fra tecnologie: definizione ed elementi costitutivi.
Sinergie rivista di studi e ricerche, pp. 64–65 (2011)
19. Rosenberg, N.: Inside the black box: technology and economics. Cambridge University Press,
Cambridge (1983)
20. Windrum, P., Birchenhall, C.: Structural change in the presence of network externalities: a
co-evolutionary model of technological successions. J. Evol. Econ. 15(2), 123–148 (2005)
21. De Vries H.J., de Ruijter, J.P.M., Argam, N.: Dominant design or multiple designs: the flash
memory card case. Technol. Anal. Strateg. Manag. 23(3), 249–262 (2011)
22. Raffaelli, R.: Mechanisms of technology re-emergence and identity change in a mature field:
Swiss watchmaking. In: Academy of Management Proceedings, vol. 2013, No. 1, p. 13784
(2013)
23. Tegarden, L., Hatfield, D., Echols, A.: Doomed from the start: What is the value of selecting a
future dominant design? Strateg. Manag. J. 20, 495–518 (1999)
24. Thompson, J.D.: Organizations in action. McGraw-Hill (1967)
25. Blem, E., Menon, J., Sankaralingam, K.: Power struggles: revisiting the RISC vs. CISC debate
on contemporary ARM and x86 architectures. Appears in the 19th IEEE International
Symposium on High Performance Computer Architecture HPCA (2013)
26. Krad, H., Al-Taie, A.Y.: A new trend for CISC and RISC architectures. Asian J. Inform.
Technol. 6(11), 1125–1131 (2007)
27. Lee, J., Lee, J., Lee, H.: Exploration and exploitation in the presence of network externalities.
Manag. Sci. 49(4), 553–570 (2003)
28. Gomes-Casseres, B.: Competitive advantage in alliance constellations. Strateg. Organ. 1(3),
327–335 (2003)
29. Semmler, A.: Competition in the microprocessor market: Intel, AMD and beyond. University of Trier, pp. 1–7 (2010)
30. Lin, C.-P., Tsai, Y. H., Wang, Y-J., Chiu, C.-K.: Modeling IT relationship quality and its
determinants: a potential perspective of network externalities in e-service, p. 2. Elsevier (2010)
31. Choi, J.P.: Irreversible choice of uncertain technologies with network externalities.
Department of Economics, Columbia University (1992)
32. Garud, R., Kumaraswamy, A.: Changing competitive dynamics in network industries: an exploration of Sun Microsystems’ open system strategy. Strateg. Manag. J.
33. http://ir.arm.com/phoenix.zhtml?c=197211&p=irol-reportsannual. Accessed June 2014
34. http://www.intc.com/annuals.cfm. Accessed June 2014
35. http://ir.amd.com/phoenix.zhtml?c=74093&p=irol-reportsannual. Accessed June 2014
36. Christensen, C.M.: The innovator’s dilemma: when new technologies cause great firms to fail.
Harvard Business Press (1997)
37. O’Reilly III, C.A., Tushman, M.L.: Ambidexterity as a dynamic capability: resolving the
innovator’s dilemma. Res. Organ. Behav. 28, 185–206 (2008)
38. http://www.sec.gov/Archives/edgar/data/789019/000119312513310206/d527745d10k.htm
39. Ehringer, D.: The dalvik virtual machine architecture. Tech. Rep. (2010)
The IS Heritage and the Legacy of Ciborra

Paolo Depaoli, Andrea Resca, Marco De Marco and Cecilia Rossignoli

Abstract Ten years is a good distance at which to assess Claudio Ciborra’s legacy
to Information Systems Studies and Organizational Studies. The paper compares
the scholar’s seminal work, The Labyrinths of Information, with the thematic
papers published in 30 special issues/sections of four top IS journals. The results
show clearly that Ciborra’s concepts have now gained much wider currency,
especially in the study of phenomena such as local meaningful practices (e.g.
bricolage, improvisation, cultivation). They contribute to the swing toward a more
praxis-oriented attitude in the IS discipline.

Keywords Information infrastructure · Strategizing · Platform organization · Ontology · Epistemology

1 Introduction

2015 marks the 10th anniversary of Claudio Ciborra’s death. The scholar’s work is well known to the authors, one of whom had the honour of collaborating with him personally, and the paper pays tribute to his memory by revisiting the conceptual pillars on which he built his research.
P. Depaoli (✉) · A. Resca
CeRSI-LUISS Guido Carli University, Rome, Italy
e-mail: pdepaoli@luiss.it
A. Resca
e-mail: aresca@luiss.it
M. De Marco
Uninettuno, Rome, Italy
e-mail: marco.demarco@uninettunouniversity.net
C. Rossignoli
Department of Business Administration, Università degli Studi di Verona, Verona, Italy
e-mail: cecilia.rossignoli@univr.it


The distinction between entitative and process conceptions helps clarify his inquiries [1]. Highlighting the entitative aspects of a phenomenon means focusing on general principles, such as abstractions and representations, to be applied across different situations. On the contrary, the process aspects of a phenomenon focus on the emergent, contingent, and locally specific reality. Throughout his research activity, Ciborra adopted a strongly process-oriented worldview. He underscored the shortcomings of entitative conceptions when investigating organizations and information systems as socio-technical phenomena that are continuously evolving, subject as they are to minor or major changes. The aim is to discover to what extent IS scholars have incorporated the key tenets of Ciborra’s original thinking into their research agendas since the publication in 2002 of his seminal work The Labyrinths of Information: Challenging the Wisdom of Systems, herein shortened to Labyrinths [2].
A review of the contemporary and later literature shows that the process-oriented view taken by Claudio Ciborra placed him firmly in the minority 10 years ago. That there was indeed a dominant paradigm was the conclusion Orlikowski and Baroudi reached in 1991 after analyzing 155 articles that appeared in four of North America’s leading journals from 1983 to 1988 and finding that 96.8 % were underpinned by a positivistic epistemology [3], which usually entails an entitative ontology. The positivist information systems research approach was defined by the authors as follows:

Ontologically, positivist information systems researchers assume an objective physical and social world that exists independent of humans, and whose nature can be relatively unproblematically apprehended, characterized, and measured. (ib., p. 9)

That this approach still predominated ten years later was confirmed by the survey conducted by Chen and Hirschheim [4], who found that 81 % (86 % in the US journals) of the 1893 articles published in eight European and North American journals from 1991 to 2001 had a positivist leaning. Nevertheless, a tremor of change was observed over the next decade: according to Paucar-Caceres and Wright [5], based on the results of a literature review of six journals issued from 1999 to 2009, “Information Systems Research is moving away from the normative/positivistic paradigm associated with ‘hard-oriented’ methodologies. We identified a total of 145 articles drawing on interpretative, critical and constructivist methodological approaches” (ib., p. 598). This indicates that Ciborra’s writings arrived at precisely the moment when a larger number of IS scholars started to follow a process-oriented worldview.
The aim of the paper is to help write the history of IS theoretical literature and thus
contribute to the IS discipline’s quest “to articulate and claim a heritage” [6, p. 2].
To respond to the research question alluded to earlier, i.e., to what extent have IS
scholars incorporated the key tenets of Ciborra’s original thinking into their
research agendas since the publication in 2002 of his seminal work Labyrinths, the
authors chose a two-step approach to the hermeneutic circle: first, they read and
analyzed Labyrinths to identify the keywords and main concepts; second, they
examined 30 special issues/sections published by the following four journals from
2004 to date: European Journal of Information Systems (EJIS, 9 issues), Journal of Information Technology (JIT, 8), Journal of Strategic Information Systems (JSIS, 7), and MIS Quarterly (MISQ, 6). This second phase served to identify papers in
which process-oriented conceptions prevail in order to contextualize the legacy of
Ciborra’s work since his demise. The decision to focus on special issues of top IS
journals rather than, for example, on the literature which has Ciborra’s work quoted
is based on the fact that these issues are considered indicators of the main trends in
the discipline and therefore more relevant for contextualizing his legacy.
The introduction is followed by an overview of Ciborra’s ontological and
epistemological approach. Section 3 recaps our understanding of his key tenets and
concepts. Section 4 presents and comments the results of the literature review of the
special issues. Section 5 closes the paper with the authors’ concluding remarks.

2 Ontology and Epistemology in Ciborra’s Work

Ciborra’s work rests on the pillars of ontology and epistemology, so any presentation and assessment of his legacy to Organization Studies (OS) and Information Systems Studies (ISS) would not be complete without due recognition of these two aspects. This section therefore frames his research according to the object of research (ontology), the means used (epistemology), and, specifically, their combined use.
Ontology and epistemology are the essence of research activity. Ontology is the study of being, of what exists and is thinkable [7]. Epistemology refers to the modes through which knowledge related to a specific entity is acquired. Theoretical perspectives, methodologies, and methods deal with epistemology, that is, the rules followed in order to gain scientifically validated knowledge.
Individuals and organizations Until the mid-1990s, the ontological perspective favoured by Ciborra revolved around the fact that both OS and ISS consider humans to be equipped with bounded rationality [8]. However, transaction cost theory [9] argued that humans are not only limited from a rational viewpoint but can also behave opportunistically by adopting what is called strategic rationality. Yet an organization cannot create the conditions that promote rational behaviours and prevent opportunistic ones unless it assigns equal importance to both learning and adaptive rationality [10]. Therefore, organizations and information systems should be seen as tools for enhancing learning and spreading knowledge.
By the late 1990s, Ciborra had significantly changed his approach: individuals are entities who relentlessly navigate, discover, and encounter the world according to a mood-affected caring. Moreover, understanding is not the result of a cognitive evaluation of pros and cons in a specific situation but a human attitude in which openness to possibilities and continuous caring about events, resources, behaviours, and problems prevail. Ciborra thus started to draw on phenomenology, mainly the thinking of Husserl and, in particular, Heidegger, focusing on two main aspects: the ‘platform organization’ and the ‘information infrastructure’.
The platform organization Ciborra investigated the organizational structure concept that forms the core of OS. Transaction cost theory provides a comprehensive framework for designing governance structures that optimize transaction costs. However, the reduction of coordination and transaction costs is not a determinant in a scenario of technological innovation, intense market competition, and a continuously evolving business environment. On the contrary, the platform organization is considered a solution [11]. As a meta-organizational form, the
platform organization emerges as the combination of organizational routines and
pre-existent structures at hand and re-used for a new endeavour. What at first sight
is a hierarchy or matrix is actually the effect of a meta-level constituted by the
platform. Challenges posed by market contingencies, a technological innovation, or
a competitor’s moves are confronted with a virtual and collective cognitive scheme
that recombines assets continuously, according to opportunistic moves and
improvisation. The ontological perspective provided by the platform organization
differs significantly from the more traditional approaches that are limited to a
specific configuration of roles and their connection, or to a form for governing
transactions. This is where the capacity to recombine continuously pre-existent
structures for dealing with primary tasks emerges.
The information infrastructure This further example of Ciborra’s ontological viewpoint identifies an information infrastructure as “a shared, open (and unbounded), heterogeneous and evolving socio-technical system (which we call an ‘installed base’) consisting of a set of IT capabilities and their user, operations and design communities” [11, p. 14]. Here, the concept of the socio-technical system comes to the fore: it considers reality as a series of interactions made up of both social and technical factors: on the one hand, user, operation, and design communities and, on the other, IT capabilities. Actor-network theory [12] proposes a similar lens for investigating information infrastructures while considering the technology itself as an actor. Humans and nonhumans are placed on the same level, and technology is conceived as an entity with its own life that interacts with other entities. Ciborra turns instead to the concept of Ge-stell [2, p. 55].
Given the scope of the paper, a brief and therefore somewhat overly simplified
interpretation of Gestell, which is German for ‘scaffolding’, is called for. Ciborra,
following Heidegger [13], uses the etymology of the term ‘Gestell’ to enrich the
original meaning of ‘structural’ with ‘processual’ to signify the pervasive process of
the arranging, regulating and ordering of resources that embraces both human and
natural resources. This unfolding of technology is an historical process fed by the
development of ‘calculative thinking’; in fact, in the 1950s Heidegger had already
grasped the key role that computers, despite their infancy, were set to play. To liken
the information infrastructure to Gestell opens the ontological horizon to emphasize
both the structural, static factors (the ‘scaffolding’) and the dynamic factors (the
ordering process). Ciborra used this new approach to philosophically ground
important aspects of his research, e.g., the inherent self-feeding process of infor-
mation infrastructures.
3 Ciborra’s Key Concepts

Section 2 has illustrated how Ciborra went on to develop both sides of the coin, i.e.,
what he thought to be real—and therefore relevant (albeit neglected by the main-
stream literature) for both information systems and organization scholars—and the
methodology that could better uncover that relevant reality. Table 1, below, sum-
marizes what we have identified as the main concepts and the keywords of
Labyrinths. Applying the hermeneutic circle, we read and discussed the book
several times in order to deepen our knowledge of Ciborra’s work and its insights
and then searched for consensus on the main tenets of his contributions. The left
column of Table 1 lists the titles of the chapters of Labyrinths while the right
column lists the keywords and the concepts that, in the authors’ opinion, convey the
chapter’s main message. Table 1 is followed by a brief commentary on each chapter
of Labyrinths, including identification of the key words, to lay the ground for the

discussion in Sect. 4, in which some of those comments will be scrutinized in more detail.

Table 1 Labyrinths keywords and concepts

Invitation: • Authentic versus inauthentic ways • Phenomenological understanding
Krisis: • Entrenchment in methods borrowed from natural science paradigms; consequent blindness to empirical reminders (p. 19)
Bricolage: • Design and surprises (during implementation) • Effective adaptation and learning produce the exploitation of unique, intangible characteristics which yield competitive advantage • Heuristics are superior to high theory (p. 45) • Context reconstructed-in-action (p. 46) and innovation (p. 47)
Gestell: • Information infrastructure • Installed base and path dependency (p. 62) • Formative context (p. 70) • New management agenda (p. 77) based on localism and context • Crucial notions in the present discovery stage: drifting, bricolage, hospitality, and cultivation
Derive: • Key research question raised by Ciborra: how should we relate to such complex and dynamic infrastructures? • Unexpected effects
Xenia: • Hospitality and technology; promoting a new commitment: an attitude of openness and, simultaneously, of suspicion toward technology (p. 118)
Shih: • Organization platform metaphor (p. 151) where strategy, action, and structure coalesce to cope with surprises (p. 122) • Deconstructivism (p. 151)
Kairos (and Affectio): • Improvisation as situated action • Extemporaneity (p. 156) • Relevance of moods and emotions (pp. 160–161)
Methodological appendix: • The organizational phenomenon: ‘illusory appearances’ versus ‘apparitions’

Source: the authors
Invitation Ciborra reacted to the dominance of the positivist approaches,
deeming them too abstract and too far removed from practice (‘inauthentic’, in his
words). In ‘Invitation’, the introduction to Labyrinths, he proposes “a different
perspective on information systems [that] should be anchored to the unfolding
of the human process of encountering the everyday world” [2, p. 6]. His use of
philosophy (and, more precisely, of the phenomenology of Husserl and of
Heidegger) served as a pointer to “an alternative centre of gravity: human existence
in everyday life” (ib., p. 1). Ciborra therefore suggests following the ‘authentic’ trail
to investigate what is overshadowed by current theoretical perspectives, what is
deemed unimportant or the object of superficial attention: the unfinished, the untidy, the irregular, and the hack.
Krisis Many of the business cases indicated by Ciborra show that most man-
agement models and methods are incapable of dealing with the real world, built as
they are on the natural sciences paradigm. At the origin of the crisis then, there is
the gap between what is measured, formalized, and calculated according to scien-
tific principles and the object of investigation.
Bricolage The meaning given by Ciborra to this term is “tinkering through the
combination of resources at hand” (ib. p. 49) to underline the importance of paying
attention to non-systematic ways of organizing and executing work (see
‘Strategizing and design-in-action’ in Sect. 4, below).
Gestell As we saw in Sect. 2, this concept was enriched by Ciborra but how is it
possible to deal with the overarching, evolving presence of both a structure and an
ordering process? Ciborra suggests focusing on bricolage, drifting (see Derive
below), cultivation (no strong top-down direction), and hospitality (an attitude of
openness and, simultaneously, of suspicion toward technology)
Derive According to Ciborra, this key aspect of technology lies in its situated
character. That is, any description of an object outside its context of use is abstract
because it is separate from the relations that are established with it (by persons,
organizations and other objects) during its use. As well as drawing explicitly on
Heidegger’s Being and Time for technology-in use, he adopts the concept of ‘af-
fordance’ introduced by Donald Norman’s The Design of Everyday Things: what
people perceive artifacts can do. One consequence of this approach is that a number
of unexpected effects are likely to occur when moving from design to implemen-
tation. For example, in Ciborra’s words: “’what groupware is’ can only be ascer-
tained in situ, when the matching between plasticity of the artifact and the
multiform practices of the actors involved takes place.” (ib., p. 87). Therefore
information infrastructures have enhancing effects but they also drift from initial
plans. The way to deal with complex and dynamic infrastructures then is to use
practical intelligence and tactics that are capable of filling the gaps between the
highly formalized procedures and the real world.
Xenia This Greek word summarizes the relations that occur between a guest and
his host; Ciborra uses it as a metaphor for relating to technology: “…hospitality
involves the risk of misunderstanding, since it typically has to deal with
communication across different languages and cultural modes” (ib., p. 115). The
appropriate care of guest-technology bears rewards in terms of innovation and
learning.
Shih This Chinese war strategy concept refers to the exploitation of the
configuration of the resources at hand. Organizing (the resources at hand) is to build
identity across discontinuities (ib., p. 128) so that strategy, action, and structure
coalesce to cope with surprises (ib., p. 122).
Kairos (and Affectio) Ciborra dedicates an entire chapter to the importance of
improvisation in dealing with unforeseeable events. Improvisation is based on the
ability to intuitively surpass rationality by drawing on the deepest wells of personal
resources: moods and emotions. This leads to a decisive moment of vision in which the most appropriate solution emerges at the most appropriate moment (kairos).
Methodological appendix (Odos) The appendix better illustrates what was
introduced in ‘Invitation’. Ciborra describes the two types of evidence he
encounters when approaching an “organizational phenomenon”: (i) “the set of ideas
and models taken for granted in the domain of organization theories or consulting
models… [which] following Heidegger we can refer to … as illusory appearances”
(ib., p. 176), and (ii) ‘apparitions’ which belong to a space that cannot be filled by
any model and that surface in informal talks that “host the unexpected aspects of
organizational life.” (ib., p. 177). According to Ciborra, investigation often stops at
the empty models instead of working on the apparitions that tell about the
“underlying phenomenon to be unveiled” (ib., p. 178).

4 The Post-Labyrinths Literature and Ciborra’s Key Concepts

The results of the comparative analysis of Ciborra’s work and the articles of the
special issues/sections enabled the authors to identify the following matches.
Emerging challenges The concept of odos (way, road), which Ciborra used to
name the methodological appendix of Labyrinths, was the focus chosen by Sawyer
and Winter for their op-ed to the 2011 JIT “special issue on futures for research on
information systems” [14]. As seen earlier, Ciborra adopted a non-mainstream
method (phenomenology) to draw attention to emerging (and often overlooked)
phenomena. Sawyer and Winter stressed the need to explore different approaches to
shed light on a number of issues that still seem to lead the research 10 years after
Ciborra’s publications. Participation in more than one intellectual community is necessary because of the evolution of current, ubiquitous ICT, whose effects are magnified by present economic, social and political trends. There are grand chal-
lenges to be met, such as “transforming a health-care system from one designed to
treat acute disease to one that improves the lives of those with chronic illnesses” (ib.
p. 97). This is the kind of large project affected by ‘drift’ in which general plans
involving large numbers of actors need to be complemented by the appropriate
“local” techniques suggested by Ciborra. The final question asked by the editors
refers to Ciborra’s invitation to abandon the restricted spaces of abstract models to
come in closer contact with the ‘lifeworld’: What are the consequences for orga-
nizing information systems that increasingly stimulate people’s curiosity and
creativity? (ib.) Researchers presently investigating the potentialities of ‘virtual’
(synthetic) worlds in organizational terms might be able to provide some answers to
that question [15].
History and Gestell The question of human/non-human identity is one of the
topics discussed in a recent special section of the EJIS (January 2014). In fact, one
contribution specifically addresses cyborgian identity, i.e., the role of physical and
virtual bodies in social media [16], in which the way technology is conceived is
decisive: in virtual worlds attention should be turned “to the fluid and contingent
intermingling of humans and technologies” [17, p. 813]. This ‘intermingling’ holds
beyond virtual worlds and is shared by a wide range of social science studies. As
Shultze and Orlikowski underscore, practices are constitutive of social life in fluid
and emergent phenomena (‘performativity’). A view that differs significantly from
the traditional one of a reality that is composed of fixed and independent entities but
which chimes with that of Ciborra’s of an apparent reality made up of abstract
models and poorly explicative generalizations of the continuing ‘ordering’ of
resources (as described in Sect. 2, above). Indeed, the debate on issues close to
Ciborra’s sensibility as a researcher is ongoing.
Moreover, the fact that Ciborra draws on Heidegger and his Gestell concept
shows both his willingness to draw directly on the foundations of western thought
and to give historical depth to his analyses (ICTs do not come out of the blue: they
are born out of the development of ‘calculative thinking’). Ciborra even used a
phenomenological perspective when addressing key aspects in the expansion of
ICTs through the description of the Olivetti case (Labyrinths, Chap. 7): disconti-
nuities and surprises in strategy building and implementation can be fundamental
success factors. This was the method Ciborra used to incorporate IS history: to
propose an emblematic case to highlight relevant (and often overlooked) factors for
present action. Of course, an IS history can be built using other approaches and
methods, as shown in the papers of the two 2013 JIT special issues, in which the
editors point out that there are different ways of “doing IS history” [6]; methods
comprise case studies, interviews, and literature search. This kind of study includes
controversies and disputes and sheds light on two aspects: (i) there is no linear,
mechanistic development of IS; and (ii) there is no conclusively settled IS history
and heritage. Interestingly, the editors draw the reader’s attention to Michel
Foucault’s work and his findings of discontinuities in history. Through them we
learn how to deal with alterity, with the unexpected and the minute deviations
which Ciborra often underscored as key elements in large IS projects, as pointed out
in the previous section. The differences between Heidegger and Foucault should not
make our likening of Ciborra to Foucault seem surreptitious: the two philosophers
are linked by strong convergences. In fact, at the end of his essay Being and Power
Revisited, Hubert Dreyfus refers to the last works of both Heidegger and Foucault,
saying:
… when one is looking for marginal practices that could support resistance to a dominant
epoch of the understanding of being or a dominant regime of power…, one should think of
the marginal as what resists any unified style of being and power. One will seek to preserve
not new forms of being or power, but local things and individual selves [18, p. 49].

Implementation and drift In 2005 a special issue of JSIS (n. 2) looked at how
enterprise systems are affected by (and affect) individuals, groups and organiza-
tions. The results of one contribution’s case study [19] show that the interactions of
power structures (i.e. political and structural forces), the technology affordances,
and the intentions of management produce cycles of ‘control and drift’ during the
implementation of an ERP program. In other words, the intentions of designers and
managers produce both original development plans and successive revisions and
rescheduling (even the abandonment of certain plans) according to the emerging
limitations (or accommodations) of both technology (e.g., legacy systems) and
cultural or political settings (e.g., the evolution of power balances between senior
corporate managers versus national managers). The case study’s key findings
diverge substantially from the tenets of the studies based on critical success factors:
specific influences were not fixed but varied during implementation and forced
changes along the way. The authors conclude that technology is thus neither a
‘black box’ nor a mere supplement to the social structure-agency relationship, but
an agglomeration of affordances open to social interaction. The consequences for
practice are to give room to intuitive action and to improvise when situations are
new and destructured and to allow rational planning in well-established organiza-
tional processes. Indeed, Ciborra, cited by the authors, insisted on the concept of
drift and on the hiatus between the theory and practice of systems development and
use, and, of course, on the need to adopt tactics and learning-by-doing more than
formalized plans.
Strategizing and designing-in-action Ciborra uses the drifting phenomenon in
Krisis, the second chapter of Labyrinths, to support his critical stance on the issue
of strategic alignment between business organizations and ICT. Once again, the
scholar pointed out that organizations are complex relational and continuously
evolving systems interconnected with a ‘drifting’ information infrastructure. It is
therefore unlikely, as many business cases have shown, that management models
and methods (used for strategic alignment) have the capacity to deal with the real
world, built as they are on the natural sciences paradigm. These models take the
concepts of strategy and technology for granted instead of seeing them as prob-
lematic and adopting more realistic and practical approaches.
Ten years after the publication of Labyrinths, research moved away from
alignment in search of where strategic IT leadership is located in a modern cor-
poration. Let us see why. In June 2012 the Journal of Strategic Information Systems
(JSIS) celebrated its 20th anniversary with a reflection on the IS discipline and,
specifically, the link between IS and business strategies [20]. One of the contri-
butions [21], inspired by the study of the development of the Boeing 787 aircraft,
provided the opportunity to radically reconsider the role of strategic information
systems (SIS): “…during the early decades of the 21st century [IT investment has
shifted] toward an IT-enabled global network organization structure” (ib., p. 91).
The fact that now IT is ‘everywhere’ and that IT leadership is ‘nowhere’ gives the
scenario a brand new complexion. The concept of business architecture comes to
the fore because ubiquitous IT enables and facilitates the establishment of strate-
gies, operations, and networks that cross traditional firm boundaries. Specifically:
the role of IT in corporations has shifted from supporting and being aligned with business
strategies to being an integral part of business strategies. As shown by the Boeing case,
strategic IT can’t be simply functionalized and positioned into traditional twentieth century
organization structures; IT now enables the emerging global network structures allowing
breakthrough products for breakthrough economics (ib., p. 101).

The swing toward a more praxis-oriented attitude in the IS discipline (as desired by
Ciborra) was the focus of another recent JSIS special issue, “Information systems
strategy as practice: micro strategy and strategizing for IS” [22], which makes a
detailed investigation of the subject of IS strategizing. The idea is to consider IS
strategizing as a practice based on a theoretical framework in which strategy praxis,
strategy practices, and strategy practitioners constitute its main elements [23]. This
literature originates in the managerial studies and adds another building block to the
debate between ‘strategy process’ and ‘strategy contents’. Strategy contents cluster
classical approaches such as the resource-based view of the firm [24] or the concept of
dynamic capabilities [25]. Strategy process focuses on how the steps to be followed
for strategic positioning and performance (strategy contents) should be put into
practice, taking account of the influence of internal politics, organizational culture,
and leadership styles [25]. Specifically, IS strategizing or micro-strategizing consists
of both deliberate and emergent patterns of actions where the role of organizational
sub-communities is considered particularly important [26]. Sub-communities, in fact,
are defined as groups of actors who share interests in particular domains of activity
contributing to the emergent strategy realization and collaborating with the wider
organizational community. In this context, the role of information systems can
become relevant because they can both mediate goal-oriented individuals and col-
laborative activities and lead, eventually, to practices generated by repeated patterns
in daily organizational work (technology-mediated practices).
Interestingly, Ciborra broached the notion of designing-in-action in Bricolage,
the third chapter of Labyrinths, to support the search for new strategic systems
[2, pp. 44–47]. This notion and practice is not too different from strategizing and is
still a valid contribution to the ‘strategy contents’ and ‘strategy process’ debate. In
fact, two main routes lead to innovation and competitive advantage: competence
cultivation (bricolage) and radical learning. Competence cultivation consists of
relying on local information and existing routines to gradually cope with new tasks
through learning-by-doing, incremental decision-making, and muddling through.
On the other hand, in radical learning both cognitive and organizational structures
are restructured by intentionally challenging and breaking down established rou-
tines, frames, and institutional arrangements. In both routes, the context is
restructured-in-action, design-in-action takes place and “new strategic information
and information systems will be generated, based on the unique, emerging world
view the designers and users are able to adopt” (ib. p. 46). So the competitive advantage is actually triggered by competitors’ difficulty in reproducing a unique setting.
Development, sustainability, and democratization The strategic importance of
local knowledge and practice (which, as we have just seen, was underscored by
Ciborra especially for innovative organizations) was highlighted in the special issue
of MIS Quarterly on “IS in Developing countries” (2007). The guest editors
summarize the results of a group of studies (which they call ‘Local Adaptation and
Cultivation of IS’): “This body of literature opposes the naïve idea that global-
ization is synonymous with cultural homogeneity and reasserts the crucial impor-
tance of understanding and valuing locally meaningful practices” [27, p. 320]. Two
of the special issue papers concord with this line of thinking. Puri [28] examines a
case in India where local knowledge is used to complement scientific knowledge in
a locally designed GIS database. In a process of ‘participatory mapping’ the deep
understanding of the communities about resources (land, water, vegetation)
enabled, for example, the design and mapping of the traditional water-harvesting
structures and, consequently, of the appropriate location for developmental inter-
ventions. Silva and Hirschheim [29] investigate the development of a hospital
information system in Guatemala. The participatory (and decentralized) approach
adopted generated enthusiasm in the formerly skeptical hospital personnel and
persuaded them to share know-how and concerns. When elections changed the
administrative authorities, the project was brought to a halt and institutionalization
stopped since the new government decided to resort to packages provided by aid
agencies. On this issue, some participants told the researchers explicitly that the
administrative system developed was “unique and [couldn’t] be replaced by a
packaged program” (ib., p. 343). In addition to this awareness, one of the relevant
findings (and contributions to IS literature) mentioned by the authors suggests that
the development of such strategic information systems (SIS) “can affect not only
processes and mechanisms of production and control but also can affect values and
beliefs. This is highly relevant as most SIS literature concentrates on processes and
competition with little emphasis on values, beliefs, and emotions” (ib., p. 350). As
noted earlier, building on both uniqueness of practical expertise and soft aspects
(such as emotions) led Ciborra to consider these as critical factors.
The notion of sustainability does not concern only institutionalization, as in the
case of the Guatemala project where discontinuities in the country’s political
leadership prevented it, but is used to address a vast array of issues that concern the
environment and the role of IS. MIS Quarterly dedicated a special issue,
Information Systems and Environmental Sustainability, to these topics in December
2013. The guest editors’ introductory paper [30] highlights two aspects close to
Ciborra’s view. First, just as Ciborra called for a new vision and approach to
research and for a higher consideration of marginal practices (Bricolage), the guest
editors also call for innovation within the academic community to give voice to the
emerging field of IS: “..researchers must not only work on the actual design of
future IS but also establish the ‘in-field’ impact of such systems… When con-
ventional approaches fail, organizations often implement solutions that loosen the
old shackles to enable the pursuit of new goals… We propose that MIS Quarterly
establish a new territory charged with promoting and publishing impactful green IS
research.” (ib., p. 1270). This quotation reminds the reader of one of Ciborra’s
suggestions (almost an oxymoron) to bolster incremental learning: “Establish sys-
tematic serendipity” [2, p. 51]. Second, Ciborra’s perception of the Internet was that
of a flexible infrastructure that emerged outside any strategic master plan and that
allows people to share knowledge in ways not even imagined by the textbooks
(ib. p. 13). Ten years later, Malhotra and co-authors see a powerful way for
advancing environmental sustainability [30, p. 1271] in the combination of the
‘Internet of people’ (which has changed the nature of communication between
people and organizations) with the ‘Internet of objects’ (ubiquitous networks
interconnected with sensors and sensitized objects).

5 Concluding Remarks

A fuller picture of how Ciborra’s work has been incorporated by the IS discipline
could have been drawn from the analysis of both the 693 citations of Labyrinths
(according to Google Scholar) and the complete set of special issues produced by
the AIS basket journals. Yet, the results of the preceding section have provided
sufficient evidence for the formulation of solid preliminary conclusions concerning
the importance of a process-oriented worldview in IS and organization studies.
First, the technology and, specifically, the IS debate is far from being resolved:
emerging grand challenges (e.g. sustainability) need to be addressed and scholars
are anchoring their work to increasingly explicit (and varied) ontological and
epistemological roots: Ciborra’s later work went the whole mile as he drew on
phenomenology to develop his research tenets on a range of issues. Second, the
Husserlian life-world seems to have become an inevitable trail for IS researchers to
follow, given that they now rank moods, feelings, and emotions as key factors in
gaining insights into the encounter (the intermingling, according to sociomaterial
literature) of human and non-human entities. Third, local practical expertise—in
which Ciborra was greatly interested thanks to its generative capability of
innovation—is now considered a key determinant not only for IS programs in
developing countries but also for transforming strategic IS ‘alignment’ into IS
‘strategizing’; organizational sub-communities of actors produce technology med-
iated practices that are an integral part of the strategizing process. Fourth,
designing-in-action and bricolage are seen increasingly as the best ways to respond
to the drifting of projects from original plans. In fact, IT shared leadership leverages
technology affordances to enable a decentralized negotiation between the political
and structural forces and the management objectives.
Ten years ago Ciborra’s original thinking led him to build his research according
to an ontologically and epistemologically coherent vision. That vision was some-
what undervalued by his mainstream contemporaries but not by the IS researchers
of today, to whom his key findings are still relevant.
References

1. Thompson, M.: Ontological shift or ontological drift? Reality claims, epistemological frameworks, and theory generation in organization studies. Acad. Manag. Rev. 36, 753–773 (2011)
2. Ciborra, C.: The Labyrinths of Information: Challenging the Wisdom of Systems. Oxford
University Press, Oxford (2002)
3. Orlikowski, W.J., Baroudi, J.J.: Studying information technology in organizations: research
approaches and assumptions. Inf. Syst. Res. 2(1), 1–28 (1991)
4. Chen, W., Hirschheim, R.: A paradigmatic and methodological examination of information
systems research from 1991 to 2001. Inf. Syst. J. 14(3), 197–235 (2004)
5. Paucar-Caceres, A., Wright, G.: Contemporary discourses in Information Systems Research:
Methodological inclusiveness in a sample of Information Systems Journals. Int. J. Inf.
Manage. 31(6), 593–598 (2011)
6. Bryant, A., Black, A., Land, F., Porra, J.: Information Systems history: What is history? What
is IS history? What IS history? and why even bother with history? J. Inf. Technol. 28(1), 1–17
(2013)
7. Crotty, M.: The Foundations of Social Research: Meaning and Perspective in the Research
Process, p. 176. Sage, London (1998)
8. Simon, H.A.: Administrative Behavior: A Study of Decision-Making Processes in
Administrative Organization, p. 327. Free Press, New York (1976)
9. Williamson, O.E.: The Economic Institutions of Capitalism: Firms, Markets, Relational
Contracting, vol. 54, no. 171, pp. xiv, 450. Free Press, New York (1985)
10. Argyris, C.: Reasoning, Learning, and Action: Individual and Organizational, vol. 4, no. 16. Jossey-Bass, San Francisco (1982)
11. Ciborra, C.: The platform organization: Recombining strategies, structures, and surprises.
Organ. Sci. 7(2), 103–118 (1996)
12. Latour, B.: Science in action: How to follow scientists and engineers through society. Harvard
University Press (1987)
13. Heidegger, M.: The question concerning technology. In: Farrell Krell, D. (ed.) Martin Heidegger Basic Writings, pp. 311–341. Routledge, London (1993)
14. Sawyer, S., Winter, S.J.: Special issue on futures for research on information
systems: Prometheus un-bound? J. Inf. Technol. 26(2), 95–98 (2011)
15. Orlikowski, W.J.: The sociomateriality of organisational life: considering technology in
management research. Cambridge J. Econ. 34(1), 125–141 (2010)
16. Schultze, U.: Performing embodied identity in virtual worlds. Eur. J. Inf. Syst. 23(1), 84–95
(2014)
17. Schultze, U., Orlikowski, W.J.: Research commentary—virtual worlds: a performative
perspective on globally distributed, immersive work. Inf. Syst. Res. 21(4), 810–821 (2010)
18. Dreyfus, H.L.: Being and power revisited. In: Milchman, A., Rosenberg, A., (eds.) Foucault
and Heidegger–Critical Encounters, pp. 30–54. University of Minnesota Press (2003)
19. Nandhakumar, J., Rossi, M., Talvinen, J.: The dynamics of contextual forces of ERP
implementation. J. Strateg. Inf. Syst. 14(2), 221–242 (2005)
20. Galliers, R.D., Jarvenpaa, S.L., Chan, Y.E., Lyytinen, K.: Strategic information systems:
reflections and prospectives. J. Strateg. Inf. Syst. 21(2), 85–90 (2012)
21. Nolan, R.L.: Ubiquitous IT: the case of the boeing 787 and implications for strategic IT
research. J. Strateg. Inf. Syst. 21(2), 91–102 (2012)
22. Peppard J., Galliers R.D., Thorogood A.: Information systems strategy as practice: Micro
strategy and strategizing for IS. J. Strategic Inf. Sys. 23(1), 1–10 (2014)
23. Jarzabkowski, P., Balogun, J., Seidl, D.: Strategizing: the challenges of a practice perspective.
Hum. relations 60(1), 5–27 (2007)
24. Barney, J.: Firm resources and sustained competitive advantage. J. Manage. 17(1), 99–120
(1991)
102 P. Depaoli et al.

25. Teece, D.J., Pisano, G., Shuen, A.: Dynamic capabilities and strategic management. Strateg.
Manag. J. 18(7), 509–533 (2008)
26. Henfridsson, O., Lind, M.: Information systems strategizing, organizational sub-communities,
and the emergence of a sustainability strategy. J. Strateg. Inf. Syst. 23(1), 11–28 (2014)
27. Walsham, G., Robey, D., Sahay, S.: Foreword: Special issue on information systems in
developing countries. MIS Q. 31(2), 317–326 (2007)
28. Puri, S.K.: Integrating scientific with indigenous knowledge: constructing knowledge alliances
for land management in india. MIS Q. 31(2), 355–379 (2007)
29. Silva, L., Hirschheim, R.: Fighting against windmills: strategic information systems and
organizational deep structures. MIS Q. 31(2), 327–354 (2007)
30. Malhotra, A., Melville, N.P., Watson, R.T.: Spurring impactful research on information
systems for environmental sustainability. MIS Q. 37(4), 1265–1274 (2013)
Collective Awareness Platform for Sustainability and Social Innovation (CAPS)
Understanding Them and Analysing Their Impacts

Antonella Passani, Francesca Spagnoli, Francesco Bellini, Alessandra Prampolini and Katja Firus
Abstract The paper describes the Collective Awareness Platforms for Sustainability
and Social Innovation (CAPS) domain from an "inside" perspective, as it is
based on the research work of a CAPS project, IA4SI (Impact Assessment
for Social Innovation). The paper first defines Digital Social Innovation as the
technologically enabled version of Social Innovation and describes CAPS projects
accordingly. It then presents the framework of the quali-quantitative methodology
developed by the IA4SI project for analysing the impact of CAPS projects. The
methodology considers four main areas of impact: social, economic, political and
environmental. Each area is articulated in several sub-categories, needed in order
to map a multi-dimensional and internally diversified domain such as CAPS.

Keywords CAPS · Digital social innovation · Social innovation · Socio-economic impact · Political and environmental impact assessment · European projects · EU-funded research · Methodology

A. Passani (✉) · A. Prampolini · K. Firus
T6 Ecosystems S.r.l., Rome, Italy
e-mail: a.passani@t-6.it
A. Prampolini
e-mail: a.prampolini@t-6.it
K. Firus
e-mail: k.firus@t-6.it
F. Spagnoli · F. Bellini
Eurokleis, Rome, Italy
e-mail: francesca.spagnoli@eurokleis.com
F. Bellini
e-mail: francesco.bellini@eurokleis.com

1 Collective Awareness Platforms for Sustainability and Social Innovation (CAPS)
The acronym CAPS stands for Collective Awareness Platforms for Sustainability
and Social Innovation. The European Commission (EC) used this acronym for the
first time in 2012, in the context of the Seventh Framework Programme for research,
to identify a new group of research projects and, to a certain extent, a new
research area.
The European Commission defines CAPS as follows:
“The Collective Awareness Platforms for Sustainability and Social Innovation
(CAPS) are ICT systems leveraging the emerging ‘network effect’ by combining
open online social media, distributed knowledge creation and data from real
environments (‘Internet of Things’) in order to create awareness of problems and
possible solutions requesting collective efforts, enabling new forms of social
innovation.
The Collective Awareness Platforms are expected to support environmentally
aware, grassroots processes and practices to share knowledge, to achieve changes in
lifestyle, production and consumption patterns, and to set up more participatory
democratic processes. Although there is consensus about the global span of the
sustainability problems that are affecting our current society, including the eco-
nomic models and the environment, there is little awareness of the role that each
and every one of us can play to ease such problems, in a grassroots manner”.1
With a first dedicated call (Call 10 of FP7, objective 5.5 of Work Programme
2013), the European Commission invested €19 million in 12 projects and
€500,000 in a study on "Social Innovation in the Digital Agenda". Three other
projects, funded under other programmes, were added to this domain as well,
because their research activity is highly relevant for CAPS. As a result, the
programme can be said to consist of 15 on-going projects in this area.
The 12 funded projects comprise seven research projects for grassroots experiments
and pilots, four support actions (including IA4SI) and one project dedicated to the
management of a seed fund for social innovation activities; the three additional
projects complete the set of 15. The CAPS domain is also included in the Horizon
2020 programme, with an investment of €36 million for the period 2014–2015.
As described in the following sections, the expectation is that the IA4SI
methodology can be used for future CAPS projects and, beyond them, for Digital
Social Innovation projects in general.
Collective Awareness Platforms can be seen as ICT-supported collaborations
of human and non-human actors, which enable and facilitate the production, sharing
and sense-making of information gathered through citizen engagement and through
sensors and the like [1].

1 http://ec.europa.eu/digital-agenda/en/collective-awareness-platforms-sustainability-and-social-innovation.
2 Social Innovation (SI) and Digital Social Innovation (DSI)
The term social innovation is composed of two words: "social" and "innovation".
Both are widely used in everyday language and often taken for granted
when, in fact, they are difficult to define in a non-tautological way. It is not trivial to
question the very nature of society [8] or to draw the boundaries between what is
social and what is, for example, economic or cultural. Similarly, the literature on
the political, economic and technological aspects of innovation is broad, and many
definitions of innovation are available [15]. The first step towards accurately defining
social innovation is to recognise the apparent ambiguity of the term: its definition may
vary according to the definitions attributed to the concepts "social" and "innovation".
It is useful, therefore, to consider the epistemologies behind the two terms in
the various definitions of social innovation currently available, so as to circumscribe
the realm of social innovation and to understand its boundaries.
Moreover, social innovation as a field of study is rather interdisciplinary; hence,
definitions and understandings are likely influenced by the various authors'
disciplines.
A useful starting point for examining the term is the definition proposed by
Murray et al. [13]. The authors define social innovation as "new products, services
or methods that tackle pressing and emerging social issues and, at the same time,
transform social interactions promoting new collaboration and relationships". In
this definition, the term "social" is used in two ways: it characterises the issues to be
solved (such as adaptation to climate change or the effects of an aging population on
society) and the methods used for solving such issues, which imply a modification
(of some sort) in social relationships. In this definition, social innovation
represents both product and process innovation: it generates a new
product or service while changing, at the same time, the way in which that
product or service is produced. It benefits society 'twice', that is, by proposing a
solution to a specific problem and by offering new social links and collaboration
opportunities. The authors do not identify a specific social category as the
protagonist of social innovation; the innovator can be a social entrepreneur, a
self-organised local community, an association, a company or a government.
Examples of social innovation include co-housing, the Grameen Bank,
eco-towns and car sharing. In terms of process innovation, the understanding of
social innovation is associated with terms such as participation, engagement,
empowerment, co-design, bottom-up and grassroots initiatives, and so forth.
The concept can be traced back even further, to the beginning of the
nineteenth century. In his paper, Godin [6] explains that the term social innovation
emerged after the French Revolution and, at that time, carried both a positive and a
negative connotation. The negative connotation saw social innovation as
synonymous with the radical socialism represented by thinkers such as Fourier,
Saint-Simon and Proudhon, and with calls for a drastic and fundamental change of the
social order. A more positive connotation linked social innovation to social reform and
social justice.
Over time, the concept of social innovation came to be used less frequently,
and the term "innovation" was more commonly attributed to technology. Social
innovation re-entered theoretical writings in the 1960s–1970s, and only in the last
decade or so has it attracted consistent interest among scholars. Here, social
innovation re-emerged as a term contrasting with technological innovation, a
so-called counter-concept [6]. In this view, social innovation indicates a call for
action, for more attention to the social aspects of innovation, which
have been perceived as neglected given the hegemonic role of technology. For this
reason, it is particularly interesting to see social innovation as a key concept in the
European Digital Agenda [5].
Given the historical background of the concept, it is worth investigating the
'value connotation' that the term seems to carry nowadays; it is evidently not
a neutral one. The concept of innovation does not seem to be problematized in the
context of social innovation. Innovation is seen in a positive light and tends to be
used as a synonym for "improvement" and "progress" when, on the contrary,
there are also innovations that have negative effects at the economic, social, political
or environmental level. In this sense, social innovation refers only to positive
innovation that, as in the definition proposed by Phills et al. [18:36], is meant to be
"more effective, efficient, sustainable, or just than existing solutions". This definition
is central for IA4SI: it informs some of its composite indices, as it traces a
clear pattern in terms of expected impacts.
To date, not much systematic investigation is readily available on social innovation,
and on digital social innovation in particular; the definition is still problematic,
and research on models, methodologies and tools for stimulating, supporting and
understanding social innovation is on-going [12–14, 19]. Some initial insights are
offered by Moulaert et al. [11], who analysed how different disciplines have
considered specific aspects of social innovation; BEPA [2] categorizes social
innovations according to their outputs; and Ilie and During [7], following a
post-structuralist approach, study social innovation through three discourses
around the term, namely the governmental, the entrepreneurial and the academic.
Most of the work conducted so far focuses on defining social innovation, analysing
the processes by which it emerges and flourishes, and mapping experiences of social
innovation world-wide. Attention is devoted to the description of concrete
experiences in order to abstract models for the replicability and scaling-up of
social innovation. Little has been done so far to analyse the results of social innovation
initiatives, to evaluate the benefits produced by publicly funded programmes, or to
compare the effects of social innovation projects with previous and alternative
models of tackling social issues.
3 CAPS, Social Innovation and Digital Social Innovation as a Research Field
The concept of social innovation is still nascent, and the different forms it can take
have not yet generated a robust way of analysing and measuring its impacts [3, 18].
We can use the lessons learned from this "sector" only in a limited way, as IA4SI
deals with international, pilot-based projects and not with entrepreneurial or
publicly driven initiatives [16]. Projects are here interpreted as temporary organizations
'to which resources are assigned to undertake a unique, novel and transient
endeavour managing the inherent uncertainty and need for integration in order to
deliver beneficial objectives of change' [20:7, 9]. A related topic is the localisation
of impacts, especially relevant for digital social innovations, which are expected to
produce benefits in different territorial contexts. It is relevant to examine whether,
and to what extent, online tools for social innovation enable transformation at the
local community level and, if so, how this happens [20].
Another focal point of investigation is the interdisciplinary nature of
social innovation and what it can mean, or achieve, in terms of collaboration among
different stakeholders. Social innovation initiatives can serve as a testing ground for
new collaborative processes and for instruments fostering such collaborations.
The IA4SI project wishes to contribute to the debate in the field by analysing the first
15 CAPS projects, their objectives, outputs and impacts.
To conclude this section, we can operationalize CAPS projects by interpreting
them as a sub-category of the wider concept of Digital Social Innovation. They
serve as the main target in drawing out the IA4SI methodology.
CAPS projects are ICT-enabled pilot initiatives that address pressing social and
sustainability issues by promoting the active participation of European citizens
and/or by relying on their capability of providing and sharing information. CAPS
projects are digital social innovation initiatives and, as such, are expected to propose
innovative solutions that should be more efficient, effective, just and sustainable
than available ones. CAPS initiatives are multidisciplinary in nature and most of
them have a relevant research component.
Considering the topics covered by on-going CAPS projects, the topics
suggested by the EU in the first call dedicated to CAPS, the categories used by the
Digital Social Innovation project2 for categorising European initiatives in the field,
and the categorisation of social innovation projects proposed by the Tepsie project
[3], it is possible to say that CAPS projects focus (or could focus in the future) on
the following topics:
• Energy and environment
• Social inclusion
• Participation and democracy
• Economy: production and consumption

2 www.digitalsocial.eu.
• Knowledge, science and information
• Rights
• Finance
• Culture and art
• Health and wellbeing
• Community creation, renewal and reinforcement
• Work and employment
• Neighbourhood regeneration and housing
Each of the on-going CAPS projects works on one or more of these topics. At the
present stage, none of them is active in the domain of "Neighbourhood regeneration
and housing", even though this topic is central in social innovation debates. The
"work and employment" and "culture and art" topics also seem not to be represented
in current CAPS activities but, considering future applications of the methodology,
IA4SI considered it sensible to cover these topics as well.
At the present stage it is neither possible nor useful to group the 15 CAPS
projects into sub-groups or to compare them, as they differ greatly from one another
in terms of research focus, social issue addressed, community engaged, ICT
instruments under development, and so forth. Describing the main features of each
project is beyond the scope of this paper3; the CAPS domain was introduced here in
order to better frame the IA4SI methodology described in the next sections.

3.1 IA4SI Impact Assessment Methodological Framework

The IA4SI methodology described in this section has been elaborated starting from
an extensive literature review on social innovation, digital social innovation,
impact assessment methods for these domains, and conceptually close domains such
as the third sector, development-related investments and online community assessments.
IA4SI builds on previous European projects in the field of impact assessment,
such as SEQUOIA,4 ERINA+5 and MAXICULTURE.6 Those projects
offered important lessons that have been incorporated into the IA4SI
methodology.

3 A description of each of the CAPS projects can be found at http://caps2020.eu/about-caps/caps-ict-workprogramme-2013/ and at https://ec.europa.eu/digital-agenda/en/caps-projects. Most of the projects started in October 2013 and will last 24 or 30 months.
4 For an overview of the SEQUOIA methodology and results, see [16]. The complete methodology is described in Monacciani, Navarra, Passani, Bellini (2011), and a practical approach to its usage is described in [10].
5 The ERINA+ methodology and related tools are described in Passani et al. (2013).
6 The MAXICULTURE methodology is described in Passani, Bellini, Spagnoli, Satolli, Debicki, Ioannidis, Crombie (2014).
The IA4SI methodology will be applied to on-going CAPS projects. Therefore, it
focuses on expected impacts and will describe, coherently with the definition of
impact provided by the International Association for Impact Assessment (IAIA),
"the difference between what would happen with the action and what would happen
without it".7 Nevertheless, the methodology can also be used once these projects
are completed; in synthesis, it is useful both in the on-going phase of a project
and in its ex-post phase.
Running an impact assessment means answering the question: "What difference
does a CAPS project make at the socio-economic, environmental and political
levels?" This will be done by mapping the inputs, outputs, outcomes and expected
impacts of CAPS projects; in other words, by applying the value chain approach,
also known as the logic model or logic chain [4].
7 Available at http://www.iaia.org/publicdocuments/special-publications/What.%20is%20IA_web.pdf.
The IA4SI methodology finds its foundations in cost-benefit analysis, multi-criteria
analysis and social media ROI. For analysing changes in CAPS users' ways of
thinking and behaving, stated preference and revealed preference methods will be
used, while the environmental impact assessment will be informed by the Ecological
Footprint methodology and the Global Reporting Initiative approach. It is a
quali-quantitative, multi-stakeholder methodology, which engages project coordinators,
their partners, project users and European citizens. The assessment will be
conducted using 8 synthetic indices. Four of them relate to specific areas of impact
and their sub-categories and are visualised in Fig. 1; these can be called vertical
indices. Each vertical index is composed of other indices, each corresponding to a
specific subcategory; for example, the synthetic index for social impact is composed
of 6 indices, one for each subcategory, such as impact on community building and
empowerment, impact on information, and so on.
Besides the four vertical indices, the IA4SI methodology includes 4 transversal
indices that provide information about the process followed by CAPS projects
in determining their impacts. In other words, the transversal indices relate to
the attributes of the innovation developed. The four indices are efficiency, effectiveness,
sustainability and fairness, inspired by Phills et al. [18:36], who describe
social innovation as a solution meant to be "more effective, efficient, sustainable,
or just than existing solutions" (Fig. 1).
Fig. 1 IA4SI vertical indices (Source: [17])
The social impact index considers, first of all, the capability of CAPS projects to
create and/or enlarge and empower communities; special attention is dedicated to
understanding the links and interdependencies between online communities and
local communities. Access to information, and new instruments for navigating,
interpreting and critically evaluating the quality of information, are considered key
aspects in the development of new solutions for social needs. IA4SI will then
investigate CAPS' capability to influence users' and citizens' ways of thinking and
acting by examining the changes experienced by CAPS users. Under the social
impacts, it will also investigate each project's capability to create new job positions
and to foster employment in general, as well as possible impacts in terms of training
and human capital development. The impact of CAPS on academia, i.e. their
scientific impact through publications and IPR development, will also be considered.
The social impact index is articulated in the following six sub-categories:
• Impact on community building and empowerment
• Impact on information
• Impact on ways of thinking and behaviours
• Impact on education and human capital
• Impact on science and academia
• Impact on employment
By aggregating indicators included in different dimensions and sub-dimensions,
it will also be possible to investigate CAPS' impact on social capital and on social
inclusion: two dimensions that the IA4SI team considers extremely relevant in this
context.
Under the political impact dimension, the methodology will evaluate CAPS'
capability of fostering users' participation in civil society organisations, of encouraging
them to get active for their community, and of developing new forms of collaboration.
Similarly, it will consider the impact on users' political participation and will
evaluate each project's capability of influencing policy makers and institutions.
The index is articulated in the following sub-categories:
• Impact on civic and political participation
• Impact on policies and institutions
With reference to economic impact, the IA4SI methodology focuses on microeconomic
impacts, especially in terms of positive economic results for each partner of
the CAPS project consortia, for end-users and for the general stakeholders of the
projects. Economic impact has been articulated into three subcategories:
• Users' economic empowerment
• The economic value generated by the project
• Impact on ICT-driven innovation
Considering now environmental impacts, the Digital Agenda for Europe 2020
explicitly states that CAPS should provide "societally, environmentally and
economically sustainable approaches and solutions to tackle societal challenges",
and among the examples of CAPS targets we find "comparing individual lifestyles
against some ecological/environmental benchmark" and "promoting sustainable
and collaborative consumption, as a basis for an effective Low-Carbon economy".
CAPS' impacts on the environment are bound to be quite similar in nature
to those of social media and computer-mediated social networks, and hence to show
their effects along two main dimensions:
• the environmental impact of the projects themselves, and
• the impact on users' environmental behaviour.
IA4SI has identified five areas of environmental impact relevant for CAPS
projects:
• Greenhouse gas emissions (including energy efficiency and the production of
energy from renewable sources)
• Air pollution related to transport
• Waste
• Sustainable consumption of goods and services
• Biodiversity

3.2 The IA4SI Impact Assessment Process

The analysis of CAPS projects' impacts will take advantage of two main online tools
developed by the IA4SI project: the "Self-assessment toolkit" (SAT) and the
"User Data Gathering Interface" (UDGI). The first is dedicated to CAPS project
coordinators and partners, the second to CAPS users. By entering information in
the SAT, CAPS project coordinators and partners will follow a six-step process
that leads them to the assessment results.
1. First, CAPS representatives will describe the inputs of their project,
including the budget, the human resources available at project level, and the
pre-existing technological and non-technological elements the project builds
on. As part of this step, project representatives will describe their zero
scenario and the social issues they are addressing.
2. Second, they will select their stakeholders and end-users, thereby
describing "who" will benefit from the project outputs.
3. Third, they will describe their outputs: technological ones, and non-technological
ones such as publications, licences, patents, etc.
4. Then they will select the impact dimensions that are most relevant for them. The
IA4SI methodology is modular, so that each project can personalise it. As an
example, a project can select impact on employment and impact on information
as relevant and exclude impact on education and human capital because its
outputs and activities do not lead to this kind of impact.
5. At this point the SAT will show all the questions related to the impact
dimensions selected by the project representatives. The data requested are both
qualitative and quantitative.
6. The data entered by CAPS representatives will be elaborated in real time by the
SAT, which will provide them with an impact assessment report. In a graphic, easy-to-
understand way, project representatives will be able to visualise their impacts by
comparing their performance with a set of benchmarks. Each project will be able to
see the score obtained on the 8 IA4SI complex indices (social, economic, environmental
and political impacts, plus efficiency, effectiveness, sustainability and fairness)
and to explore the results achieved on the composing indicators.
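To make the six steps more concrete, the following minimal sketch (in Python) shows one possible way of representing a project's SAT entries for steps 1–4. All class, field and value names are illustrative assumptions made for this paper, not the data model of the actual toolkit.

from dataclasses import dataclass, field

# Hypothetical, simplified record of what a project enters in SAT steps 1-4.
@dataclass
class SATEntry:
    budget_eur: float                 # step 1: project inputs
    human_resources: int              # step 1: people available at project level
    zero_scenario: str                # step 1: what would happen without the project
    stakeholders: list = field(default_factory=list)         # step 2: who benefits
    outputs: list = field(default_factory=list)              # step 3: tech. and non-tech. outputs
    selected_dimensions: list = field(default_factory=list)  # step 4: modular choice of impacts

entry = SATEntry(
    budget_eur=1_500_000,
    human_resources=12,
    zero_scenario="No collective platform addressing the targeted social issue",
    stakeholders=["citizens", "municipalities"],
    outputs=["online platform", "2 publications"],
    selected_dimensions=["impact on information", "impact on employment"],
)

Steps 5 and 6 would then generate the relevant questions and elaborate the entered data into the impact assessment report.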
In parallel, CAPS users will be invited to fill in the UDGI, which looks like an
online questionnaire and investigates the benefits of CAPS from the point of view of
their users. The information gathered by the UDGI will appear in the SAT: each
CAPS project will be able to see the opinions of its users in an aggregated,
anonymous way, and to compare the results of its self-assessment with the point of
view of its users.
The IA4SI team will use all the gathered data to develop two impact assessment
reports: one will include the assessment of each CAPS project, and one will analyse
the data at the aggregated, domain level. Besides this, a set of best practices will be
identified and further analysed using a case-study approach.
As mentioned, each complex index is composed of several indicators, and the
data have different measurement units, such as monetary values, years, yes/no
answers, relative values, 1–6 point Likert scales, etc. Clearly, the data need to be
treated before their aggregation into indices. Indeed, the final goal of the IA4SI
methodology is to synthesize the vertical (per category or subcategory) and transversal
impacts into indices expressed on a 0–1000 scale, in order to make project
performances comparable.
Before doing so, the indicators composing the complex indices will be normalized
using a min-max approach (the normalization is performed by subtracting the
minimum value and dividing by the range of the indicator values). If extreme
values or outliers could distort the transformed indicator, statistical techniques can
neutralise these effects. Having normalised the indicators on a 0–1000 scale, it
is possible to calculate the aggregated index for each impact subcategory simply by
taking the arithmetic mean of those indicators. Recursively, in the same way, it is
possible to pass from subcategory impact indices to impact area indices. The possibility
of attributing different weights to the various indicators and indices is under analysis;
this topic will be discussed with CAPS project representatives, together with the
benchmark system that is under development at the time of writing.
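As a minimal sketch of the normalisation and aggregation just described (in Python, assuming the equal weights currently in use, since the weighting scheme is still under analysis), with indicator names and values invented purely for illustration:

def min_max_normalise(values, scale=1000.0):
    """Rescale raw indicator values onto the 0-1000 scale (min-max normalisation)."""
    lo, hi = min(values), max(values)
    if hi == lo:                        # degenerate case: all projects score the same
        return [scale / 2.0 for _ in values]
    return [scale * (v - lo) / (hi - lo) for v in values]

def index_of(scores):
    """Aggregate already-normalised scores into an index via the arithmetic mean."""
    return sum(scores) / len(scores)

# Invented example: two indicators of one subcategory, observed on three projects.
raw = {"indicator_a": [2.0, 5.0, 9.0], "indicator_b": [0.1, 0.4, 0.2]}
normalised = {name: min_max_normalise(vals) for name, vals in raw.items()}
# One subcategory index per project; area indices are computed recursively the same way.
subcategory_index = [index_of(per_project) for per_project in zip(*normalised.values())]

With this scheme, a project's subcategory index is simply the mean of its normalised indicator scores, and impact area indices are obtained by applying the same mean one level up.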

4 Conclusions

The methodology presented in this paper constitutes a first draft that will be
tested by CAPS projects from November 2014 to the first months of 2015; the
testing coincides with the first data-gathering phase. The analyses at project level
and at CAPS domain level will be available starting from August 2015. In the
context of this paper it was not possible to describe the indicators and variables that
constitute each index, nor to show the formulas that will be applied or the analyses
and visualisations that will be offered by the IA4SI toolkit. All these elements will
be the focus of future papers; in the meantime, more information about the IA4SI
project, its methodology and its development is available at www.IA4SI.eu, where
a full description of the methodology can be found in the download section [17].

Acknowledgment This work was supported by the European Commission's Framework
Programme 7 [ICT-61125]. The authors are grateful to Shenja Van Der Graaf, project coordinator
of IA4SI from iMinds, to Wim Vannobberghen, Lizzy Bleumers and Katriina Kilpi, also from
iMinds, and to Marina Klitsi, from the IA4SI partner ATC, for their support and constructive
collaboration throughout the development of the methodology and the execution of the project.

References

1. Arniani, M., Badii, A., De Liddo, A., Georgi, S., Passani, A., Piccolo, L.S.G., Teli, M.: Collective Awareness Platforms for Sustainability and Social Innovation: An Introduction (2014)
2. BEPA: Empowering People, Driving Change: Social Innovation in the European Union. Publications Office of the European Union, Luxembourg (2011)
3. Bund, W., Hubrich, K., Schmitz, B., Mildenberger, G., Krlev, G.: Blueprint of Social Innovation Metrics: Contributions to an Understanding of Opportunities and Challenges of Social Innovation Measurement. Deliverable of the project TEPSIE, EU FP7 (2013). http://www.tepsie.eu/index.php/publications
4. Epstein, M.J., McFarlan, F.W.: Measuring the efficiency and effectiveness of a nonprofit's performance. Strategic Finance 93(4), 27–34 (2011). http://www.imanet.org/PDFs/Public/SF/2011_10/10_2011_epstein.pdf. Accessed 15 March 2014
5. European Commission: Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions: Europe 2020 Flagship Initiative Innovation Union. http://ec.europa.eu/research/innovation-union/pdf/innovation-union-communication_en.pdf (2010). Accessed 11 September 2013
6. Godin, B.: Social Innovation: Utopias of Innovation from c.1830 to the Present. Working Paper No. 11, Project on the Intellectual History of Innovation. INRS, Montréal (2012)
7. Ilie, E.G., During, R.: An Analysis of Social Innovation Discourses in Europe: Concepts and Strategies of Social Innovation. Alterra (2012)
8. Latour, B.: Reassembling the Social: An Introduction to Actor-Network-Theory. Oxford University Press, Oxford (2005)
9. Meyerson, D., Weick, K.E., Kramer, R.M.: Swift trust and temporary groups. In: Kramer, R.M., Tyler, T.R. (eds.) Trust in Organizations: Frontiers of Theory and Research, pp. 166–195. Sage, Thousand Oaks (1996)
10. Monacciani, F., Passani, A., Bellini, F., Debicki, M.: Deliverable D3.3b—SEQUOIA Self-Assessment How-To Guide. A deliverable of the SEQUOIA project (2012). http://www.lse.ac.uk/media@lse/research/SEQUOIA/SEQUOIA_D3.3b_final_modif_md_v2.pdf
11. Moulaert, F., Martinelli, F., Swyngedouw, E., Gonzalez, S.: Towards alternative model(s) of local innovation. Urban Studies 42(11), 1969–1990 (2005). http://usj.sagepub.com/cgi/doi/10.1080/00420980500279893. Accessed 5 October 2013
12. Mulgan, G., Tucker, S., Ali, R., Sanders, B.: Social Innovation: What It Is, Why It Matters and How It Can Be Accelerated. Skoll Centre for Social Entrepreneurship, University of Oxford, Oxford (2007)
13. Murray, R., Caulier-Grice, J., Mulgan, G.: The Open Book of Social Innovation. Young Foundation/NESTA, London (2010)
14. Murray, R., Mulgan, G., Caulier-Grice, J.: How to Innovate: The Tools for Social Innovation. NESTA/The Young Foundation. http://www.nesta.org.uk/sites/default/files/the_open_book_of_social_innovation.pdf (2010)
15. OECD, Eurostat: Oslo Manual: Guidelines for Collecting and Interpreting Innovation Data (2005)
16. Passani, A., Monacciani, F., Van Der Graaf, S., Spagnoli, F., Bellini, F., Debicki, M., Dini, P.: SEQUOIA: a methodology for the socio-economic impact assessment of Software-as-a-Service and Internet of Services research projects. Res. Eval. 23, 133–149 (2014)
17. Passani, A., Spagnoli, F., Prampolini, A., Firus, K., Van Der Graaf, S., Vanobberghen, W.: IA4SI Methodological Framework—First Version. A deliverable of the project IA4SI—Impact Assessment for Social Innovation, European Commission, 7th Framework Programme (2014)
18. Phills, J.A., Deiglmeier, K., Miller, D.T.: Rediscovering social innovation. Stanford Social Innovation Review, Fall 2008. Leland Stanford Jr. University (2008)
19. The Young Foundation: The Young Foundation and the Web: Digital Social Innovation. Working paper (2010)
20. Turner, J.R., Müller, R.: On the nature of the project as a temporary organization. Int. J. Project Manage. 21(1), 1–8 (2003)
Business Model in the IS Discipline:
A Review and Synthesis of the Literature

G. Pozzi, F. Pigni, C. Vitari, G. Buonanno and E. Raguseo

Abstract Although the Business Model (BM) concept provides a convenient unit
of analysis in business practice, BM research in the Information Systems
(IS) field is marked by blurriness and divergences in its structure. With this paper we
provide a clarification of the BM concept and update Al-Debei and Avison's [1]
analysis of the BM literature. Using a structured methodology, we review the titles
and abstracts of 108 articles from the IS literature and examine a significant subset
of 49 articles. Our work contributes, first, to formalizing the concept of BM as
instantiated in the IS domain and organizes BM studies around two different frameworks.
Second, it highlights the BM research streams and their current states of the art.
Last, it discusses the current limitations of BM studies and offers a basis for
future research.

Keywords Business model · Literature review · Information systems

G. Pozzi (✉) · G. Buonanno
LIUC—Università Cattaneo, Castellanza, Italy
e-mail: gpozzi@liuc.it
G. Buonanno
e-mail: buonanno@liuc.it
F. Pigni · C. Vitari · E. Raguseo
Grenoble Ecole de Management, Grenoble, France
e-mail: federico.pigni@grenoble-em.com
C. Vitari
e-mail: claudio.vitari@grenoble-em.com
E. Raguseo
e-mail: elisabetta.raguseo@grenoble-em.com


1 Introduction

A BM represents the core business concept of a company: it depicts the logic of the
company and outlines how the company creates and captures value [1–4]. The
BM concept established itself during the Internet boom, when traditional
firms transformed themselves into digital ones with the rise of the commercial use
of modern information and communication technologies (ICT). Researchers agree
that interest in the BM concept in the IS field has grown ever since. Although the
BM concept is considered applicable to all businesses in any sector [2], the majority
of BM research in the IS field concerns the software industry and
application service and infrastructure providers [5–10], as well as online news,
advertising and social media BMs [11, 12].
The BM concept appears to provide a convenient unit of analysis in business
practice; accordingly, in recent years we have observed an increasing number of
publications concerning it. The origins of the BM concept in diverse disciplines
such as eBusiness and eCommerce, IS, strategy, business management,
economics, and technology [13, 14] contribute to the blurriness of the structure of
BM research. It is interesting to notice that the BM concept and its associated
research are still considered young and new, despite the concept's appearance in
scholarly journals for almost 20 years. This paper is therefore motivated by the need
for a clarification of the BM concept in the IS domain. With this paper we answer
the following research question: "What is the current understanding of the BM
concept?". Our work updates Al-Debei and Avison's [1] review of the BM literature,
in which they clarify the BM concept, present a comprehensive conceptual
framework, and illustrate and discuss the BM's compositional facets, providing a
common and leveraged understanding of the concept. The authors [1] define the
BM as "an abstract representation of an organization, of all core interrelated
architectural, co-operational and financial arrangements designed and developed
by an organization presently and in the future, as well all core products and/or
services the organization offers, or will offer, based on these arrangements that are
needed to achieve its strategic goals and objectives" [1].
The paper is structured as follows. In the next section we describe the research
methods employed. Next, we present the literature review through concept matrices
and discuss it around two different criteria. Before presenting the conclusions, we
discuss the contributions and limitations of our work, and future research
directions.

2 Research Methodology

To select the relevant papers within the scope of the literature review, we followed
the methodology proposed by [15]. We performed an electronic search on the keyword
"business model(s)" included in the title or in the abstract of articles in the
chosen time period (from January 1st 2009 to June 1st 2014), spanning leading
journals in the IS discipline (this criterion has been used in similar previous work,
e.g. [16]). As a first step we selected "A+" and "A" journals, according to the
ranking proposed by [17]. The journals selected were MIS Quarterly, Information
Systems Research, Journal of MIS, European Journal of Information Systems,
Information Systems Journal, Journal of the Association for Information Systems,
and Journal of Strategic Information Systems. The following databases were used to
accelerate the identification of relevant articles: ProQuest, EBSCO, ScienceDirect,
and the JSTOR archive. In an effort to broaden the search beyond the original set of
journals, we also examined cited works of potential interest in selected IS conference
proceedings [18], such as ICIS, AMCIS and HICSS, as suggested by [19, 20]. We
collected a total of 108 articles for the defined IS domain.
To evaluate whether inclusion of an article was warranted in the literature
review, at least one of the following criteria had to be satisfied:
– The article concerns, or is relevant to, the BM concept in IS;
– The article describes or identifies BM components [21];
– The article, while concerned with other research questions and topics, refers,
directly or indirectly, to the BM concept.
Following the above criteria, from the 108 articles we selected 49 papers for the
analysis. Our literature review is organized around two different criteria: one based
on the Unified BM Conceptual Framework [1] and the other based on the BM
Concept Hierarchy [4]. We then compile concept matrices to present the results of
the analysis.
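The selection logic just described can be summarised, purely as an illustration, by the following filter (in Python); the sample records and field names are assumptions of this sketch, not part of the review protocol:

from datetime import date

# Hypothetical article records; in practice candidates came from ProQuest, EBSCO, etc.
articles = [
    {"title": "Exploring business model change", "abstract": "...",
     "journal": "MIS Quarterly", "published": date(2012, 3, 1)},
    {"title": "IT governance mechanisms", "abstract": "No keyword match here.",
     "journal": "Information Systems Research", "published": date(2010, 6, 1)},
]

def in_scope(article, start=date(2009, 1, 1), end=date(2014, 6, 1)):
    """Keyword in title or abstract, within the review's time window."""
    text = (article["title"] + " " + article["abstract"]).lower()
    # The substring "business model" also matches the plural "business models".
    return "business model" in text and start <= article["published"] <= end

candidates = [a for a in articles if in_scope(a)]  # this step yielded 108 articles in the review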
Using the Unified BM Conceptual Framework [1], we aim to define the BM
concept and the different BM components. The Unified BM Conceptual Framework
[1] defines the BM concept comprehensively, highlighting the major facets and
aspects related to the concept, and revealing important inter-relationships. The
framework comprises four fundamental aspects. First, it defines the BM's primary
dimensions: value proposition (VP), value network (VN), value architecture (VA)
and value finance (VF), forming a complete ontological structure of the concept.
Second, the framework organizes the BM features, also called modeling principles,
as guidelines that direct the modeling course of action of BMs. Third, it explains the
BM's reach, as the BM is seen as an intermediate layer between business strategy and
ICT-enabled business processes. Fourth, the framework [1] explores three major
functions of the BM within digital organizations, to shed light on the practical
meaning of the concept.
Using the BM Concept Hierarchy [4], we aim to identify the different BM types
and BM instances described in the selected literature. The relevant literature was
also classified into three categories, according to the BM Concept Hierarchy presented
by [4]. In the literature, the BM expression can stand for a BM definition or a
definition of BM components, for specific BM types (e.g. the freemium BM [22]),
or for concrete real-world instances of a BM (e.g. the Kodak BM [23]). The BM
Concept category includes authors describing the BM as an abstract, overarching
concept that can describe all real-world businesses; authors in this category
substantiate the conceptual aspect, and the category includes the definition of
what a BM is and of what belongs in it. The BM Types category includes authors
describing BM patterns, generic but having specific common characteristics, as well
as BMs belonging to specific industries. The BM Instance category includes authors
that describe real-world BMs.
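By way of illustration, the three categories can be encoded as non-exclusive flags, since an article may fall into more than one; the small Python sketch below is our own illustrative encoding, not part of [4]:

from enum import Flag, auto

class BMCategory(Flag):
    CONCEPT = auto()    # defines what a BM is and what belongs in it
    TYPE = auto()       # generic patterns, e.g. the freemium BM [22]
    INSTANCE = auto()   # concrete real-world BMs, e.g. the Kodak BM [23]

# Example classification of three of the reviewed works.
classification = {
    "[1]": BMCategory.CONCEPT,
    "[22]": BMCategory.TYPE,
    "[23]": BMCategory.INSTANCE,
}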
The next section presents the results of the analysis of the literature on the basis
of the two selected criteria.

3 Synthesis of the Business Model Literature Review

3.1 Unified Business Model Conceptual Framework

The main contribution of scholars to the clarification of the BM concept concerns
the BM structure. The VP dimension implies the description of value creation.
Researchers highlight (1) the description of the products/services offered for the
targeted market segment, and (2) the description of the value generated for the
company's stakeholders. [3, 42] link to the VP the concept of competitive advantage,
as a synonym for long-term sustainability, in terms of differentiating from
competitors and maintaining strategic business growth. According to [41], the selling
proposition and the product portfolio offered in the VP define the business strategy.
Researchers seem to agree on the VA dimension, identifying (1) activities and
processes that allow the development of the product or service offered;
(2) resources, tangible, intangible and human; and (3) competences, such as the
expertise, abilities and skills necessary to execute the company's BM. These
components are seen as internal capabilities of the firm. Only [7] include the
business's external supplier network in the VA dimension. Activities and resources
are also called the value configuration [9, 11, 28]. A few researchers [7, 11, 26, 30, 41]
separate technology as a component of the VA and define it as a description of the
technological architecture, of the service platform and of the development environment.
Three components form the VN dimension: (1) infrastructure management, which
includes the corporate agreements with other companies and technological partners
that are necessary to create value; (2) the customer interface, which identifies the
relationships a company establishes with its customer segments; and (3) the
communication flow [3, 52], which identifies the way a company reaches customers
to deliver value.
VF represents the financial aspects of the BM concept. Researchers identify the
cost and revenue models, which sum up the monetary consequences of the means
employed in the BM and the way a company makes money through revenue flows.
The pricing model indicates the pricing of the product and service offered [3, 11,
26, 41, 42]; the distribution model indicates all the investments, costs and revenues
that are shared among participants [3, 42]; the investment and funding source model
indicates the sources of the capital to operate the BM [3, 26]; and the profit model
results from the pricing, revenue and cost models [3].
While the BM's characteristic of being conceptual has been considered in the
contributions of several authors, the concept matrix in Table 1 highlights that the
other BM properties identified, dynamism and granularity [2, 26, 33, 38], are less
accepted in the IS BM literature.
The functions that see the BM as an intermediate layer and as an alignment
instrument between business strategy and business processes are often mentioned
by IS researchers [33, 41]. In this sense, the BM concept is independent of both
business strategy and business processes. On the one hand, IS researchers agree
that the BM derives from the firm's strategic choices and is a reflection of the firm's
business strategy itself [2, 39, 45, 52]. On the other, the BM sets the basis for the
design of the firm's business processes. The BM is also a powerful tool to align IT
and business processes in a firm, in order to achieve the strategic goals set [52].
The BM functions that identify the BM as an interceding framework and as an
asset of knowledge capital are not further considered by authors in the IS domain.

3.2 Business Model Concept Hierarchy

The literature categorized in the BM types column highlights different BM patterns.
A multi-sided platform BM brings together two or more distinct but interdependent
groups of customers, facilitating the interaction between groups and creating
value among them. A particular type of multi-sided platform BM is the
freemium BM, where at least one specific typology of customer is able to benefit
continuously from a free-of-charge offer. This characteristic mostly
affects the VF component of the BM: non-paying customers are financed by another
part of the BM or by the "premium" customer segment [26, 32, 52, 61]. In the software
industry, this BM can be (a) feature-limited, (b) time-limited, or (c) uniform
seeding [61]. Advertising is a well-established source of revenue [24, 26, 32]. The
free-based-on-advertising BM is a particular form of the multi-sided platform pattern,
where one side is designed to offer users free content, and the other side generates
revenue by selling space to advertisers. In terms of revenue model, advertisers
can be charged based on user actions (the so-called performance-based model) or
based on the level of exposure, regardless of ad effectiveness [32]. The newspaper
publishing industry has experimented with several BM revenue models: the subscription
model, the advertising model, the transaction model, and the bundled model [12].
In the literature analyzed, the software industry is the most considered in the IS
domain [25, 41, 62]. We find definitions of the software as a service (SaaS) BM and
the platform as a service (PaaS) BM, where the VP includes the provision of systems,
IT infrastructure and consultancy to public and private bodies that facilitate better
communication and enable business process improvement and time and money
savings [5, 8, 10, 40]. Real-world BMs, such as the Kodak BM [23], the Google BM
[60], the Alpha BM [40] and the Sedo Domain Parking BM [59], fall in the BM
instances column (Table 2).
Table 1 Unified business model conceptual framework concept matrix. The matrix marks, for each of the 34 reviewed works addressing the framework (references [2, 3, 7–9, 11, 24–51]), which elements it covers: the value dimensions (value proposition, value architecture, value network, value finance), the BM features (conceptual, multi-level, dynamic, granular, coherent) and the BM functions (intermediate layer, alignment instrument, interceding framework, knowledge capital)

Table 2 Business model concept hierarchy concept matrix. The matrix classifies each of the 49 reviewed articles as addressing the BM concept, a BM type, and/or a BM instance
4 Discussion on the Business Model

The analysis of the selected literature shows the existence of four main research
streams. The first comprises a flow view of the BM, covering the process of value
exchange in a business [26, 34, 35, 46]. The second focuses on the constitutive
characteristics of the BM and on their dependencies and interdependencies [42, 44].
Our examination reveals that researchers agree on describing BM elements as
constitutive sub-parts that offer a structured approach for standardized description,
analysis and comparison. Notwithstanding the different nomenclatures and arrangements
of BM components, we consider that the framework of [1] best represents the
state of the art of this research stream. Filling one of the major BM concept gaps
highlighted by [2], [42] analyze the dependencies and interdependencies that exist
between business model components; their analysis shows that almost every BM
component is interconnected with the others, making the relations between BM
components structural and undisputed. The third stream focuses on BM generation,
design, implementation and evaluation methods that allow the development and
correct management of a BM instance for a specific business sector [26, 27, 41, 52].
The current state of the art shows agreement among researchers, who indicate in
several studies the steps and modalities for BM development. Concerning BM
management and evaluation, researchers agree on the use of a measurement system
based on key performance indicators (KPIs) to align the BM with operational results.
This research stream also highlights the importance of the BM in the IS field; please
refer to the next section for a more complete discussion. The last research stream
focuses on the adoption and dynamics of the BM concept in a specific industry or
business sector. Examples of this stream can be found, for instance, in the contributions
of [37, 50, 63], which show that IS researchers are interested mainly in the
digital industry.
We note that the BM concept is mainly represented textually as far as its
definition and component description are concerned. Indeed, in our analysis, we
found four recurring forms of BM representation (BMR). A BMR is generally a
framework for representing, even graphically, the model of a specific business.
(1) The STOF framework [53] highlights four domains (service, technology,
organization and finance) that generate value for business stakeholders. STOF BM
components can easily be associated with and/or included in the four components
of the [1] framework, given the thematic similarity of their meaning.
(2) The e3-value [64, 65] identifies actors and the value exchanges that occur
among these actors; these value exchanges are valued financially to understand the
economic performance each actor in the network is likely to have. The STOF
framework and the e3-value representation are used in the first identified BM
research stream. (3) The BM Canvas, or BM Ontology [52], serves as an extensive
meta-model with a wide scope of applications. It is used for business modeling and
business process structuring. The focus is on the VP, as the core of the BM:
the right side of the canvas addresses the client perspective and the revenue model,
while the left side addresses the activities, partners and cost structure. (4) The Unified
BM Conceptual Framework [1] presents the BM concept comprehensively, with
components, characteristics and functions. The BM Canvas and the Unified BM
Conceptual Framework are used in the second identified BM research stream. Other
BMRs can be found in specific industries' domains, such as those proposed by
[37, 51] and [7]. Derived from the BM Canvas, other BMRs that can be found in the
literature analyzed include the examples by [9, 40, 50, 63]. Although we believe that
all the BMRs encountered in the literature are appropriate for supporting the
understanding of the BM and the value constellation, we observe that there is no
standard, widely diffused, unique BMR in use in the IS domain.

4.1 Why Is It Important to Discuss the BM in IS?

The BM concept helps increase mutual understanding and integration between
the business strategy and IS domains [1, 2, 4, 52, 54]. The BM is able to create a
common language, helping the diffusion of a shared comprehension. Understanding a
company's BM facilitates and improves the choices regarding the IS/ICT infrastructure,
its application portfolio, and its role and structure. The BM helps in defining a
company's goals and facilitates requirements engineering, as the IS/ICT infrastructure
has to be aligned with those goals and with the business processes. The BM concept
helps to identify the indicators of the executive IS for monitoring the strategy, based
on the financial, customer, internal business, and innovation and learning perspectives.
Through the BM concept, entrepreneurs should be able to answer the questions:
"Which technology infrastructure is required and crucial to the success of my
business model?", "How can IT support the processes and workflows required by the
BM?", and "What information flows, processes, and workflows does my BM
require?" [52].
IS research can positively impact the discipline of strategic planning, validating
conceptual frameworks from design thinking with objects and from socio-technical
systems that can improve strategic planning outcomes [66]. Design process
techniques and methodologies, such as ideation, customer and user insights, visual
thinking, prototyping, storytelling and scenarios, could significantly improve an
organization's responses to strategic questions [52, 54]. IS can direct research in
computer-aided design (CAD) towards assisting the process of designing strategic
management objects, such as the BM [2, 4, 52, 54]. Presenting the Business Model
Toolbox, [52] state that, through BM CAD assistance, entrepreneurs are able to
create, store, manipulate and track BMs, enabling deep comprehensive analysis,
remote collaboration and quick simulations. The BM presents views of the business
logic underlying the entity's existence that meet the needs of different types of
users, such as the firm's stakeholders, the firm's internal resources, and external third
parties. Among these users, IS developers, as a subset of managers and
decision-makers, require a detailed depiction of the business that facilitates systems
requirements engineering, knowledge management, and workflow and process goal
definition [4].

The BM concept is closely related to IT. The increasing appearance of the BM concept in the literature from the early 2000s, noticed by researchers [1, 2, 4, 67], was mainly caused by the growing usage in business activities of modern ICT based on internet infrastructure. As a result of this change, technology was able to shape and create digital BMs [1, 55].
IS plays a vital role in BM performance measurement [2, 67]. IS support the monitoring of key performance indicators for each BM component, and thus IS can support the adaptation of the BM components by enabling real-time support [33, 41]. In this sense, several studies present the usage of business model engineering tools (BMET) developed to monitor the performance of an existing BM [26, 33, 41]. A BMET is an assistance system that aims to help firms compose their new BM, as well as to monitor and carry out modifications on an existing BM, based on the definition of industry-specific KPIs [41]. A BMET helps managers to engineer their BM in order to discover strengths, weaknesses, opportunities and threats, and to predict sales and profit levels in different market scenarios.
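To give a rough sense of what such BMET-style monitoring amounts to, the following minimal sketch associates each of the four BM dimensions used in this review (VP, VN, VA, VF) with a KPI and checks readings against targets; all KPIs, targets, and readings are invented for the example, and a real BMET would feed on live operational data and richer models:

```python
# Sketch of BMET-style monitoring: each BM component is associated with
# KPIs whose current readings are checked against industry-specific
# targets. All KPIs, targets, and readings below are hypothetical.
from dataclasses import dataclass

@dataclass
class KPI:
    component: str            # BM component the indicator belongs to
    name: str
    target: float             # industry-specific threshold
    higher_is_better: bool = True

kpis = [
    KPI("VP", "customer satisfaction (1-10)", 8.0),
    KPI("VN", "active partner integrations", 5.0),
    KPI("VA", "order-to-delivery time (days)", 3.0, higher_is_better=False),
    KPI("VF", "monthly recurring revenue (kEUR)", 250.0),
]

readings = {
    "customer satisfaction (1-10)": 8.4,
    "active partner integrations": 3.0,
    "order-to-delivery time (days)": 2.5,
    "monthly recurring revenue (kEUR)": 240.0,
}

def monitor(kpis, readings):
    """Flag each KPI as OK or as a weakness of the current BM."""
    for kpi in kpis:
        value = readings[kpi.name]
        ok = value >= kpi.target if kpi.higher_is_better else value <= kpi.target
        status = "OK" if ok else "WEAKNESS"
        print(f"[{kpi.component}] {kpi.name}: {value} (target {kpi.target}) -> {status}")

monitor(kpis, readings)
```

Real-time adaptation then amounts to re-running such checks continuously and triggering changes to the affected BM components when weaknesses persist.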

5 Contributions and Limitations

As a result of the literature review, we are able to overcome several previous BM research gaps [2]. The discussion of the BM literature review focuses on the different BM research streams and indicates their current state of the art. We contribute by classifying the selected literature according to two different criteria. From the first classification, we state that BM research on components is well addressed and mostly convergent on four basic BM dimensions, namely VP, VN, VA and VF. The second classification allows us to analyze the different BM types and BM instances that can be found in the IS domain. We also contribute by highlighting papers that discuss the different BM components and their interdependencies, as well as those articles that highlight BM design and KPI evaluation methods, and BM management through BMET. We also outline the importance of the BM concept in the IS domain, contributing theoretically to the understanding of the connections between the IS field and BM research.
Nevertheless, many research gaps are still present. We notice the absence of a defined and standardized level of abstraction for BM design; limited insight into the BM users for an appropriate BM design, management and evaluation; the absence of a unified and standardized BMR; the presence of software-based BM management tools (BMET) only in the software industry; and insufficient knowledge on BM innovation in the IS field. In other words, possible research questions could be: "What is the correct level of abstraction to be used in a BM description?", "Who are the BM users?", "What is the dominant BMR to be used in the IS field?", "How can these tools be exported to other industries?", and "What is the definition of BM innovation? How much of the BM monitoring and real-time adaptation now possible thanks to BMET is to be classified as BM innovation?". These questions need to be tackled, and they should also serve as guidelines for future research, in order to gain well-founded knowledge on the BM concept and to better structure this field of research.

6 Conclusions

This paper clarifies the BM concept as a follow-up to the [1] literature review. The authors, following a structured methodology, reviewed the IS-related literature from 2009 to 2014 and analyzed 49 papers in depth. The authors classified the current literature according to two frameworks that highlight different aspects of the BM concept, such as BM components, characteristics, functions, and typologies. The result of the analysis shows the current state of the art of BM research in the IS field. The paper presents the research gaps that have been closed and those that still remain in the field.

References

1. Al-Debei, M.M., Avison, D.: Developing a unified framework of the business model
concept. Eur. J. Inf. Syst. 19, 359–376 (2010)
2. Burkhart, T., Krumeich, J., Werth, D., Loos, P.: Analyzing the business model concept—a
comprehensive classification of literature. In: Proceedings of ICIS 2011 (2011)
3. Krumeich, J., Burkhart, T., Werth, D., Loos, P.: Towards a component-based description of
business models: a state-of-the-art analysis. In: Proceedings of AMCIS 2012 (2012)
4. Osterwalder, A., Pigneur, Y., Tucci, C.L.: Clarifying business models: origins, present, and
future of the concept. Commun. Assoc. Inf. Syst. 16, 1–25 (2005)
5. Demirkan, H., Cheng, H.K., Bandyopadhyay, S.: Coordination Strategies in an SaaS Supply
Chain. J. Manag. Inf. Syst. 26, 119–143 (2010)
6. Giessmann, A., Fritz, A., Caton, S., Legner, C.: A method for simulating cloud business
models: a case study on platform as a service. In: Proceedings of ECIS 2013 Completed
Research (2013)
7. Labes, S., Erek, K., Zarnekow, R.: Common patterns of cloud business models. In:
Proceedings of AMCIS 2013 (2013)
8. Morgan, L., Conboy, K.: Value Creation in the Cloud: Understanding Business Model Factors
Affecting Value of Cloud Computing. In: Proceedings of AMCIS 2013 (2013)
9. Rensmann, B.: Two-sided cybermediary platforms: the case of hotel.de. In: Proceedings of
AMCIS 2012 (2012)
10. Susarla, A., Barua, A., Whinston, A.B.: A transaction cost perspective of the software as a
service business model. J. Manag. Inf. Syst. 26, 205–240 (2009)
11. Malsbender, A., Beverungen, D., Voigt, M., Becker, J.: Capitalizing on social media
analysis—insights from an online review on business models. In: Proceedings of AMCIS 2013
(2013)
12. Oechslein, O., Hess, T.: Paying for news: opportunities for a new business model through
personalized news aggregators (PNAs). In: Proceedings of AMCIS 2013 (2013)
13. Pateli, A.G., Giaglis, G.M.: A research framework for analysing eBusiness models. Eur. J. Inf.
Syst. 13, 302–314 (2004)

14. Shafer, S.M., Smith, H.J., Linder, J.C.: The power of business models. Bus. Horiz. 48, 199–
207 (2005)
15. Webster, J., Watson, R.T.: Analyzing the past to prepare for the future: writing a literature
review. MIS Q. 26(2), xiii–xxiii (2002)
16. Merali, Y., Papadopoulos, T., Nadkarni, T.: Information systems strategy: past, present,
future? J. Strateg. Inf. Syst. 21, 125–153 (2012)
17. Lowry, P., Moody, G., Gaskin, J., Galletta, D., Humpherys, S., Barlow, J., Wilson, D.:
Evaluating journal quality and the association for information systems senior scholars’ journal
basket via bibliometric measures: do expert journal assessments add value? Manag. Inf. Syst.
Q. 37, 993–1012 (2013)
18. Webster, J., Watson, R.T.: Analyzing the past to prepare for the future: writing a literature
review. MIS Q. 26(2), xiii–xxiii (2002)
19. Chan, H.C., Kim, H.-W., Tan, C.W.: Information systems citation patterns from international
conference on information systems articles. J. Am. Soc. Inf. Sci. Technol. 57, 1263–1274
(2006)
20. Walstrom, K.A., Hardgrave, B.C.: Forums for information systems scholars: III. Inf. Manage.
39, 117–124 (2001)
21. Al-Debei, M.M., El-Haddadeh, R., Avison, D.: Defining the business model in the new
world of digital business. In: Proceedings of AMCIS 2008 (2008)
22. Leonardi, P.M.: When flexible routines meet flexible technologies: affordance, constraint, and
the imbrication of human and material agencies. MIS Q. 35, 147–168 (2011)
23. Lucas Jr, H.C., Goh, J.M.: Disruptive technology: how Kodak missed the digital photography
revolution. J. Strateg. Inf. Syst. 18, 46–55 (2009)
24. Clemons, E.K.: Business models for monetizing internet applications and web sites:
experience, theory, and predictions. J. Manag. Inf. Syst. 26, 15–41 (2009)
25. Brockmann, C., Gronau, N.: Business models of ERP system providers. In: Proceedings of
AMCIS 2009 (2009)
26. Kijl, B., Boersma, D.: Developing a business model engineering & experimentation tool—the
quest for scalable lollapalooza confluence patterns. In: Proceedings of AMCIS 2010 (2010)
27. Kijl, B., Nieuwenhuis, B.: Deploying a Telerehabilitation service innovation: an early stage
business model engineering approach. In: Proceedings of 47th Hawaii International
Conference on System Sciences (2014)
28. Feller, J., Finnegan, P., Nilsson, O.: Open innovation and public administration:
transformational typologies and business model impacts. Eur. J. Inf. Syst. 20, 358–374 (2011)
29. Tay, K.B., Chelliah, J.: Disintermediation of traditional chemical intermediary roles in the
Electronic Business-to-Business (e-B2B) exchange world. J. Strateg. Inf. Syst. 20, 217–231
(2011)
30. Zolnowski, A., Böhmann, T.: Business modeling for services: Current state and research
perspectives. In: Proceedings of AMCIS 2011 Submissions (2011)
31. Raivio, Y., Luukkainen, S., Seppala, S.: Towards open telco—business models of api
management providers. In: Proceedings of 47th Hawaii International Conference on System
Sciences (2014)
32. Lin, M., Ke, X., Whinston, A.B.: Vertical differentiation and a comparison of online
advertising models. J. Manag. Inf. Syst. 29, 195–236 (2012)
33. Di Valentin, C., Burkhart, T., Vanderhaeghen, D., Werth, D., Loos, P.: Towards a framework
for transforming business models into business processes. In: Proceedings of AMCIS 2012
(2012)
34. Moreno, C., Tizon, N., Preda, M.: Mobile cloud convergence in GaaS: a business model
proposition. In: Proceedings of 45th Hawaii International Conference on System Sciences
(2012)
35. Kundisch, D., John, T.: Business model representation incorporating real options: an extension
of e3-Value. In: Proceedings of 45th Hawaii International Conference on System Sciences
(2012)

36. Buder, J., Felden, C.: Evaluating Business Models: Evidence on user understanding and
impact to BPM correspondence. In: Proceedings of 45th Hawaii International Conference on
System Sciences (2012)
37. Schief, M., Buxmann, P.: Business models in the software industry. In: Proceedings of 45th
Hawaii International Conference on System Sciences (2012)
38. Keen, P., Williams, R.: Value architectures for digital business: beyond the business model.
MIS Q. 37, 643–647 (2013)
39. Sitoh, M., Pan, S., Zheng, X., Chen, H.: Information system strategy for opportunity discovery
and exploitation: insights from business model transformation. In: Proceedings of ICIS (2013)
40. Giessmann, A., Legner, C.: Designing business models for platform as a service: towards a
design theory. In: Proceedings of ICIS (2013)
41. Di Valentin, C., Emrich, A., Werth, D., Loos, P.: Architecture and Implementation of a
decision support system for software industry business models. In: Proceedings of AMCIS
2013 (2013)
42. Krumeich, J., Werth, D., Loos, P.: Interdependencies between business model components—a
literature analysis. In: Proceedings of AMCIS (2013)
43. Bonakdar, A., Weiblen, T., Di Valentin, C., Zeissner, T., Pussep, A., Schief, M.:
Transformative influence of business processes on the business model: classifying the state
of the practice in the software industry. In: Proceedings of 46th Hawaii International
Conference on System Sciences (2013)
44. Zolnowski, A., Bohmann, T.: Customer integration in service business models. In:
Proceedings of 46th Hawaii International Conference on System Sciences (2013)
45. Rai, A., Tang, X.: Information technology-enabled business models: a conceptual framework
and a coevolution perspective for future research. Inf. Syst. Res. 25(1), 1–14 (2014)
46. Ryschka, S., Tonn, J., Ha, K.-H., Bick, M.: Investigating location-based services from a
business model perspective. In: Proceedings of 47th Hawaii International Conference on
System Sciences (2014)
47. Fritscher, B., Pigneur, Y.: Computer aided business model design: analysis of key features
adopted by users. In: Proceedings of 47th Hawaii International Conference on System
Sciences (2014)
48. Kuebel, H., Limbach, F., Zarnekow, R.: Business models of developer platforms in the
telecommunications industry—an explorative case study analysis. In: Proceedings of 47th
Hawaii International Conference on System Sciences (2014)
49. Zolnowski, A., Weiss, C., Bohmann, T.: Representing service business models with the
service business model canvas—the case of a mobile payment service in the retail industry. In:
Proceedings of 47th Hawaii International Conference on System Sciences (2014)
50. Ghezzi, A., Dramitinos, M., Agiatzidou, E., Johanses, F.T., Losethagen, H., Rangone, A.,
Balocco, R.: Internet interconnection techno-economics: a proposal for assured quality
services and business models. In: Proceedings of 47th Hawaii International Conference on
System Sciences (2014)
51. Lindman, J., Kinnari, T., Rossi, M.: Industrial open data: case studies of early open data
entrepreneurs. In: Proceedings of 47th Hawaii International Conference on System Sciences
(2014)
52. Osterwalder, A., Pigneur, Y.: Business Model Generation: A Handbook For Visionaries,
Game Changers, and Challengers. Wiley, Hoboken, NJ (2010)
53. Bouwman, H., Meng Zhengjia, van der Duin, P., Limonard, S.: A business model for IPTV
service: a dynamic framework. Information 10, 22–38 (2008)
54. Osterwalder, A., Pigneur, Y.: Designing business models and similar strategic objects: the
contribution of IS. J. Assoc. Inf. Syst. 14, 237–244 (2013)
55. Oestreicher-Singer, G., Zalmanson, L.: Content or community? a digital business strategy for
content providers in the social age. MIS Q. 37, 591–616 (2013)
56. Hochstein, A., Schwinn, A., Brenner, W.: Business opportunities with web services in the case
of Ebay. In: Proceedings of 47th Hawaii International Conference on System Sciences (2014)

57. Chen, P.-Y., Chou, Y.-C., Kauffman, R.J.: Community-based recommender systems:
analyzing business models from a systems operator’s perspective. In: Proceedings of 47th
Hawaii International Conference on System Sciences (2014)
58. Baumoel, U., Georgi, S., Ickler, H., Jung, R.: Design of new business models for service
integrators by creating information-driven value webs based on customers’ collective
intelligence. In: Proceedings of 47th Hawaii International Conference on System Sciences
(2014)
59. Loebbecke, C., Tuunainen, V.: Extending successful eBusiness models to the mobile internet:
the case of Sedo’s domain parking. In: Proceedings of AMCIS (2013)
60. Clemons, E.K., Madhani, N.: Regulation of digital businesses with natural monopolies or
third-party payment business models: antitrust lessons from the analysis of google. J. Manag.
Inf. Syst. 27, 43–80 (2010)
61. Niculescu, M.F., Wu, D.J.: Economics of free under perpetual licensing: implications for the
software industry. Inf. Syst. Res. 25(1), 173–199 (2014)
62. Deodhar, S.J., Saxena, K.B.C., Gupta, R.K., Ruohonen, M.: Strategies for software-based
hybrid business models. J. Strateg. Inf. Syst. 21, 274–294 (2012)
63. Giessmann, A., Kyas, P., Tyrvainen, P., Stanoevska, K.: Towards a better understanding of the
dynamics of platform as a service business models. In: Proceedings of 47th Hawaii
International Conference on System Sciences (2014)
64. Gordijn, J., Akkermans, H.: A conceptual value modeling approach for e-Business
development. In: Proceedings of KCAP 2001 Workshop WS2 Knowl. E-Bus, pp. 27–38
(2001)
65. Gordijn, J., Akkermans, H.: Designing and evaluating e-Business models. IEE Intell. Syst. 16
(4), 11–17 (2001)
66. Beath, C., Berente, N., Gallivan, M.J., Lyytinen, K.: Expanding the frontiers of information
systems research: introduction to the special issue. J. Assoc. Inf. Syst. 14, (2013)
67. Lambert, S.: A conceptual framework for business model research. In: Proceedings of BLED
(2008)
IS Governance, Agility and Strategic
Flexibility in Multi-approaches Based
Management Companies

Mohamed Makhlouf and Oihab Allal-Chérif

Abstract An in-depth case study based on 2 years of participant observation in a very large telecommunication operator that has implemented several process approaches shows that, despite the benefits these approaches can bring to the enterprise, significant problems arise, particularly regarding information systems governance, agility and strategic flexibility. The rich literature on process approaches teaches us the benefits of each approach; however, the impact on IS governance, agility and strategic flexibility of implementing multiple process approaches within the same company has never been studied.

Keywords Information systems governance · Strategic flexibility · Agility · Process-based management · Telecommunication operator

1 Introduction

In a context where changes are perpetual and multidimensional, companies must adapt quickly and consider this turbulent environment as an opportunity rather than a threat. In order to grow, or even to survive, they need to increase their competitiveness, improve their results, and strengthen their agility and strategic flexibility. Given the constantly changing environment, the ability of a company to change direction quickly and reconfigure its strategy [27] is essential to achieving a sustainable competitive advantage [20]. In other words, companies must have strategic flexibility [21].

M. Makhlouf (✉) · O. Allal-Chérif
KEDGE Business School, 680 Cours de la Libération, 33400 Talence, France
e-mail: mohamed.makhlouf@kedgebs.com
O. Allal-Chérif
e-mail: oihab@kedgebs.com


Process approaches stand as a privileged solution to meet these challenges. Since these approaches have specific objectives, many projects implementing several different process approaches can be observed in some large companies.
No study seems to have examined the consequences of the simultaneous implementation of different process approaches in an enterprise.
The objective of this research is to analyze this "process" phenomenon in detail through a case study of a company that undertook the concurrent implementation of process approaches, and to show the benefits and problems resulting from such an implementation, especially concerning governance, agility and strategic flexibility.
This paper is structured as follows. The first part deals with the research problem and methodology. The second part presents the history of process approaches in the observed company. The third part analyzes the benefits of implementing these approaches and the resulting problems concerning governance, agility and strategic flexibility. The conclusion indicates perspectives for future research.

2 The Research Problem

2.1 Literature Review

The ISO 9000 standards define the process approach as the representation of an organization or project as a system of processes: each output of the organization is the result of a process, and each activity of the organization should be represented as a process.
In the last decades, the process approach has profoundly altered the vision of the organization [26] and "several process approaches have emerged" [5]. Indeed, "Dealing with business processes for the development of organisations and managers is a trend of growing popularity worldwide. Many different concepts, methods and techniques have been elaborated over time" [17]. Moreover, most of the activities of an organization (more than 90 % in some cases) can be described in terms of processes [3].
Table 1 describes some of these approaches:
"Corporate governance" has been the subject of a plethora of literature in economics and management sciences since 1980 [25]. IT governance is the transposition of the principles of corporate governance to the IT level [15]. In COBIT [12], five basic principles characterize IT governance: strategic alignment [19, 35], value delivery [11], performance measurement [16], resource management [6, 7], and risk management [16].
Strategic flexibility reflects the ability of a company to respond continuously to unanticipated changes and to adjust to the unexpected consequences of predictable changes [24, 27].
Table 1 Examples of process approaches

Total quality management (TQM)
• Elaborated by Feigenbaum in 1951 [14]
• Its importance has increased over the last 20 years [38]
• Creates a competitive advantage and improves organizational performance [30]
• Known through [13] (the quality control process) and [22] (quality management)

Business process management (BPM)
• Managing business processes as any other asset [4]
• Centres on system framework, process model and management of process execution [34]
• A system to manage business processes (BPMS): an environment for modeling and execution [33]

Knowledge management (KM)
• Based on knowledge management systems (KMS): IT-based systems developed to support the organization's knowledge management behavior [1]

Project portfolio management (PPM)
• An enterprise-wide process that involves a wide range of participants and extends deeply into the hierarchy (Yuming and Quan 2007)

Resource-based view (RBV) and resource-based management (RBM)
• The organization is seen as a set of resources [7]
• Information systems are precious and strategic resources [32]
• Resources are associated with organizational performance and confer competitive benefit [6]
• RBV can be used to evaluate the relationship between information systems and business performance

Management by activities and costs (ABC, activity-based costing)
• Introduced in 1975 by Robert Kaplan: creation of a hierarchy of activities and attribution of costs according to the activities involved [10]

A review of the literature defines agility as the ability to adapt reactively to change [2, 9, 29]. Five principal categories characterize agility and strategic flexibility, and can be summarized as the ability to adapt in a reactive and adequate manner to change: responsiveness and reactivity [18, 31], competency and employees' adaptability [8, 23], adaptability and re-configurability [28], quickness and speed [2], and operational agility and a process-centric view [2, 29].
Each of the different process approaches has specific objectives, and these approaches can be implemented simultaneously in the same company. The literature teaches us the benefits of each approach, but it has not asked what happens if these approaches are implemented simultaneously in the same company, and what the impacts on governance, agility and strategic flexibility are.

2.2 Research Question

This research studies the consequences of the simultaneous implementation of


several process approaches in the company and the impacts on governance, agility
and strategic flexibility.

2.3 Research Methodology

In this research we conducted an in-depth case study, privileging participant observation, in which the research is conducted within the company itself and the status of the researcher is not disclosed.
Multiple sources of evidence were used, with a triangulation of these sources [37]. The case study is based on observation, on interviews with the different actors of the company, and on exploratory, longitudinal and situational research in the company. In addition to characterizing the existing situation, this allowed us to characterize the company and the context in which it operates, whether from an organizational, business or process perspective, including the games of interest and power, and to map in detail what exists.
A detailed "analytical analysis" [36] of all available documents and information was also performed. Indeed, this study analyzed specifications, reports, trade figures, interest rates and indicators (quality, performance, etc.); the architecture, organization and processes in place, and the approaches implemented in the different Directions; and the context of the ideas and methodologies, with the identification of formal and potentially re-usable documents. This made it possible to achieve a "transversal analysis" [37].

2.4 Field of the Study

The chosen field of study is a large European company particularly subject to flexibility requirements. It is one of the largest telecommunication operators in Europe. For confidentiality reasons, we will call it TELKOM in this document.
It is a global and integrated operator: it owns its infrastructure, operates in fixed, mobile, Internet and television services, and will soon provide "Quadruple Play". It is present in all segments of the telecommunications market, namely the consumer market, the enterprise market and the operator-to-operators market. Its turnover amounts to tens of billions of Euros, and it has a customer base of tens of millions of subscribers. It employs tens of thousands of people (internal and external); a few thousand work in the Information Systems Direction, which has an operating budget of several hundred million Euros and runs several hundred projects every year. This operator invests several million Euros annually.

Participant observation within this operator lasted 2 years, during which the researcher had direct contact with employees from all its Directions as well as with its key managers, conducted about a hundred interviews (ranging from 1 to 3 h) with the different actors of the company, and ran twenty workshops with the actors of the different process approaches. He also carried out an in-depth documentary exploration of several tens of gigabytes of documentation, spread over thousands of documents put at his disposal and covering the different aspects of what goes on in this company.

3 Process Approaches in TELKOM Since 2005

3.1 History

After the construction phase, this company went through an extremely rapid growth phase, which did not allow it to evaluate its practices. The priorities were to keep up with growth by hiring massively and investing in infrastructure to increase network capacity, in order to serve an exponentially growing number of clients.
The information system is a tangle of applications added on top of one another to meet constantly renewed needs: each new requirement gives rise to new processes or new applications. Added to this are the instability and volatility of the organization, with endless reorganizations; even nowadays, there is at least one reorganization each year.

3.2 The Beginning of the Process Projects

After this rapid growth, the company became a very large group, and it was no longer possible to manage such a large structure without processes and strict formalism. More and more entities were thus created with missions to consolidate project portfolios, manage studies and pilot major information systems projects, improve the quality of services, streamline investments, manage resources, etc. These missions were conducted, very often redundantly, in several Directions: they can be found in the Directions of the network, IS, services, customer service, marketing and commerce, and finance (DAF).
Within a few years, several Directions of TELKOM launched several process approach implementation projects. Table 2 describes these projects:

Table 2 Launch of process approaches in TELKOM Directions


Direction Type of approach
Engineering direction in the IS direction Project portfolio management:
PPM
Mobile exploitation direction in the IS direction Resources based management:
RBM
Customer service direction Knowledge management on the
mobile
Sales management direction in the marketing and Activities based management:
commerce direction ABM
HR direction RBM
Enterprises direction Knowledge management: KM
ABM
Services exploitation direction in the IS direction RBM
Customer service directions KM in the fixed
Optical fibers direction PPM
ABM
Financial direction DAF Costs based management: CBM
Fixed exploitation direction in the IS direction RBM
Operations direction in the IS direction CMMI
Quality direction in the marketing and commerce direction Total quality management: TQM
Enterprises direction PPM

3.3 Analysis of the Implementation of These Approaches

The implementation of these process approaches in this company has brought many benefits, and several of the expected objectives have been achieved. This development has, however, also generated many problems. The following paragraphs detail some of these benefits and problems.

3.3.1 Benefits

Among the benefits we can observe or deduce from our thorough analysis and observations is that the implementation of the CMMI and Project Portfolio Management approaches has facilitated the piloting of activities. Preparation, planning and execution are more easily achievable; consolidations are more reliable and deliverables are of higher quality. The setting up of a project process has enabled the definition of the project portfolio management life cycle: the steps of preparation, planning and execution of the project portfolio are integrated into project portfolio management. It has also enabled the evaluation and prioritization of major projects: in fact, evaluations (ROI, risk management, strategic contribution) and the prioritization of major projects are made in the governance instances of the project portfolio management process.
We can also mention the establishment of a tool for the cartography and design of information systems, and the initiatives conducted to represent business processes in the different Directions, which educated employees about the importance of mapping all processes and application systems.
The implementation of the Cost-Based Management approach has made it possible to calculate and analyze the evolution of the cost prices of final services. The implementation of this approach has also made it possible to measure the impact of technical choices on economic unit costs, and to analyze the sensitivity to changes in usage and the annual evolutions.
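To illustrate the kind of calculation that underlies such cost-price analyses, the following minimal activity-based costing sketch attributes activity costs to one service through cost drivers; all activities, drivers, and figures are invented for the example, and TELKOM's actual cost model is not disclosed in the study:

```python
# Minimal activity-based costing sketch: the costs of activities are
# attributed to a service through cost drivers, yielding a unit cost price.
# All activities, drivers, and figures below are hypothetical.

activities = {
    # activity: (annual cost pool in EUR, annual driver volume)
    "network maintenance": (1_200_000, 400_000),   # driver: subscriber lines
    "customer support":    (600_000, 150_000),     # driver: support calls
    "billing":             (300_000, 4_800_000),   # driver: invoice lines
}

# Driver units consumed by one subscriber-year of a hypothetical
# "fixed broadband" service.
consumption = {"network maintenance": 1.0, "customer support": 0.3, "billing": 12.0}

def unit_cost(activities, consumption):
    """Unit cost price = sum of (activity cost rate x driver units used)."""
    total = 0.0
    for activity, units in consumption.items():
        cost_pool, driver_volume = activities[activity]
        total += (cost_pool / driver_volume) * units
    return total

print(f"Cost price per subscriber-year: {unit_cost(activities, consumption):.2f} EUR")
```

Sensitivity to changes in usage can then be explored simply by varying the driver volumes or the consumption profile and recomputing the cost price.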
The implementation of the Resource-Based Management (RBM) approach in the Human Resources Direction has provided effective HR processes for the administrative management of employees and contractors, and has made it easier to manage the very frequent reorganizations. This has helped reduce the impact of organizational instability on operational projects and employees' performance. The development and establishment of RBM in the Directions of Exploitation and Production, the Network Directions, and others has established a detailed vision of applications and infrastructures, which has allowed each of these Directions to have a detailed view of its material resources, to manage them much more efficiently, to reduce error rates, and to improve the quality of service.
Furthermore, the implementation of the Total Quality Management approach in the Quality Direction has played an important role in reducing customer churn, which dropped by about 30 % in 2 years.

3.3.2 Problems

The simultaneous implementation of these process approaches in this company is the source of many problems. Indeed, the implementation of these approaches was not driven by a comprehensive strategic alignment goal: the best practices associated with these approaches were used without adaptation to the context. The implementation aimed at meeting specific needs rather than being part of a common vision, and without checking whether the approaches were compatible with each other. This has spawned a number of important problems regarding governance, agility and strategic flexibility.

3.3.3 Problems Concerning Business Process Management

Among the consequences of the simultaneous and independent implementation of several process approaches in TELKOM is the lack of a centralized view of business processes.

Table 3 Problems concerning Governance in TELKOM


Governance Problems
Strategic alignment • Interpretation of the company’s strategy varies from one direction to
another
• No measure of correlation between strategy and projects
• Impossibility of budget reallocation in the IS directions to meet
strategic issues evolutions or projects costing changes
• Each sub direction of the IS direction has its own internal strategy
concerning the IS
• The IS is constructed gradually as business strategy evolves and not as
a part of this strategy
• The organization is divided functionally privileging specialization
rather than coordination
• Communication is degraded between business teams and IS teams
• Decisions on IS are not centralized
• Lack of pro-activity and anticipation in the construction of the IS
• The IS has become more complex due to several abrupt changes in
business strategy over the past years
• Many difficulties to bring coherence in the solutions
• Failure of several IS total remake projects
Value delivery • Difficulties to act quickly in response to market evolutions
• Lack of anticipation regarding the management capability, security,
economic efficiency in terms of infrastructure to evolve rapidly at lower
and competitive cost
• Upstream, implementation costs are insufficiently challenged
• The granularity of monitoring and of budget lines is not fine enough
and large projects are costed approximately
• No link between the provisional budget and the engaged expenditures
Performance • No performance optimization of projects portfolios
measurement • Lack of capitalization in the life cycle of projects portfolios
management
• Lack of assessment on project portfolios
• The budget granularity does not descend to the level of budgets by
direction and by applicative domain
Resources • Lack of human resources operational availability management
management • Communication of roles is not homogeneous
• Significant increase in the charge of piloting the subcontractors
• Increase in cases of nervous breakdowns
• Very important loss of internal knowledge of outsourced components
• Each technical direction has its own material and applicative resources
referential
• Each customer service direction has its own knowledge management
and lack of knowledge management elsewhere
Risk Management • There is not any global and unique view of the projects portfolios in
which there are objectives achieved, risks managed, created values,
deliverables achieved and problems encountered
• The “support” to change management for the projects portfolios
management is not available
• There is not any formalized process for updating cost based
management data
• No visibility or supervision by the IS direction on hundreds of
applicative systems realized outside the IS direction and not following
the projects portfolios management process
• Important and continuous loss of many business knowledge essential
to the continuity of the activity of the company
• Important and growing psychosocial risks

In 2005, the architecture team in the IS Direction attempted a cartography of the processes using a computer-aided software engineering (CASE) workbench from one of the market leaders. Much of the processes were modeled during this major project, which lasted several months. But this mapping was not continued: the vision that was mapped corresponds only to the IS vision and does not take into account the visions of the different business units. In 2006, the IS Direction progressively gave up updating the cartography, whose process was very cumbersome. The CASE workbench is now rarely used, and exclusively by the architects of the IS Direction. Each Direction manages its processes somehow in its own way, and the definition and description of the various business processes of the company change from one Direction to another. It is very common to have, for the same business process, different descriptions (inputs/outputs, activities, applications, actors, etc.) in each Direction. The representation of the processes also varies from one Direction to another, and sometimes, even within the same Direction, processes are not represented in the same way; it depends either on who made the representation or on whom it was made for. There are processes modeled as flowcharts in Visio, as well as schemas drawn in PowerPoint, while much of the processes remain in the form of knowledge tacitly held by the various business solution management teams. This difference of views has been the source of many ambiguities between business teams and IS teams during projects' impact analyses.

3.3.4 Problems Concerning Governance

We detail below the most important problems concerning governance caused by the simultaneous introduction of these approaches in TELKOM, grouping them according to the five basic principles of IT governance: strategic alignment, value delivery, performance measurement, resource management, and risk management (Table 3):

Table 4 Problems concerning Agility and Strategic Flexibility in TELKOM


Agility and strategic Problems
flexibility
Responsiveness and • Difficulties to improve modularity and convergence of
reactivity information systems
• Lack of responsiveness and autonomy by market
• Dependencies increase as the IS evolves without
conservation of pertinent mutualizations
• Many applications were implemented by the business teams
outside the projects portfolios management and without
engineering of IS teams
Competency and • The information is not always coherent between the different
adaptability of employees teams
• Lack of communication
• Lack of knowledge of architecture, data, applicative systems,
flows and processes
• No feedback on projects
• Limited sharing of information vertically and horizontally
• Relations between the technical and IS directions and the business directions are not healthy
• Learning is difficult and limited sharing of knowledge
• Important overload on some teams
• Loss of knowledge in case of outsourcing or when some key
employees leave the company
Adaptability and • Multiplication of IS remake projects and failure of many of
re-configurability these projects
• Budget overrun of IS remake projects
• Business needs changes too frequently
• Problems of coherence and duplication
• Presence of a large number of applications implemented
without engineering of IS teams
Quickness and speed • Difficulties to bring consistency in the solutions and urgent
projects
• Difficulties to act quickly in response to changing market
• Difficulties in transversal functioning
• Absence in the project process of short cycles for
development projects
Operational agility and • Difficulties in transversal functioning
process centric-view • Partial internal control of functional and technical
architecture of information systems
• Knowledge of processes is not sufficient enough for the
studies of impacts of processes evolution on the IS, and the
end to end diagnosis of malfunctioning
• Lack of leeway for unexpected workload

3.3.5 Problems Concerning Agility and Strategic Flexibility

We detail below the most important problems concerning agility and strategic flexibility caused by the simultaneous introduction of these approaches in TELKOM, grouping them into the five main categories that characterize agility and strategic flexibility, namely: responsiveness and reactivity, competency and adaptability of employees, adaptability and re-configurability, quickness and speed, and, finally, operational agility and a process-centric view. These can be summarized as the ability to adapt in a reactive and adequate manner to change (Table 4):

4 Conclusion

The implementation of these process approaches within TELKOM has allowed the achievement of several objectives and brought improvements on several fronts. But the implementation of these approaches was done concurrently and in the absence of a common and global vision. In this context, these approaches very rapidly showed their limits, and their superposition has been the cause of several malfunctions and obstacles to the optimal operation of the company and its information systems. The isolated and concurrent application of these approaches has increased the complexity of the IS, increased costs, and weakened performance parameters.
Stepping back, one can observe that these implementations all try to follow emerging organizational and technological currents while hardly considering the goal of strategically aligning information systems with business strategy, or even the definition of this strategy. This is due to the lack of effort to conceptualize and consolidate the different interpretations of the company's strategy in the field: each Direction strives for its own interpretation and neglects the consolidation of operational practices. And since there is no feedback on the implementation of these approaches against which to confront the company's strategy, this inevitably leads to problems of strategic alignment, governance, agility and strategic flexibility.
To consolidate these results, the research presented here will be completed by other case studies. Furthermore, the analysis showed the potential value of a global process approach to information governance, which will be the next step of this research.

References

1. Alavi, M., Leidner, D.E.: Review: knowledge management and knowledge management
systems: conceptual foundations and research issues. MIS. Quart. 25(1), 107–136 (2001)
2. Almahamidi, S., Awwad, A., McAdams, A.C.: Effects of organizational agility and knowledge
sharing on competitive advantage: an empirical study in Jordan. Int. J. Manag. 27(3), 387–404
(2010)

3. Amaravadi, C.S., Lee, I.: The dimensions of process knowledge. Knowl. Process Manag. 12
(1), 65–76 (2005)
4. Arora, T., Nirpase, A.: Next generation business process management: a paradigm shift. IEEE
Congress on Services Part I, 6–11 July 2008, p. 81, (2010)
5. Baker, G., Maddux, H.: Enhancing organizational performance facilitating the critical
transition to a process view of management. Sam Adv. Manag. J. 7(4), 40–60 autumn (2005)
6. Barney, J.: Firm resources and sustained competitive advantage. J. Manag. 17(1), 99–120
(1991)
7. Bharadwaj, A.: A resource-based perspective on information technology capability and firm
performance: an empirical investigation. MIS. Quart. 24(1), 169–196 (2000)
8. Bhattacharya, M., Gibson, D.E.: The effects of flexibility in employee skills, employee
behaviors, and human resource practices on firm performance. Int. J. Manag. 31(4), 622–640
(2005)
9. Burgess, T.F.: Making the leap to agility: defining and achieving agile manufacturing through
business process redesign and business network redesign. Int. J. Oper. Prod. Manag. 14(11),
23 (1994)
10. Cooper, R., Kaplan, R.S.: Profit Priorities from Activity-Based Costing. Harv. Bus. Rev.
(2000)
11. Corbel, P., Jean-Philippe Denis, J-Ph., Taha, R.: Systèmes d’information, innovation et
création de valeur: premiers enseignements du programme MINE France. Cigref, Cahier No 2
(2004)
12. Delavaux, J-P.: COBIT: La Gouvernance des TI et les processus – ANDSI. Association
Nationale des Directeurs de Systèmes d’Information, France (2007)
13. Deming, W.E.: Out of the Crisis. MIT Press, Cambridge (1986)
14. Feigenbaum, V.: Total Quality Control, 1st edn. McGraw-Hill, London (1951)
15. Florescu, V., Anica-Popa, L., Anica-Popa, I.: Governance of Information System and Audit.
BCAA (2007)
16. Florescu, V., Dumitru, V.: Problematique De La Gouvernance Du Systeme D’information.
Ann. Univ. Oradea Econ. Sci. Ser. 17(4), 1381–1386 (2008)
17. Gerndorf, K.: A process view of organisations: procedural analysis. TUTWPE No 143 (2006)
18. Goldman, S., Nagel, R., Preiss, K.: Agile competitors and virtual organizations. Van Nostrand
Reinhold Publishing, New York (1995)
19. Henderson, J.C., Venkatraman, N.: Strategic alignment: a model for organizational
transformation via information technology. Oxford University Press, New York (1993)
20. Hitt, M., Keats, B., DeMarie, S.: Navigating in the new competitive landscape: Building
strategic flexibility and competitive advantage in the 21st century. Acad. Manag. Exec. 12(4),
22–43 (1998)
21. Johnson, J.L., Lee, R.P., Saini, A., Grohmann, B.: Market-focused strategic flexibility:
Conceptual advances and an integrative model. J. Acad. Mark. Sci. 31, 74–89 (2003)
22. Juran, J.M.: La qualité dans les services. AFNOR Gestion, Paris (1987)
23. Kidd, P.T.: Agile Manufacturing: Forging New Frontiers. Addison-Wesley, Wokingham
(1994)
24. Lei, D., Hitt, M.A., Goldhar, J.D.: Advanced manufacturing technology: organizational design
and strategic flexibility. Organ. Stud. 17, 501–523 (1996)
25. Martinet, A.-C.: Gouvernance et management stratégique. Revue Française de Gestion
3(183), 95–110 (2008)
26. Morley, C., Bia, M., Gillette, Y.: Processus métiers et SI. Gouvernance, management et
modélisation, 3rd edn. Management des Systèmes d’Information, Dunod (2011)
27. Nadkarni, S., Herrmann, P.: CEO personality, strategic flexibility, and firm performance: the
case of the Indian business process outsourcing industry. Acad. Manag. J. 53(5), 1050–1073
(2010)
28. Pavlou, P.A., El Sawy, O.: From IT leveraging competence to competitive advantage in
turbulent environments: the case of new product development. Inf. Syst. Res. 17(3), 198–227
(2006)

29. Raschke, R.L.: Process-based view of agility: the value contribution of IT and the effects on
process outcomes. Int. J. Account. Inf. Syst. 11(4), 297–313 (2010)
30. Reed, R., Lemak, D.J., Mero, N.P.: Total quality management and sustainable competitive
advantage. J. Qual. Manag. 5, 5–26 (2000)
31. Sharifi, H., Zhang, Z.: Agile manufacturing in practice: application of a methodology. Int.
J. Oper. Prod. Manag. 21(5-6), 772–794 (2001)
32. Silva, L., Hirschheim, R.: Fighting against windmills: strategic information systems and
organizational deep structures. MIS Q. 31(2) (2007)
33. Smith, H., Neal, D., Ferrara L., Hayden, F.: The Emergence of Business Process Management,
CSC’S Research Services (2002)
34. Tao, Y., Zhu, G., Xu, Z., Liu B.: A research on bpm system based on process knowledge. In:
IEEE Conference on Cybernetics and Intelligent Systems, pp. 69–75, 21–24 Sept. 2008
35. Wilkin, C.L., Chenhall, R.H.: A review of IT governance: a taxonomy to inform accounting
information systems. J. Inf. Syst. 24(2), 107–146 Fall (2010)
36. Yin, R.K.: Applications of Case Study Research. Sage, Thousand Oaks (2003)
37. Yin, R.K.: Case Study Research Design and Methods. In: Applied social research methods
series. Sage (2009)
38. Zakuan, N.M., Yusof, S.M., Laosirihongthong, T.: Reflective review of relationship between
total quality management and organizational performance. In: 4th IEEE International
Conference on Management of Innovation and Technology, ICMIT 2008, p. 444, 21–24
Sept. 2008
Part II
ICT and Knowledge Management
Information, Technology, and Trust:
A Cognitive Approach to Digital Natives
and Digital Immigrants Studies

Francesca Marzo and Alessio Maria Braccini

Abstract This paper presents the design of an experiment to investigate digital natives' and digital immigrants' trust and control behavior in teams. The paper discusses the theoretical background on both digital natives and trust. The design of the experiment is motivated by the literature's call for empirical investigations to advance digital natives studies. The experiment designed is a formalization of a one-shot modified trust game in which both trust and control dynamics between two players can be observed. The idea is to use data gathered through several executions of the experiment to investigate potential differences in trust and control dynamics in homogeneous groups (composed of digital native trustors and trustees, or of digital immigrant trustors and trustees) and heterogeneous groups (composed of a digital native trustor and a digital immigrant trustee, or the opposite).

1 Introduction

Some studies suggest that the intense use of information and communication technologies (ICTs) in the early years of a person's life could contribute to the development of peculiar behavioral habits and cognitive structures [7, 31, 35, 36, 44]. This circumstance is usually linked to the existence of a group of individuals who have had the chance to interact heavily with ICTs since the early stages of their lives, having been born into a world permeated by these technologies. The literature refers to those who had this possibility by

F. Marzo (✉)
LUISS Guido Carli University, Rome, Italy
e-mail: fmarzo@luiss.it
A.M. Braccini
Dipartimento di Economia e Impresa (DEIm), Università degli Studi della Tuscia,
Viterbo, Italy
e-mail: abraccini@unitus.it


different names [49]. Tapscott [44] first described a net generation as the cohort of individuals who grew up in a digitalized world. Prensky uses the term digital natives [35, 36] to indicate those who were born in such a world, calling digital immigrants those who encountered ICTs later, after birth. McMahon and Pospisil [31] instead describe Howe and Strauss's millennials [24] as individuals used to interacting with technologies.
Besides the differences in the terms used by these authors, there appears to be a common ground among these profiles [6–8]: the frequent and intense interaction these individuals have had with ICTs [49]. As reported by Vodanovich et al. [46], over their lives the natives have on average spent about 20,000 h online, using different kinds of transactional systems and decision support systems to collect information, establish social relationships, have fun, or cooperate with others.
The topic of digital natives has attracted large interest in the literature, but recently a wave of more critical studies has been challenging some of the assumptions on which the concept of the digital native rests [7, 49]. Following several conceptual works, empirical evidence has contributed to identifying a great internal variance in the characteristics of this generation [2], and further empirical investigations are necessary [23].
Given that technology influences organizational norms, values, and behavior [33, 47], investigating the digital natives phenomenon is important both from the perspective of information systems [46] and from the perspective of organizational behavior [6]. In this paper we propose to study these individuals (whom from this point on we will simply call digital natives) from a cognitive approach, to shed light on the trust and control dynamics that underpin digital natives' cooperation behavior in teams. To this end, in this paper we motivate and design an experiment-based research strategy. The paper is structured as follows. In Sect. 2 we discuss the literature on digital natives, and in Sect. 3 we describe the literature on trust and control. The designed experimental research strategy is described in Sect. 4 and discussed in Sect. 5. Section 6 concludes the paper with some final remarks.
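For orientation, the canonical one-shot trust game, in the spirit of Berg, Dickhaut and McCabe's investment game, can be stated formally as follows; note that this is the standard textbook formulation, given here only as background, and not the authors' modified variant, which is specified in Sect. 4.

```latex
% Canonical one-shot trust game (standard formulation, background only).
% The trustor holds an endowment E and sends an amount x to the trustee;
% x is multiplied by k > 1 before reaching the trustee, who returns y.
\[
  \pi_{\mathrm{trustor}} = E - x + y, \qquad
  \pi_{\mathrm{trustee}} = kx - y, \qquad
  0 \le x \le E, \quad 0 \le y \le kx .
\]
% The amount sent, x, operationalizes trust; the amount returned, y,
% operationalizes reciprocity. Control variants additionally give the
% trustor a costly option to constrain or verify the trustee's choice.
```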

2 ICT Usage Intensity in Digital Natives

The debate on digital natives is centered on the pivotal assumption that the abundant presence of ICT in a person's life, since birth, might have allowed them to develop peculiar behavioral skills, habits, and norms, both in relation to the way they use the technology and to the way they interact with other individuals and cooperate [46, 49]. The literature describes a set of traits qualifying these natives that is not always consistent.
A first aspect to be considered is the problem of age. Consistent with the understanding of a cohort as a group of individuals who share the same chronological traits [13, 38], the literature frequently identifies digital natives only according to their birthdate: whoever is born after a specific year is a digital native

by definition and is supposed to show the purported characteristics. Given the abundant diffusion of ICT technologies by the last decade of the twentieth century, many sources resort to the 1980s and 1990s [26, 28, 35, 49] to identify the birthdate of a digital native. Such an approach has led to inconsistent research results. As reported by Bennett et al. [2], these cohorts show a great internal variation in their characteristics. Some critical studies that questioned the very concept of the existence of a digital generation [10, 28] probably indicate that age is a misleading classificatory trait. This is also due to the fact that several factors (such as census, education level, technological level, etc.) [7] contribute to altering the interaction capabilities with ICT [42], and therefore to mediating the development of the connected skills and behavioral traits.
Since the chronological trait is of no help in discriminating the different characteristics of digital natives, we can conceptualize a cohort as the set of individuals who share common significant life events [13, 38]. Under this perspective, what the literature postulates is a two-fold condition: (i) digital natives were born and grew up in a world where advanced technologies were profusely available, and (ii) digital natives had the chance to interact with these technologies since their birth.
The first aspect is mentioned many times in the literature discussing digital natives. These studies report that digital natives cannot conceive of a world without sophisticated technology [12, 26, 39, 48, 49]. The second aspect is also frequently mentioned [25, 35, 36, 39], though it is sometimes treated as an assumption and not explicitly investigated.
Where there does seem to be ample differentiation among the characteristics of digital natives is in the purported behavioral traits commonly assigned to these individuals. This is important for the aim of our research project, as this paper frames itself within the boundaries of the study of digital natives based on their behavioral traits. For our research project it is therefore not so important whether a generation of individuals who had access to technology early in their lives could have developed deep ICT knowledge, but whether the intensity of the interaction they had with such technology could alter a person's organizational behavior. This hypothesis is supported by the literature, which reports that digital natives are used to entertaining social relationships through ICT tools [22, 28, 32, 35], or prefer peer-to-peer relationships with colleagues rather than hierarchical ones [7]. It is therefore worthwhile to look for potential consequences on digital natives' behavior in teams and in organizations.
A further aspect discussed by the literature is that the frequent use of technology could impact digital generations' motivation and capability to act proactively in organisational settings [45]. As reported by some sources [11, 41], digital natives are accustomed to receiving immediate feedback for their actions. This is a direct consequence of the way ICT works, where the interaction with such tools is usually a set of re-iterated sequences of actions by the user and immediate reactions by the ICT tool itself [34]. Considering such a behavioural trait of digital natives, consequences might be expected when no immediate feedback is available for their activities, especially in all the situations where delayed motivation is instead necessary [6, 21].
Adding to this, the habit of interacting with ICT tools could also have produced in digital natives the need to be in control of the situations they find themselves involved in [41]. When interacting with ICT, the user is usually in control of the software system being used. Such a habit and capability of controlling ICT systems is supposed to have left digital natives with the need to be able to control the outer environment. At the same time, being able to control such complex software systems also induced in digital natives a sense of self-confidence [39] that might go beyond the technological aspects. In some cases this self-confidence becomes a sense of trust [49] that influences both their relation with technology and with people.
Such a mixed and sometimes conflicting set of behavioural traits calls for an empirical investigation of how digital natives actually behave in teams and organisations. In particular, our work intends to gain insights into the balance/conflict between the sense of control and the sense of trust of digital natives, to see whether interaction with technology is significant in explaining a purportedly different behaviour. This conflicting dichotomy can be addressed by an empirical investigation based on a rich model of trust, one that has been addressed in several cognitive studies [17, 20, 37] and takes advantage of a quite complete cognitive analysis [16]. Moreover, such an insight seems a very promising starting point for empirical studies of digital natives, since it is precisely their cognition that is under investigation.

3 Trust and Control for Organizational Studies

Although the concept of trust cannot be used as a factor discriminating between digital natives and digital immigrants, as pointed out in the previous section, digital natives are reported to have developed a need to control the environments with which, and within which, they interact. Furthermore, the same ability to control can induce them to increase their confidence both in how they use different technologies and in how they interact with other people. In both the activity of being in control of something or somebody, and in feeling confident about something, the concept of trust is crucial. In order to better understand these aspects, we claim that it is important to understand digital natives’ predisposition to trust. In other terms, we believe it is important to investigate whether there are differences between the trust attitudes and trust behaviour of digital natives and digital immigrants. In order to formulate a specific research question, we first introduce the theoretical roots of the concept of trust and its relationship with the concept of control.
Trust can be considered a layered notion that has been used in different fields to refer to different phenomena [14]. In organization studies, trust has mainly been addressed by relating it either to organizational performance and functioning [18] or to behavioral aspects [30]. In both cases a crucial role is played by some fundamental analytic presuppositions of trust: risk, uncertainty and ambiguity [19].
In fact, as pointed out in several works in both psychological studies and the behavioral economics field, to trust means to accept some risk and to count on some other agent or process [4]. On these assumptions, trust has been defined as “the
willingness of a party to be vulnerable to the actions of another party based on the
expectation that the other party will perform a particular action important to the
trustor, irrespective of the ability to monitor or control that other party” [30].
Although this definition captures a very crucial point of the decision and action of trust, some important psychological aspects are missing. In order to include them in the concept and, thus, to correctly model organizational trust, we need to integrate (i) some considerations about what the trustor believes about the trustee’s internal attitudes, and (ii) a measure of the trustor’s subjective propensity to accept uncertainty, risk and ambiguity [1, 16]. These aspects, deeply dependent both on context and on subjective and cultural diversity, represent the core of the present work. Their crucial features will become clearer once the cognitive model of trust and control is presented.

3.1 Socio-cognitive Model of Trust

In order to get a comprehensive model of trust, the concept must be understood in its dispositional aspect (by disentangling the set of beliefs and evaluations supporting the expectation about the other’s behavior) and in its behavioral consequences (which strictly depend on the level of trust experienced in the situation). Moreover, both trust as a mental attitude and trust as a decision are intrinsically situated and must inevitably be tied to the context in which the interaction occurs [29].
The core dimension of trust, on which the decision to delegate a task depends, is of course its epistemic component: the quantified belief (more precisely, the expectation) that the trustee will act in an appropriate and successful manner. To cope with this crucial aspect, a cognitive model has been developed, based on the socio-cognitive theory of trust, which portrays the mental state of trust in cognitive terms (beliefs, goals) and has been proposed to account for these different dimensions [16]. By adopting a cognitive model of agency, the issues identified above can also be tackled. This model represents the most explicit (reason-based) and conscious form of trust, in which the cognitive factors affecting trust are also used to make trust decisions.
In terms of disposition, as previously underlined, we need to take into account the belief about the vulnerable position the trustor puts herself in. Indeed, any act of trusting and relying implies some bet and some risk [27]. Let us take, for example, two individuals, Eliza and Nick, and the trust relationship between the former and the latter. Eliza might eventually be disappointed, deceived and betrayed by Nick: Eliza’s beliefs may be wrong. At the same time, Eliza bets something on Nick. First, Eliza renounced possible alternatives (i.e. other partners) and she might have lost her opportunity: she is thus risking on Nick the utility of her goal (and of her whole plan). Second, Eliza incurred some cost in evaluating Nick and in waiting for his actions, wasting her own time and resources. Third, perhaps Eliza incurred some cost to induce Nick to do what she wants or to have him at her disposal: Eliza may have paid for
Nick’s service, and this investment is a real bet on him [16]. Thus, we can say that when Eliza trusts Nick there are two risks: (a) the risk of failure, i.e. the frustration of her goal or of the entire plan, and (b) the risk of wasting her efforts and investments. Therefore, the act of trusting/reliance is a risky activity: it presupposes some uncertainty and it requires some predictability of the trustee’s behavior.
This subjective perception of risk and this degree of trust can be due to a lack of knowledge, incomplete information or a dynamic world, or to favorable and adverse probabilities.
When applied to a cognitive, intentional agent, the disposition belief must be supported by other beliefs: (1) a willingness belief (Eliza believes that Nick has decided and intends to do the action she requires; trust requires modeling the mind of the other) and (2) a persistence belief (Eliza should also believe that Nick is stable enough in his intentions, that he has no serious conflicts about them, and that he is not unpredictable; otherwise she might change her mind).

3.2 Attribution of Trust and the Concept of Control

Trust can imply (either implicitly or explicitly) the subjective probability of the successful performance of a given behavior. It is on the basis of this subjective evaluation of risk that someone decides to rely on someone else. However, the final probability of the realization of the related goals should be decomposed into the probability that the trustee performs the required action, which derives from the probability of internal attribution (such as willingness, persistence, engagement, competence), and the probability of having the appropriate conditions (external attribution, including the absence of interferences) [16].
Environmental and situational trust [15] are aspects of external trust. It is important to stress that when the environment and the specific circumstances are safe and reliable, less trust is necessary for delegation. Conversely, the stronger the trust relationship, the smaller the need for a safe and reliable environment and, hence, for external monitoring and authority. Therefore, we can account for a ‘complementarity’ between internal and external components of trust. However, when trust is not there, there is something that can replace it (i.e. surveillance, contracts, etc.). It is just a matter of different kinds, or better, facets of trust. From this perspective an important role is played by control.
Control can be considered a meta-action aimed both at ascertaining whether another action has been successfully executed, or whether a given state of the world has been realized or maintained (feedback, checking), and at dealing with possible deviations and unforeseen events in order to cope with them positively (intervention).
A perspective of duality between trust and control is very frequent and at least partially valid [15]. Control and normative remedies have been described as weak, impersonal substitutes for trust, or as functionally equivalent mechanisms, since to reach a minimum level of confidence in cooperation partners can use trust and control to complement each other [40]. From the socio-cognitive perspective on trust, control is seen as antagonistic to strict trust (considered as just internal attribution): if there is trust there is no need for control. Instead, when we consider the broad form of trust, which includes both internal and external attribution, we can say that control can contribute to creating and increasing trust, as well as completing and complementing it [16].

4 Design of a Research Project

Building on this literature, our research project aims at investigating whether the trust and control predispositions of digital natives differ significantly from those of digital immigrants. A possible way to study the willingness to trust is to understand how people act when the possibility of controlling others’ actions is represented by the possibility of punishing them [9]. Another process that can easily increase the predisposition to trust is the introduction of some form of insurance, so that the loss deriving from betrayal is significantly reduced [5]. We aim at testing both mechanisms in order to find potential patterns in digital natives’ need for control, on one side, and to discover possible differences in which form of control they prefer to use, on the other. In other terms, we posit the existence of a relationship between two individuals, I and R, and we posit that such a relationship involves trust and control dynamics between them. We therefore aim at answering the following research question:

RQ: Is the behavior of digital natives significantly different from that of digital immigrants?

In order to answer this question, we approach the investigation of digital natives’ and digital immigrants’ “trust and control” behavior through an experimental strategy. To answer the research question we will test the following intermediate research propositions:

P1: How digital natives and digital immigrants act when they are offered the possibility to control the action of the other individual through a form of punishment.
P2: How digital natives and digital immigrants act when they are offered the possibility of controlling the action of the other individual through a form of insurance.
P3: The existence of differences in the preferred forms of control between digital natives and digital immigrants.
The experiments we will run consist of a modified trust game [3]. In the trust game two different players are involved:
• I: the investor;
• R: the recipient.
In the trust game the investor I is endowed with a sum of money, which she can keep or invest with the recipient R. The decision to invest implies the existence of trust between I and R. The amount I decides to invest is tripled and sent to the trustee R, who then decides what fraction to return to the investor. Both players have different strategies to execute, and associated with each combination of strategies there are different payoffs $\pi$. In our experiment the first mover I has the possibility to invest a sum of money $\iota$ by executing one of the following two strategies $s_n \in S = \{i_{s_1}, i_{s_2}\}$, where:
• $i_{s_1}$ → I trusts R (I decides to invest);
• $i_{s_2}$ → I does not trust R (I decides not to invest).
If the strategy $i_{s_1}$ is executed by the first mover I, the second mover R has the possibility to execute one of two subsequent strategies $t_n \in T = \{r_{t_1}, r_{t_2}\}$, where:
• $r_{t_1}$ → R trusts I (R reciprocates);
• $r_{t_2}$ → R does not trust I (R betrays).
$$
i_{s_1} \rightarrow
\begin{cases}
r_{t_1} \rightarrow \begin{cases} \pi_i(i_{s_1}, r_{t_1}) = b \\ \pi_r(i_{s_1}, r_{t_1}) = b \end{cases} \\[6pt]
r_{t_2} \rightarrow \begin{cases} \pi_i(i_{s_1}, r_{t_2}) = l \\ \pi_r(i_{s_1}, r_{t_2}) = h \end{cases}
\end{cases}
\qquad
i_{s_2} \rightarrow \begin{cases} \pi_i(i_{s_2}) = m \\ \pi_r(i_{s_2}) = m \end{cases}
\tag{1}
$$

If R executes $r_{t_1}$ then the invested amount of money triples ($\iota \rightarrow 3\iota$). Following the choices of the two players, the payoffs $\pi$ (h highest, b better, m moderate, and l lowest) are assigned to each player according to the combination of strategies executed (see Eq. 1). In all these scenarios we posit the following conditions:

$$
h > b > m > l \quad \text{having} \quad (h + l) = (b + b) \tag{2}
$$
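To make the payoff structure concrete, the following minimal Python sketch encodes Eqs. (1) and (2). The concrete values chosen for h, b, m and l are illustrative assumptions only, not values from the study; they merely satisfy the ordering and efficiency conditions of Eq. (2).

```python
# Minimal sketch of the trust game payoff structure of Eqs. (1)-(2).
# H, B, M, L are illustrative stand-ins: h > b > m > l and (h + l) == (b + b).
H, B, M, L = 12, 8, 5, 4  # h + l = 16 = b + b

def payoffs(i_invests, r_reciprocates=None):
    """Return (investor payoff, recipient payoff) for one round."""
    if not i_invests:        # strategy is2: I keeps the endowment
        return M, M
    if r_reciprocates:       # is1 followed by rt1: mutual trust
        return B, B
    return L, H              # is1 followed by rt2: R betrays I

print(payoffs(True, True))    # (8, 8)
print(payoffs(True, False))   # (4, 12)
print(payoffs(False))         # (5, 5)
```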

The modification of the basic game consists in the fact that the first mover’s expected value from trusting can be affected by decreasing ($h \rightarrow h^-$) the highest payoff the counterpart receives if he is a betrayer, and/or by increasing ($l \rightarrow l^+$) the lowest payoff she receives if her trust is betrayed. The former is the case of a punishment (which can be referred to as “securing revenge”), the latter the case of an insurance (which we can consider as “securing protection”). Such a choice shall be made at the beginning of the game. The different alternatives, along with the payoffs of the game, are summarized graphically in the tree diagram shown in Fig. 1.

Fig. 1 Tree diagram of the experiment
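The two control mechanisms can likewise be sketched as simple payoff transformations, under the same illustrative payoff values assumed above (they are not parameters of the planned experiment):

```python
# Sketch of the two control mechanisms of the modified game: punishment
# lowers the betrayer's payoff (h -> h-), insurance raises the betrayed
# trustor's payoff (l -> l+). Values are illustrative assumptions.
H, B, M, L = 12, 8, 5, 4

def modified_payoffs(i_invests, r_reciprocates=None, punishment=0, insurance=0):
    """Return (investor, recipient) payoffs; only the betrayal outcome changes."""
    if not i_invests:
        return M, M                            # is2: no investment
    if r_reciprocates:
        return B, B                            # is1 + rt1: mutual trust
    return L + insurance, H - punishment       # is1 + rt2: betrayal

# "Securing revenge": punishing reduces the attractiveness of betrayal for R.
print(modified_payoffs(True, False, punishment=3))  # (4, 9)
# "Securing protection": insurance limits the trustor's own loss.
print(modified_payoffs(True, False, insurance=2))   # (6, 12)
```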
The subjects I and R involved in the experiments can be of two different kinds $k \in K = \{nat, \overline{nat}\}$, where $k_{nat}$ indicates a digital native and $k_{\overline{nat}}$ a non-digital native. The experiment will consist of several rounds of the game involving a mix of different subjects so as to cover all four possible combinations described below.

$$
\begin{aligned}
A&: I_{nat},\ R_{nat} \qquad & B&: I_{nat},\ R_{\overline{nat}} \\
C&: I_{\overline{nat}},\ R_{nat} \qquad & D&: I_{\overline{nat}},\ R_{\overline{nat}}
\end{aligned}
\tag{3}
$$

We will profile natives and non-natives prior to their participation in the experiment through the usage of a measurement scale [7] and a basic computer skill test. We will run several experiments with different groups of participants to ensure an equal number of observations for each of the four combinations described above. The final set of empirical evidence shall contain at least 300 observations for each group of participants. To increase the relevance of the study, as well as its validity, the execution of the experiments will aim at collecting evidence that is heterogeneous across the following dimensions: age, degree, and census. Furthermore, data will be collected in an international context, including subjects from countries other than Italy, so as to include in the analysis the factors related to technology level and technology regulation. At the end it will be possible to observe inter-group differences in the following total and average payoffs, again for each of the four possible combinations of subjects.
$$
\begin{aligned}
\Pi_{i_k} &= \sum \left( \pi_{i_k}(i_{s_1}, r_{t_1}) + \pi_{i_k}(i_{s_1}, r_{t_2}) + \pi_{i_k}(i_{s_2}) \right) \\
\Pi_{r_k} &= \sum \left( \pi_{r_k}(i_{s_1}, r_{t_1}) + \pi_{r_k}(i_{s_1}, r_{t_2}) + \pi_{r_k}(i_{s_2}) \right) \\
\mu(\Pi_{i_k}) &= \frac{\sum \left( \pi_{i_k}(i_{s_1}, r_{t_1}) + \pi_{i_k}(i_{s_1}, r_{t_2}) + \pi_{i_k}(i_{s_2}) \right)}{n} \\
\mu(\Pi_{r_k}) &= \frac{\sum \left( \pi_{r_k}(i_{s_1}, r_{t_1}) + \pi_{r_k}(i_{s_1}, r_{t_2}) + \pi_{r_k}(i_{s_2}) \right)}{n}
\end{aligned}
\tag{4}
$$

An analysis of variance (ANOVA) will allow us to test for significant differences in the averages of the different samples, with the null hypothesis being that the averages are equal, and thus that there are no behavioral differences among the different combinations of groups.
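A minimal sketch of the planned aggregate comparison is shown below, assuming randomly generated stand-in payoffs in place of the experimental records; scipy’s f_oneway implements the one-way ANOVA described above.

```python
# Illustrative sketch: per-group totals and means (Eq. 4), then a one-way
# ANOVA across the four combinations A-D of Eq. (3). The payoffs are
# randomly generated stand-ins, not real experimental data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# 300 simulated investor payoffs per combination, drawn from {l, m, b}.
groups = {g: rng.choice([4, 5, 8], size=300) for g in "ABCD"}

for g, pi in groups.items():
    print(g, "total:", pi.sum(), "mean:", round(pi.mean(), 3))

# Null hypothesis: equal means, i.e. no behavioral difference among the
# combinations of digital natives and digital immigrants.
f_stat, p_value = stats.f_oneway(*groups.values())
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```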

5 Discussion of the Approach

The experimental research strategy is motivated by the following considerations. First of all, the survey of the literature highlighted the inadequacy of the birthdate as a means to identify digital natives as a cohort. This aspect makes the identification of a population difficult and poses threats to the adoption of any research methodology that involves the definition of a sample (excluding an incidental one). A second aspect concerns the potential bias that can be introduced by self-reported measures, especially when they are used to assess behavioral traits. Finally, to investigate the trust and control dynamics of digital natives it is necessary to identify an empirical setting where these behavioral traits can be observed detached from other contextual factors. This is necessary to exclude potential confounds (i.e. previous acquaintance and/or closeness of the ties between the trustor and the trustee, previous experiences in the same workgroup, and similar) that might influence trust- and control-related decisions.
For all these considerations we deemed the experiment-based research strategy viable, since it allows us to resort to an incidental sample, to avoid the bias of self-reported measures, and at the same time to retain control over contextual factors that might act as confounds in the experiment design [43]. Specifically regarding this last aspect, we will ensure, through anonymous and random team member selection and team formation, that bias from contextual factors is avoided. The execution of the experiments with four different groups (see Eq. 3) will allow us to investigate potential differences both between teams formed by natives and teams formed by non-natives, and between homogeneous and heterogeneous teams (i.e. teams composed of both natives and immigrants), thus covering all the potential scenarios that we expect to find in real-life situations.
6 Conclusion

In this paper we have motivated, presented, and discussed a research project for an empirical investigation of digital natives’ behavioral traits, specifically referring to trust and control dynamics, using a cognitive theoretical background. The aim of the paper is to design an experiment-based empirical study that might provide an insight into psychological aspects whose dynamics might influence individuals’ behavior in teams. The study described in this paper is framed within a wider research project that will involve a cross-methodological approach mixing qualitative and quantitative analysis, on one side, and experimental and on-field data collection, on the other. In this paper we have laid the foundations of this ambitious path by presenting the design of an experiment-based research strategy to study the trust and control of digital natives. This experiment, based on a modified version of the trust game, is intended as a first step in this research program. After data collection and analysis, further investigations will allow for deeper studies on what kind of differences exist and how their dynamics work (i.e. how to possibly manipulate these dynamics to enhance team cooperation when digital natives are involved).

References

1. Basaglia, S. et al.: Team level antecedents of individual usage of a new technology. In:
Proceedings of the 16th European Conference on Information Systems. Galway, Ireland
(2008)
2. Bennett, S., et al.: The “digital natives” debate: a critical review of the evidence. Br. J. Educ.
Technol. 39(5), 775–786 (2008)
3. Berg, J., et al.: Trust, reciprocity, and social history. Games Econ. Behav. 10(1), 122–142
(1995)
4. Bohnet, I., Zeckhauser, R.: Trust, risk and betrayal. J. Econ. Behav. Organ. 55(4), 467–484
(2004)
5. Bohnet, I., et al.: The elasticity of trust: how to promote trust in the Arab Middle East and the
United States. In: Kramer, R.M., Pittinsky, T.L. (eds.) Restoring Trust in Organizations and
Leaders: Enduring Challenges and Emerging Answers. Oxford University Press, Oxford
(2012)
6. Braccini, A.M.: Does ICT influence organizational behaviour? An investigation of digital
natives leadership potential. In: Spagnoletti, P. (ed.) Organization change and Information
Systems—Working and Living Together In New Ways, pp. 11–19. Springer, Berlin (2013)
7. Braccini, A.M., Federici, T.: A measurement model for investigating digital natives and their
organisational behaviour. In: Proceedings of the 2013 International Conference on Information
Systems (ICIS 2013). Milano (2013)
8. Braccini, A.M., Federici, T.: Investigating digital natives and their organizational behavior: a
measurement model. In: Visintin, F., et al. (eds.) Organising for Growth: Theories and
Practices. CreateSpace Independent Publishing Platform, Udine (2014)
9. Brandts, J., Rivas, F.M.: On punishment and well-being. J. Econ. Behav. Organ. 72(3), 823–
834 (2009)
10. Brown, C., Czerniewicz, L.: Debunking the “digital native”: beyond digital apartheid, towards
digital democracy. J. Comput. Assist. Learn. 26(5), 357–369 (2010)
11. Cahill, T.F., Sedrak, M.: Leading a multigenerational workforce : strategies for attracting and
retaining millennials. Front. Health Serv. Manag. 29(1), 3–16 (2011)
12. Carillo, K. et al.: An investigation of the role of dependency in predicting continuance
intention to use ubiquitous media systems: combining a media system perspective with
expectation-confirmation theories. In: Proceedings of the European Conference on Information
Systems (ECIS). Tel Aviv, Israel (2014)
13. Carlsson, G., Karlsson, K.: Age, cohorts and the generation of generations. Am. Sociol. Rev.
35, 710–718 (1970)
14. Castaldo, S., et al.: The meaning(s) of trust: a content analysis on the diverse conceptualizations of trust in scholarly research on business relationships. J. Bus. Ethics 96(4), 657–668 (2010)
15. Castelfranchi, C.: The role of trust and deception in virtual societies. In: Proceedings of the 34th Annual Hawaii International Conference on System Sciences, p. 8. IEEE Computer Society (2001)
16. Castelfranchi, C., Falcone, R.: Trust Theory: a Socio-Cognitive and Computational Model.
Wiley, Chichester (2010)
17. Castelfranchi, C., Falcone, R.: Social trust: a cognitive approach. In: Castelfranchi, C., Tan, Y.-H. (eds.) Trust and Deception in Virtual Societies, pp. 55–90. Kluwer Academic Publishers (2001)
18. Cummings, L.L., Bromiley, P.: The organizational trust inventory. Trust in organizations:
Frontiers of theory and research, pp. 302–330. SAGE Publications Inc., Thousands Oaks
(1996)
19. Das, T.K., Teng, B.-S.: The risk-based view of trust: a conceptual framework. J. Bus. Psychol.
19(1), 85–116 (2004)
20. Finin, T. et al.: Information agents: the social nature of information and the role of trust. In: Klusch, M., Zambonelli, F. (eds.) Cooperative Information Agents V, pp. 208–210. Springer (2001)
21. Goleman, D.: What makes a leader? Harv. Bus. Rev. 82(1), 82–91 (2004)
22. Hargittai, E., Hinnant, A.: Digital inequality—differences in young adults’ use of the internet.
Commun. Res. 35(5), 602–621 (2008)
23. Helsper, E.J., Eynon, R.: Digital natives: where is the evidence? Br. Educ. Res. J. 36(3), 503–520 (2010)
24. Howe, N., Strauss, W.: Millennials Rising: the Next Great Generation. Vintage, New York
(2000)
25. Keif, M., Donegan, L.: Recruiting Gen X and millennial employees to grow your business.
2006 forecast. Technol. Trends Tactics 18(1), 89–92 (2006)
26. Kupperschmidt, B.R.: Understanding net generation employees. J. Nurs. Adm. 31(12), 570–
574 (2001)
27. Luhmann, N.: Familiarity, confidence, trust: problems and alternatives. In: Gambetta, D. (ed.) Trust: Making and Breaking Cooperative Relations, electronic edn, pp. 94–107. Blackwell Publishers Ltd, Oxford (2000)
28. Margaryan, A., et al.: Are digital natives a myth or reality? University students’ use of digital
technologies. Comput. Educ. 56(2), 429–440 (2011)
29. Marzo, F., Castelfranchi, C.: Trust as individual asset in a network: a cognitive analysis. In:
Spagnoletti, P. (ed.) Organization Change and Information Systems, LNISO, vol. 2, pp. 167–
175. Springer, Heidelberg (2013)
30. Mayer, R.C. et al.: An integrative model of organizational trust. Acad. Manag. Rev. 20(3), 709–734 (1995)
31. McMahon, M., Pospisil, R.: Laptops for a digital lifestyle: millennial students and wireless mobile technologies. In: Proceedings of the Ascilite Conference, pp. 421–431. Brisbane (2005)
32. Oblinger, D.G., Oblinger, J.L.: Is it age or IT: first steps toward understanding the net
generation. In: Oblinger, D.G., Oblinger, J.L. (eds.) Educating the Net Generation, pp. 2.1–
2.20, North Carolina State University (2005)
33. Orlikowski, W.J., Robey, D.: Information technology and the structuring of organizations. Inf.
Syst. Res. 2(2), 143–169 (1992)
34. Pennarola, F., Caporarello, L.: Enhanced class replay: will this turn into better learning? In:
Wankel, C., Blessinger, P. (eds.) Increasing Student Engagement and Retention Using
Classroom Technologies: Classroom Response Systems and Mediated Discourse
Technologies, pp. 143–162. Emerald Group Publishing Ltd (2013)
35. Prensky, M.: Digital natives, digital immigrants. Horizon 9(5), 1–6 (2001)
36. Prensky, M.: Digital natives, digital immigrants, part II: do they really think differently?
Horizon 9(6), 1–9 (2001)
37. Falcone, R. et al.: A fuzzy approach to a belief-based trust computation. In: Falcone, R., Singh, M., Tan, Y.H. (eds.) Trust, Reputation, and Security: Theories and Practice, pp. 73–86. Springer, Heidelberg (2003)
38. Rhodes, S.: Age-related differences in work attitudes and behavior: a review and conceptual
analysis. Psychol. Bull. 93, 328–367 (1983)
39. Schewe, C.D., et al.: “If you’ve seen one, you’ve seen them all!” are young millennials the
same worldwide? J. Int. Consum. Mark. 25(1), 3–15 (2013)
40. Sitkin, S.B., Roth, N.L.: Explaining the limited effectiveness of legalistic “remedies” for
trust/distrust. Organ. Sci. 4(3), 367–392 (1993)
41. Smith, K.T.: Work-life balance perspectives of marketing professionals in generation Y. Serv.
Mark. Q. 31(4), 434–447 (2010)
42. Sorrentino, M., Niehaves, B.: Intermediaries in e-inclusion: a literature review. In: Proceedings
of the 43rd Hawaii International Conference on Information Systems (HICSS) (2010)
43. Spagnoletti, P. et al.: Exploring foundations for using simulations in IS research. In:
Proceedings of the 34th International Conference on Information Systems, pp. 1–15. Milan
(2013)
44. Tapscott, D.: Growing up Digital: The Rise of the Net Generation. McGraw-Hill, New York
(1998)
45. Vitari, C., Piccoli, G., Mola, L., Rossignoli, C.: Antecedents of IT dynamic capabilities in the
context of digital data genesis. In: Proceedings of the 20th European Conference on
Information Systems. Barcelone, Spain (2012)
46. Vodanovich, S., et al.: Digital natives and ubiquitous information systems. Inf. Syst. Res. 21
(4), 711–723 (2010)
47. Vom Brocke, J. et al.: Value assessment of enterprise content management systems: a
process-oriented approach. In: D’Atri, A. and Saccà, D. (eds.) Information Systems: People,
Organizations, Institutions, and Technologies, pp. 131–138. Physica-Verlag HD, Heidelberg
(2010)
48. Yadin, A.: Millennials and privacy in the information age: can they coexist ? IEEE Technol.
Soc. Mag. 31(4), 32–38 (2012)
49. Zimerman, M.: Digital natives, searching behavior and the library. New Libr. World. 113(3-4),
174–201 (2012)
When Teachers Support Students
in Technology Mediated Learning

Leonardo Caporarello, Massimo Magni and Ferdinando Pennarola

Abstract This paper focuses on information technology adoption and use within the education sector. We have analyzed the impact on learning effectiveness of technology-mediated learning environments, characterized by the adoption of tablet-based technologies as a revolutionary complement to traditional teaching/learning techniques. Our research analyzes the effect of “Support Activities” on grades. “Support Activities” are defined in this paper as the set of constructs “Teachers’ Encouragement”, “Classmates’ Encouragement” and “Technical Support Availability”. Grades are used as a measure of learning effectiveness. A sample of 370 students participated in our study, attending experimental classes that used tablets as an ordinary working tool to access digital resources. Our mainstream theory reference was built on the theoretical foundations of the Technology Acceptance Model, by comparing the perceived effect of those constructs between grade ranges. Finally, the experimental sample was compared to
classes where the same teachers used traditional learning resources. The aim of this
work is to give a practical understanding of support factors influencing
tablet-mediated learning effectiveness. In particular, our findings show the differ-
ences between scientific and humanistic subjects. Our research confirms that
technology alone does not revolutionize teaching and learning; nonetheless, it
contributes to an improved experience if support initiatives are deployed.

Keywords Tablet technologies · Technology mediated learning · Learning effectiveness

The authors are grateful to Impara Digitale, the non-profit association that authored the experimental teaching and learning environment described in the paper.

L. Caporarello · M. Magni · F. Pennarola (✉)
Department of Management and Technology, Bocconi University, via Roentgen 1, 20136 Milan, Italy
e-mail: ferdinando.pennarola@unibocconi.it

1 Introduction

The traditional innovation model in computer-related domains was ruled for years by military forces, research centers and big corporations, which, as the exclusive actors, pushed the technological frontier. They did so by addressing huge amounts of R&D money to products and projects dedicated to corporate customers. If a potential market was envisioned, and if a version adapted to mass consumption could be manufactured, the technological innovation could later diffuse among individual consumers. There is plenty of historical evidence of these processes. For example, the Defense Advanced Research Projects Agency (DARPA), an agency for military technology research, built the Internet in the 1960s. Similarly, the Personal Computer is an adaptation and evolution of previously existing mainframes. More recently, accelerometers, today commonly plugged into smartphones, originated from military research.
Nowadays, we are experiencing an important change of paradigm, towards a model where the direction of innovation is reversed. Today’s innovative devices were originally created to fulfill individual consumers’ needs. This process is also referred to as the “consumerization of IT”. One firm that contributed to this paradigm shift above all is Apple. In this work we focus on the educational sector, which is greatly influenced by this consumerization wave. We argue that its activities are positioned at the intersection between strictly personal use and work-specific application. In fact, schools act as the bridge leading students to the world of work, and at the same time they may profit from widely adopted technologies by leveraging the diffused practice of Bring Your Own Device (BYOD). What happens when students bring their tablets to school? Could the teaching/learning environment be revolutionized thanks to this consumer-oriented technology?
While there have been past initiatives on ICT in education, they were limited to the introduction of digital devices and isolated competences within the learning sector. Not enough attention was paid to integration and support actions. Devices were placed in separate classrooms and competences were confined to a minority of professors in the scientific areas. As a result, until the late 1990s, IT was considered a facultative, and maybe also superfluous, extension of certain learning activities [1], while the attention toward students’ learning and satisfaction was considered pivotal [2–4].
In Italy, according to the “Growth 2.0” Decree (also known as the “E-textbooks Law”), starting with school year 2014–2015 all schools are supposed to adopt digital books, or at least to mix traditional sources with digital ones [5–7]. But the real point is how to leverage this opportunity. A pioneering group of high schools in the country anticipated the wave by launching an experimental teaching/learning project in 2010. The idea was to ask students, and their respective families, to swap the budget dedicated to textbooks for the purchase of a tablet, on either the iOS or the Android platform. Under this agreement, the school would train faculty to teach by leveraging the tablet and digital resources in the classroom, guaranteeing the achievement of the same results, as long as all the students in the class carried their tablet to school every day.
2 The Project Relevance and Its Research Antecedents

Our research project started from an alarming skepticism: is there any risk that all of this technology deployment will result in no or little use? Despite the advancements in technology and the increasing investments in its adoption, the problem of unutilized systems is serious. Studies on this trend often call it the “productivity paradox”, as breakthroughs in information technology have brought about poor productivity growth [8–10]. This calls for a better understanding of the deployment of technology in organizations and of its user acceptance. Since the early 1990s, a new branch of literature has emerged based on the Technology Acceptance Model (from now on TAM), an information systems theory introduced by Davis in 1989; its major extensions are TAM2 [11, 12] and the Unified Theory of Acceptance and Use of Technology (from now on UTAUT [13]). A TAM3 [14] and a UTAUT2 [15] have also been proposed. The structure of the data analyzed in this work was designed based upon the TAM. The core concept of the TAM is that a number of factors influence how people react to, and therefore “accept”, a new technology. In the original version of TAM those forces are:
• Perceived Usefulness, described as the perceived job performance enhancement
due to the use of a particular system [16];
• Perceived Ease of Use, described as the perceived degree to which a person
finds using a particular system free from effort [16];
• External Variables, or “External Stimulus”, are system design features and all external variables that may influence the user’s perception of use.
This theory is widely accepted and consolidated. Many scholars have provided empirical evidence of the model’s validity and reliability via replications and re-examinations [17–21]. Because of its reliable foundations, this model has been extended many times to explain the effect of other factors on technology acceptance. In the extended TAM2 model, [12] found that user acceptance was significantly influenced by both social influence and cognitive instrumental processes in mandatory settings. Social influence is described as the set of subjective norm, voluntariness and image (i.e. social status). The cognitive instrumental processes determining perceived usefulness are instead: job relevance, output quality, result demonstrability and perceived ease of use. The psychological and social components thus started to gain increasing importance in such a technical and technological field. In the proposed unifying theory, or UTAUT [13], social influence constructs were found significant in mandatory settings only. Moreover, the determinants of intention varied over time, with some changing from significant to not significant as experience increased. Performance expectancy, effort expectancy, social influence, and facilitating conditions were found to be direct determinants of user acceptance and technology use. In the work of [13], facilitating conditions are described as the perceived support of the organizational and technical infrastructure for the use of information systems. This concept of facilitating resources, as the perceived existence
of resources and support availability, was reapplied by [22]. Social influence is described as the degree to which consumers think other relevant people (usually friends and family) believe in and approve of that technology.
There have been many real cases where technology adoption in organizations failed because of user resistance due to bad implementation. This suggests that a prerequisite for a technological system’s acceptance is for organizations to develop effective involvement actions [14]. One important and significant factor influencing usage intentions is enjoyment [23]. Szajna [24] built on the original TAM’s theoretical foundations to design a study on intrinsic motivation for learning. In a sample of business professionals, he found that game-based training in the use of an information system enhanced intention to use by creating favorable perceptions. Following this line of study, [12] analyzed the role of computer playfulness (the concept of intrinsic motivation) and computer anxiety (emotion) in determining perceived ease of use. Results were found to be highly significant. In a study on office productivity tools via computer-mediated learning, [25] argued that the learning experience depends on the way information is presented: if not optimal, the impact on both satisfaction and performance proved negative [25]. Furthermore, a case study on technology-mediated learning of Adobe Photoshop at university [26] showed effects depending on learning engagement. Video support was found to have a negative impact on learning engagement, which in turn negatively influenced learning effectiveness and satisfaction. Hu and Hui [26] further argued that the learning medium affects learning engagement and, in turn, performance. Interactive support was confirmed to be a key success factor.
The UTAUT2 model [15] integrated the original UTAUT with three new constructs: hedonic motivation, price value, and habit. The role of enjoyment in increasing behavioral intention to use was also stressed by [22]. Vividness and interactivity were shown to enhance satisfaction and interest; depending on task complexity, improved performance and reduced mental effort were also attained [25]. The user acceptance of an information system is also a matter of expectations. Venkatesh and Goyal [10] discovered that the disconfirmation of expectations reduced intention to use the system because users developed a lack of trust. Earlier findings by [27] in a related field stated that when unrealistically negative expectations were developed, job applicants’ attraction diminished. Following this line of study, [10] supposed that, despite a potentially positive experience, users might still focus on negative aspects of the system. In other words, it is the typical real-life situation where, even though a system does N things, users focus on the N + 1 thing it does not do. Those findings recommend that organizations set realistic expectations, so as to increase the likelihood of acceptance and long-run usage [10].
Venkatesh [12] went further to test for the influence of both internal and external dimensions of control. On the one hand, external control was represented by facilitating conditions in the use of technology [28]. Those include the availability of technical support from IT staff, which is especially important in the early stages, when the impact of a new technology provokes a shock to routine operations [29]. The relevance of such support was confirmed in later empirical studies showing consultant backup to positively affect control perception [30, 31]. On the other hand, internal control was approximated by computer self-efficacy, i.e. the self-assessed
ability to perform computer-based actions. Venkatesh et al. [32] found that, even after substantial experience, internal control perception remains the main determinant of a system’s perceived ease of use. Based on this, [12] argues that, despite previous experience, ease of use will be determined by general confidence with the computerized system. Therefore, we can expect computer anxiety to have a negative effect even with increasing experience and knowledge. The study in which [14] introduced the TAM3 model also focuses on the moderating effect of experience. In fact, since people’s attitudes and reactions change over time, experience is an important factor in Information Technology and Information Systems research [33, 34]. In addition, the ultimate success of a new system or technology can be judged only in the medium-long run [35–37]. Based on this, [14] argue that the effect of perceived ease of use on perceived usefulness will increase with experience. This builds upon the concept that perceived usefulness is a function of perceived ease of use [16]. Therefore, the role of perceived ease of use remains a crucial factor even after significant experience with IT tools. The project analyzed in this work is set in the education sector. While the next section is dedicated to its detailed description, it is useful to understand what education mediated by technology means. Technology-mediated learning is defined as a learning method where interactions are mediated by information technologies [38]. A learning tool by itself cannot determine students’ learning effectiveness or satisfaction. A study on PDAs at school showed a deceivingly low level of perceived usefulness on the users’ part [39]. Technology adoption risks being a cost instead of an investment if support and ancillary activities are not treated as crucial. Casalino et al. [39] provided empirical evidence that adopting ICT at schools just by following the bandwagon effect did not guarantee better academic results.

3 The Research Project Description and Research


Questions

The use of technology in education frameworks is a relevant subject in a number of disciplines. In management and in the social sciences in general, there is a wide background on general technology adoption, but not necessarily for learning purposes. In these instances, the usual research questions can be synthesized as follows: “Why do people use technology in an effective and efficient way? How does technology use help people execute assigned tasks? Does technology use amplify, other things being equal, the effectiveness of the tasks to be completed?”. These questions have a high practical impact, because knowing the factors of influence, one could leverage them to widen technology adoption. Moreover, by knowing the relationship between the results achieved and the effective and efficient use of technology, one could help build appropriate conditions to amplify the quality and effectiveness of the final tasks. If the final task has to do with learning,
the designer of the system is definitely contributing to something that has a very
high social value, i.e. better learning for young generations.
In 2010 a two-year pilot experiment started in selected classes of one Italian high school. The experiment rapidly spread over the country, and a network of 14 participating institutions had gathered by the beginning of the school year 2012/2013. Each school proposed one or more of its classes (average size of 25 students): students were asked to buy their own tablet, as a substitute for textbooks, and bring it to school every day. Teachers were trained to restructure their teaching syllabus in order to leverage digital resources, by accessing (among other sources) a centralized database of certified publicly available sources on all subjects taught (i.e. mathematics, Italian literature, history, physics, chemistry, biology, music, etc.). A constructivist learning approach was used to design the whole learning calendar: students were asked to learn and interact in teams and individually, supported by their teachers. It is important to remark that in Italy the single class is a strong organizational unit. In fact, the student group stays the same not only throughout the day, but also over the whole school cycle (a 5-year term for high school). Similarly, the group of teachers follows the class throughout its entire cycle. Regular tests were held along the school year, as with the traditional (text-based learning) classes in the respective institutions, and each student received grades and feedback. Each school appointed a control sample, i.e. one or more class units with traditional teaching and learning methods, taught by the same faculty body as the experimental class. This allowed for a close comparison that controls for teachers’ method and grading policy. While the resources and tools are different, the studied contents are the same. After the data cleaning, our valid dataset comprises an experimental sample of 370 students from 21 classes in 9 different high schools.
Each student participating in the study was profiled anonymously (his/her identity was hidden behind a numeric code) and filled out an entry questionnaire (beginning of the school year) and an exit questionnaire (end of the school year). Questionnaires were built around the TAM described earlier. Thirteen constructs were identified, and every survey question is linked to a construct’s measurement.¹ Each school’s registrar provided the whole grade record (all the subjects studied) for each student participating in the study.
The ultimate aim of the study was to inquire into the effectiveness of the experimental teaching and learning methods: did the analyzed technology-mediated learning method favor better learning compared to traditional teaching approaches? What are its direct and indirect benefits? More precisely, our research questions are listed below:

¹ They are: (1) Perceived Usefulness of technology, (2) Perceived Ease of Use, (3) Attitude: Satisfaction, (4) Attitude: Preference, (5) Intention to Use, (6) Perceived Advantage of technology, (7) Perceived Teachers’ Encouragement, (8) Perceived Classmates’ Encouragement, (9) Awareness of true technology potential, (10) Internet Access, (11) Technical Support, (12) Previous Experience with internet and computers, (13) Self-Efficacy in the use of Internet.
Hp1. Students perceive technology as useful, but they do not sense a comparative advantage in relation to books, unless they receive effective encouragement from their teachers, who help them use the technology as a real tool for their studies.

1a. Perceived usefulness has a positive significant effect on students’ performance in terms of total grade average.
1b. Perceived advantage (meaning the comparative advantage of the use of technology vs. the use of books) has a negative significant effect on students’ performance.
1c. Perceived teachers’ encouragement has a positive significant effect on students’ performance.
Hp2. Classmates’ encouragement has a positive but only marginally significant effect on students’ performance.
Hp3. Perceived advantage and satisfaction have a positive significant effect on perceived usefulness.
Hp4. Top students do not perceive technology as useful, and they do not sense a comparative advantage in relation to books. Teachers are still the main factor influencing students’ performance.
4a. Perceived usefulness has no significant effect on high performing students’ performance.
4b. High performing students do not perceive a comparative advantage of technology in relation to books.
4c. Teachers’ encouragement has a positive significant effect on high performing students’ performance.
Hp5. Low performing students perceive technology as useful, but not better than books in comparative terms. Teachers’ encouragement has the most significant effect on performance, and previous experience has a positive but marginally significant effect on performance.
5a. Perceived usefulness has a positive significant effect on the performance of low performing students.
5b. Perceived advantage of technology has a negative significant effect on the performance of low performing students.
5c. Teachers’ encouragement has a positive significant effect on low performing students’ performance.
5d. Previous experience in the use of technology has a positive but marginally significant effect on low performing students’ performance.
Hp6. Students who perceive higher teachers’ encouragement show a higher and more significant positive effect of perceived usefulness on their performance than students who perceive lower teachers’ encouragement.
Hp7. Intention to use has a positive significant effect on students’ performance.
4 Results of the Study

To test the first hypothesis we ran a regression in which performance was the dependent variable and the constructs were the independent variables. In particular, we used all of the constructs except Intention to Use, Awareness of true technology potential, and Self-Efficacy in using the computer and the Internet.
The first model runs as follows:
(1) Grades = a + b1 (Usefulness) + b2 (Ease) + b3 (Satisfaction) + b4 (Preference) + b5 (Advantage) + b6 (Teachers) + b7 (Classmates) + b8 (Internet) + b9 (Support) + b10 (Experience)
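As an illustration of how model (1) could be estimated, the following minimal sketch uses statsmodels; the file name tam_constructs.csv and the column names are hypothetical stand-ins for the questionnaire-derived construct scores, not artifacts from the study.

```python
# Minimal sketch of model (1) with statsmodels; input data are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("tam_constructs.csv")  # hypothetical input file

model = smf.ols(
    "grades ~ usefulness + ease + satisfaction + preference + advantage"
    " + teachers + classmates + internet + support + experience",
    data=df,
).fit()
print(model.summary())  # unstandardized coefficients, t-ratios, p-values
```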
Results from the regression are shown in the table below:

Coefficients—Dependent variable: Annual grades average
(B, Std. error: unstandardized coefficients; Beta: standardized coefficient)

Model          B        Std. error   Beta     t-ratio   p-value
Constant       5.897    0.449        –        13.131    0.000
Usefulness     0.261    0.111        0.197    2.348     0.019
Ease of use    −0.105   0.110        −0.061   −0.954    0.341
Satisfaction   0.063    0.097        0.055    0.650     0.516
Preference     −0.156   0.117        −0.098   −1.339    0.182
Advantage      −0.545   0.101        −0.316   −5.414    0.000
Teachers       0.462    0.092        0.354    5.040     0.000
Classmates     0.160    0.089        0.105    1.787     0.075
Internet       0.063    0.074        0.049    0.851     0.396
Support        −0.207   0.097        −0.154   −2.141    0.033
Experience     0.110    0.077        0.080    1.423     0.156

Thus, it can be inferred that each part of the first hypothesis is confirmed:
1a. Perceived Usefulness has a positive (coeff. = 0.261) significant
(p-value = 0.019) effect on students’ performance.
1b. Perceived Advantage of Technology has a negative (coeff. = −0.545) significant
(p-value = 0.000) effect on students’ performance.
1c. Teachers Encouragement has a positive (coeff. = 0.462) significant
(p-value = 0.000) effect on students’ performance.
The second hypothesis is also confirmed by this regression:
2. Classmates’ encouragement has a positive (coeff. = 0.160) but marginally significant (p-value = 0.075) effect on students’ performance.
Since Perceived Usefulness appears to be a fundamental variable, we ran a regression using it as the dependent variable, with Ease of Use, Perceived Advantage of technology, Satisfaction and Preference as independent variables:
(2) Usefulness = a + b1 (Ease) + b2 (Advantage) + b3 (Satisfaction) + b4 (Preference)
Coefficients—Dependent variable: Perceived usefulness

Model          B        Std. error   Beta     t-ratio   p-value
Constant       −0.466   0.188        –        −2.479    0.014
Ease of use    −0.003   0.050        −0.003   −0.065    0.948
Advantage      0.437    0.041        0.336    10.542    0.000
Satisfaction   0.561    0.036        0.642    15.734    0.000
Preference     0.050    0.046        0.041    1.088     0.277

This regression confirms the third hypothesis:

3. Perceived Advantage has a positive (coeff. = 0.437) and significant (p-value = 0.000) effect on Perceived Usefulness, as does Satisfaction (coeff. = 0.561, p-value = 0.000).
In order to make a deeper analysis, the sample was divided into two groups using the annual grades average (the split and re-estimation are sketched below):
• Top students are those whose average is equal to or greater than 7 on a scale from 1 to 10;
• Low performing students are those whose average is less than 7 on a scale from 1 to 10.
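A sketch of how this split and the per-sub-sample re-estimation could be reproduced, under the same hypothetical data frame and column names used above:

```python
# Split the hypothetical data frame on the annual grade average (scale 1-10)
# and re-fit model (1) on each sub-sample.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("tam_constructs.csv")  # hypothetical input file
formula = ("grades ~ usefulness + ease + satisfaction + preference"
           " + advantage + teachers + classmates + internet"
           " + support + experience")

for name, sub in [("top", df[df["grades"] >= 7]),
                  ("low", df[df["grades"] < 7])]:
    fit = smf.ols(formula, data=sub).fit()
    print(name, "Teachers coeff:", round(fit.params["teachers"], 3),
          "p:", round(fit.pvalues["teachers"], 3))
```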
At this point two different regressions were run, one on each sub-sample. The regression on the first sub-sample uses the same model as the first one (1); results are as follows:

Coefficients—Dependent variable: Annual grades average

Model          B        Std. error   Beta     t-ratio   p-value
Constant       7.783    0.494        –        15.744    0.000
Usefulness     0.206    0.153        0.279    1.347     0.182
Ease of use    −0.127   0.147        −0.133   −0.865    0.390
Satisfaction   0.005    0.128        0.0007   0.035     0.972
Preference     −0.032   0.131        −0.036   −0.243    0.808
Advantage      −0.490   0.118        −0.547   −4.136    0.000
Teachers       0.247    0.112        0.315    2.219     0.029
Classmates     0.027    0.099        0.031    0.271     0.787
Internet       0.064    0.087        0.092    0.739     0.462
Support        −0.070   0.110        −0.092   −0.635    0.527
Experience     0.027    0.076        0.038    0.348     0.729

The first sub-sample is composed only of top students. These results prove the fourth hypothesis to be true in each of its parts:
4a. Perceived Usefulness has no significant effect (p-value = 0.182) on top students’ performance.
4b. Top students do not perceive a comparative advantage of technology in relation to books (Advantage coeff. = −0.490, p-value = 0.000).
4c. Teachers’ encouragement has a positive (coeff. = 0.247) significant (p-value = 0.029) effect on top students’ performance.
The same regression was run on the second sub-sample, composed of low performing students:

Coefficients—Dependent variable: Annual grades average

Model          B        Std. error   Beta     t-ratio   p-value
Constant       5.443    0.391        –        13.934    0.000
Usefulness     0.183    0.092        0.195    1.995     0.047
Ease of use    −0.033   0.091        −0.027   −0.357    0.722
Satisfaction   −0.009   0.081        −0.011   −0.114    0.909
Preference     −0.051   0.102        −0.046   −0.503    0.615
Advantage      −0.182   0.089        −0.145   −2.043    0.042
Teachers       0.218    0.080        0.237    2.710     0.007
Classmates     0.092    0.079        0.087    1.168     0.244
Internet       −0.041   0.063        −0.044   −0.648    0.517
Support        −0.122   0.086        −0.128   −1.425    0.155
Experience     0.127    0.070        0.127    1.807     0.072
Results show that the fifth hypothesis holds true in each of its parts:
5a. Perceived usefulness has a positive (coeff. = 0.183) significant (p-value = 0.047) effect on the performance of low performing students.
5b. Perceived Advantage of technology has a negative (coeff. = −0.182) significant (p-value = 0.042) effect on the performance of low performing students.
5c. Teachers’ encouragement has a positive (coeff. = 0.218) significant (p-value = 0.007) effect on low performing students’ performance.
5d. Previous experience in the use of technology has a positive (coeff. = 0.127) but marginally significant (p-value = 0.072) effect on low performing students’ performance.
Hypothesis 5b implies that low performing students perceive the technology as useful, but do not feel a real advantage of technology compared to books. The key role of teachers’ encouragement is clearly confirmed. Since teachers’ encouragement has proven to be a key variable in almost every analysis conducted so far, the original sample was divided into two sub-samples on the basis of perceived teachers’ encouragement:

• Students perceiving high teachers’ encouragement, who expressed a judgment equal to or greater than 3.5 on a scale from 1 to 5;
• Students perceiving low teachers’ encouragement, who expressed a judgment less than 3.5 on a scale from 1 to 5.
Two separate regressions were run, one for each sub-sample. Results appear in the tables below:

Coefficients—Dependent variable: Annual grades average (sub-sample: Perceived Teachers’ Encouragement ≥ 3.5)

Model          B        Std. error   Beta     t-ratio   p-value
Constant       7.218    1.370        –        5.268     0.000
Usefulness     0.414    0.224        0.325    1.848     0.068
Ease of use    0.124    0.255        0.059    0.488     0.627
Satisfaction   −0.201   0.180        −0.167   −1.119    0.266
Preference     −0.497   0.251        −0.261   −1.979    0.051
Advantage      −0.577   0.189        −0.379   −3.046    0.003
Teachers       0.461    0.349        0.157    1.321     0.190
Classmates     0.263    0.205        0.141    1.284     0.202
Internet       0.065    0.164        0.043    0.394     0.695
Support        −0.160   0.223        −0.086   −0.720    0.473
Experience     −0.088   0.158        −0.061   −0.554    0.581
Coefficients—Dependent variable: Annual grades average (Perceived teachers' encouragement < 3.5)

Model          B         Std. error   Beta      t-ratio   p-value
Constant       5.582     0.545        –         10.238    0.000
Usefulness     0.238     0.135        0.170     1.763     0.079
Ease of use    −0.175    0.126        −0.108    −1.395    0.164
Satisfaction   0.132     0.120        0.117     1.099     0.273
Preference     −0.087    0.135        −0.058    −0.643    0.521
Advantage      −0.552    0.123        −0.291    −4.487    0.000
Teachers       0.480     0.124        0.274     3.867     0.000
Classmates     0.130     0.101        0.090     1.279     0.202
Internet       0.065     0.083        0.050     0.787     0.432
Support        −0.207    0.109        −0.147    −1.899    0.059
Experience     0.159     0.091        0.121     1.735     0.084

Results show that Hypothesis 6. is confirmed: the first sub-sample shows a


higher impact of Perceived Usefulness (0.414 > 0.238) with a higher significance
(p-value 0.068 < 0.079). Also it’s worth to be pointed out that even if the second
group gives a grade of 3.5 to teachers’ encouragement, it still remains a key
determinant of students’ performance. In order to go further with this analysis on
teachers encouragement a One-Way ANOVA was made, using Perceived teachers
encouragement as discriminant to divide the sample in two groups, and taking
Perceived Usefulness, Perceived Ease of use, Satisfaction, Preference, Comparative
Advantage and Classmates Encouragement as dependent Variable of the ANOVA,
so to understand what judgment the two groups of students gave about all of these
constructs in their questionnaire, to better analyze the effect of Teachers
Encouragement on their qualitative assessment of the use of technology in every
day life at school:

Discriminant: Perceived teachers' encouragement

Construct                  Group   Mean    Significance
Perceived usefulness       1       3.334   0.000
                           2       2.920
Ease of use                1       3.997   0.000
                           2       3.775
Satisfaction               1       4.025   0.006
                           2       3.781
Preference                 1       3.919   0.001
                           2       3.711
Comparative advantage      1       2.887   0.000
                           2       2.594
Classmates encouragement   1       3.928   0.000
                           2       2.788
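A rough sketch of this one-way ANOVA, reusing the hypothetical column names from the earlier sketch: with two groups, scipy's f_oneway compares each construct's mean across the two encouragement groups.

```python
import pandas as pd
from scipy import stats

df = pd.read_csv("survey.csv")            # hypothetical file, as in the earlier sketch
high, low = df[df["teachers"] >= 3.5], df[df["teachers"] < 3.5]

constructs = ["usefulness", "ease", "satisfaction", "preference",
              "advantage", "classmates"]
for c in constructs:
    f, p = stats.f_oneway(high[c].dropna(), low[c].dropna())
    print(f"{c}: means {high[c].mean():.3f} vs {low[c].mean():.3f}, p = {p:.3f}")
```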

Group 1 represents students who perceive teachers' encouragement to be greater than 3.5, while group 2 is made up of students who perceive teachers' encouragement to be lower than 3.5. It can easily be noticed that there is a significant difference between the two groups in the judgments expressed on each of the key variables of the TAM: those who perceived teachers' encouragement to be higher gave a higher grade to every other variable. This result proves the fundamental role played by teachers' encouragement in the perception that students have of technology and, thus, in their attitude toward it. Finally, to test the impact of Intention to Use on students' performance, we ran a final regression, using Annual Grades Average as the dependent variable and all of the constructs as independent variables, including those that were eliminated from the analysis at the beginning. The model is:
The model is:
(3) Grades = a + b1 (Usefulness) + b2 (Ease) + b3 (Satisfaction) + b4
(Preference) + b5
(Advantage) + b6 (Teachers) + b7 (Classmates) + b8 (Internet) + b9
(Support) + b10
(Experience) + b11 (Intention) + b12 (Potential) + b13 (Self-Efficacy)
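In statsmodels' formula syntax, model (3) would look as follows, again with the hypothetical column names used in the earlier sketches:

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey.csv")            # hypothetical file, as in the earlier sketch
full = ("grades ~ usefulness + ease + satisfaction + preference + advantage"
        " + teachers + classmates + internet + support + experience"
        " + intention + potential + self_efficacy")
fit3 = smf.ols(full, data=df).fit()       # full model with the re-added constructs
print(fit3.summary())                     # reports B, Std. error, t-ratio and p-value per term
```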
The impact of Intention to Use on the dependent variable turned out to be positive and significant, as shown in the table below:

Coefficients—Dependent variable: Annual grades average

Model           B         Std. error   Beta      t-ratio   p-value
Constant        6.078     0.455        –         13.344    0.000
Usefulness      0.250     0.111        0.188     2.242     0.026
Ease of use     0.060     0.128        0.035     0.465     0.642
Satisfaction    −0.068    0.115        −0.059    −0.594    0.553
Preference      −0.167    0.116        −0.105    −1.441    0.150
Advantage       −0.524    0.101        −0.304    −5.194    0.000
Teachers        0.422     0.094        0.323     4.488     0.000
Classmates      0.127     0.089        0.084     1.425     0.155
Internet        0.101     0.076        0.078     1.328     0.185
Support         −0.197    0.097        −0.146    −2.043    0.042
Experience      0.098     0.077        0.072     1.262     0.208
Intention       0.208     0.105        0.152     1.982     0.048
Potential       −0.078    0.074        −0.065    −1.057    0.291
Self-Efficacy   −0.185    0.090        −0.130    −2.056    0.041

This result confirms the seventh hypothesis:
7. Intention to Use has a positive (coeff. = 0.208) and significant (p-value = 0.048) effect on students' performance.

5 Debate and Conclusions

This work contributes to the understanding and application of technology-mediated learning. Consistent with previous studies, we showed that technology per se cannot be expected to revolutionize teaching and learning. Indeed, the effectiveness of technology is tied to the organizational environment in which it is implemented and to the characteristics of its users. Specifically, our research is consistent with the current research debate on the interplay between contextual conditions and users' behaviors. In our research project, the students of the experimental classes use tablets as their only working tool, both to access digital resources and to produce digital output. In the proposed multivariate model, which takes the characteristics of the individual into account, previous experience was found not to be a significant determinant of learning effectiveness. This is an important finding that can be tied to the current literature on the Y Gen. Indeed, the lack of significance of previous experience can be traced back to the fact that these individuals are already digital natives and do not need previous experience to feel comfortable with the system. This finding paves the way for future research on the effect of the use of technology-based tools for enhancing individual learning across different generations. From our perspective, the tech-based learning environment may need to be approached in different ways, in line with different generational attitudes and experiences.
The multivariate model stressed the highly significant and positive effect of “Teachers' Encouragement”, making it an interesting variable to analyze more deeply for its implications. This aspect is particularly critical because it offers a first perspective for looking at the effect of the environment. Indeed, teachers mold the climate of the classroom, providing support and encouraging students to explore different ways of learning. Conversely, the effect of “Classmates' Encouragement” was mitigated by the introduction of other explanatory variables (in particular “Teachers' Encouragement”), which made it statistically not significant.


This result further corroborates the importance of teachers in these kinds of environments and suggests that the success of the introduction of a tech-based learning system depends on the approach of the teacher. In particular, from a practical standpoint, institutions that want to introduce such projects should work closely with teachers and train them in order to facilitate the introduction process. This requires a higher investment in terms of resources and time but, given its influence on the learning outcome, it can be considered a critical step to monitor. Similarly, the effect of “Technical Support Availability” was absorbed by the introduction of “Teachers' Encouragement” in the regression, which made its coefficient negative. This evidence can again be traced back to the way students belonging to the Y Gen approach their learning process through tech-based tools. Indeed, the technical characteristics of the system are not appropriated through institutional channels designed to explain how the system works; rather, their appropriation passes through social interaction with other individuals. Furthermore, the results show that high-graded students have amplified perceptions of these constructs. The most significant difference is found for perceived teachers' support, confirming it as the most relevant variable in this study. Finally, the research found that traditional classes performed better in scientific subjects, while experimental classes performed better in humanistic subjects. This suggests that scientific subjects in digital form may represent a greater break with traditional practices, and it therefore calls for a greater understanding of the underlying learning dynamics. Future research could focus on teachers' training, their teaching methods and their choice of resources, all factors that potentially determine the perceived “Encouragement” described in this paper. Finally, as the consumerization of IT spreads mainly among the younger generations, an interesting research opportunity would be to explore the introduction of digital learning earlier at school. Aware of the potential of such initiatives, some primary schools are already experimenting with tablet-mediated education. This trajectory could benefit educational institutions through the advantages, in terms of decision-making and learning processes, that are tied to the introduction of systems that support individuals in sharing [40], managing and exchanging information [41].

References

1. Avvisati, F., Hennessy, S., Kozma, R.B., Vincent-Lancrin, S.: Review of the Italian Strategy
for Digital Schools, OECD Education Working Papers, 90 OECD Publishing. http://dx.doi.
org/10.1787/5k487ntdbr44-en (2013)
2. Dejaeger, K., Goethals, F., Giangreco, A., Mola, L., Baesens, B.: Gaining insight into student
satisfaction using comprehensible data mining techniques. Eur. J. Oper. Res. 218(2), 548–562
(2012)
3. North-Samardzic, A., Braccini, A.M., Spagnoletti, P., Za, S.: Applying media synchronicity theory to distance learning in virtual worlds: a design science approach. Int. J. Innov. Learn. 15(3), 328–346 (2014)
4. Spagnoletti, P., Resca, A.: A design theory for IT supporting online communities. In:
Proceedings of the 45th Hawaii International Conference on System Sciences, pp. 4082–4091
(2012)
5. Sorrentino, M., De Marco, M.: Implementing e-government in hard times: when the past is
wildly at variance with the future. Inf. Polity 18(4), 331–342 (2013)
6. Mosconi, E.M., Silvestri, C., Poponi, S., Braccini, A.M.: Public policy innovation in distance and on-line learning: reflections on the Italian case. In: Spagnoletti, P. (ed.) Organizational Change and Information Systems—Working and Living Together in New Ways, pp. 381–389. Springer, Berlin (2013)
7. Ruggieri, A., Mosconi, E.M., Poponi, S., Braccini, A.M.: Strategies and policies to avoid digital divide: the Italian case in the European landscape. In: Mola, L., Pennarola, F., Za, S. (eds.) From Information to Smart Society—Environment, Politics and Economics. Springer (2014)
8. Brynjolfsson, E.: The productivity paradox of information technology. Commun. ACM 36
(12), 66–77 (1993)
9. Devaraj, S., Kohli, R.: Performance impacts of information technology: Is actual usage the
missing link? Manage. Sci. 49(3), 273–289 (2003)
10. Venkatesh, V., Goyal, S.: Expectation disconfirmation and technology adoption—polynomial
modeling and response surface analysis. MIS Q. 34(2), 281–303 (2010)
11. Venkatesh, V., Davis, F.D.: A theoretical extension of the technology acceptance model—4
longitudinal field studies. Manage. Sci., Informs. 46(2), 186–204 (2000)
12. Venkatesh, V.: Determinants of perceived ease of use, Integrating Control, Intrinsic
Motivation and Emotion into the TAM. Inf. Syst. Res. Informs. 11(4), 342–365 (2000)
13. Venkatesh, V., Morris, M.G., Davis, G.B., Davis, F.D.: User acceptance of information
technology—toward a unified view. MIS Q. 27(3), 425–478 (2003)
14. Venkatesh, V., Bala, H.: Technology acceptance model 3 and a research agenda on
interventions. Decis. Sci. 39(2), (2008)
15. Venkatesh, V., Thong, J.Y.L., Xu, X.: Consumer acceptance and use of information
technology—extending the unified theory of acceptance and use of technology. MIS Q. 36(1),
157–178 (2012)
16. Davis, F.D.: Perceived usefulness, perceived ease of use, and user acceptance of information
technology. MIS Q. 13(3), 319–340 (1989)
17. Adams, D.A., Nelson, R.R., Todd, P.A.: Perceived usefulness, ease of use, and usage of
information technology: a replication. MIS Q. 16, 227–247 (1992)
18. Hendrickson, A.R., Massey, P.D., Cronan, T.P.: On the test-retest reliability of perceived
usefulness and perceived ease of use scales. MIS Q. 17, 227–230 (1993)
19. Segars, A.H., Grover, V.: Re-examining perceived ease of use and usefulness: a confirmatory
factor analysis. MIS Q. 17, 517–525 (1993)
20. Subramanian, G.H.: A replication of perceived usefulness and perceived ease of use
measurement. Decis. Sci. 25(5/6), 863–873 (1994)
21. Szajna, B.: Software evaluation and choice: predictive evaluation of the Technology
Acceptance Instrument. MIS Q. 18(3), 319–324 (1994)
22. Brown, S.A., Venkatesh, V.: Model of adoption of technology in households: a baseline model
test and extension incorporating household life cycle. MIS Q. 29(3), 399–426 (2005)
23. Davis, F.D., Bagozzi, R.P., Warshaw, P.R.: Extrinsic and intrinsic motivation to use
computers in the workplace. J. Appl. Soc. Psychol. 22, 1111–1132 (1992)
24. Venkatesh, V.: Creation of favorable user perceptions- exploring the role of intrinsic
motivation. MIS Q. 23(2), (1999)
25. Nicholson, J., Nicholson, D., Valacich, J.S.: Examining the effects of technology attributes on
learning—A contingency perspective. J. Inf. Technol. Educ. 7 (2008)
26. Hu, P.J., Hui, W.: Examining the role of learning engagement in technology-mediated learning
and its effects on learning effectiveness and satisfaction. Decis. Support Syst. 53, 782–792
(2012)
27. Bretz, R.D., Judge, T.A.: Realistic job previews: a test of the adverse self-selection hypothesis.
J. Appl. Psychol. 83, 330–337 (1998)
28. Taylor, S., Todd, P.A.: Understanding information technology usage: a test of competing
models. Inf. Syst. Res. 6(2), 144–176 (1995)
29. Bergeron, F., Rivard, S., Serre, L.: Investigating the support role of the information center.
MIS Q. 14(3), 247–260 (1990)
30. Cragg, P., King, M.: Small-firm computing: motivators and inhibitors. MIS Q. 17(1), 47–60
(1993)
31. Harrison, D.A., Mykytyn, P.P., Riemenschneider, C.K.: Executive decisions about adoption of
information technology in small business: theory and empirical tests. Inf. Syst. Res. 8(2), 171–
195 (1997)
32. Venkatesh, V., Davis, F.D.: A model of the antecedents of perceived ease of use: development
and test. Decis. Sci. 27, 451–481 (1996)
33. Karahanna, E., Straub, D.W., Chervany, N.L.: Information technology adoption across time: a
cross-sectional comparison of pre-adoption and post- adoption beliefs. MIS Q. 23, 183–213
(1999)
34. Bhattacherjee, A., Premkumar, G.: Understanding changes in belief and attitude toward information technology usage: a theoretical model and longitudinal test. MIS Q. 28, 229–254 (2004)
35. Bhattacherjee, A.: Understanding information systems continuance: an
expectation-confirmation model. MIS Q. 25, 351–370 (2001)
36. Rai, A., Lang, S., Welker, R.: Assessing the validity of IS success models: an empirical test
and theoretical analysis. Inf. Syst. Res. 13, 50–69 (2002)
37. Delone, W.H., McLean, E.R.: The DeLone and McLean model of information systems
success: a ten year update. J. Manage. Inf. Syst. 19(4), 60–95 (2003)
38. Alavi, M., Leidner, D.E.: Research commentary: technology-mediated learning—a call for
greater depth and breadth of research. Inf. Syst. Res. 12(1), 1–10 (2001)
39. Casalino, N., Buonocore, F., Rossignoli, C., Ricciardi, F.: Transparency, openness and
knowledge sharing for rebuilding and strengthening government institutions. In: IASTED
Multiconferences-Proceedings of the IASTED International Conference on Web-Based
Education, WBE 2013, pp. 866–871 (2013)
40. Casalino, N., Buonocore, F., Rossignoli, C., Ricciardi, F.: Transparency, openness and
knowledge sharing for rebuilding and strengthening government institutions. In: IASTED
Multiconferences-Proceedings of the IASTED International Conference on Web-Based
Education, WBE 2013, pp. 866–871 (2013)
41. Zardini, A., Mola, L., vom Brocke, J., Rossignoli, C.: The shadow of ECM: the hidden side of
decision processes. In: Respício, A., Adam, F., Phillips-Wren, G., Teixeira, C., Telhada,
J. (eds.) Bridging the Socio-technical Gap in Decision Support Systems, 212, pp. 3–12, IOS
Press, Amsterdam, Holland (2010)
How Do Academic Spin-off Companies
Generate and Disseminate Useful Market
Information Within Their Organizational
Boundaries?

Tindara Abbate and Fabrizio Cesaroni

Abstract From a market orientation perspective, this study examines how small high-tech firms generate, disseminate and integrate information on customers' needs, competitors' activities and market forces within their organizational boundaries in order to define and implement effective strategies. We perform an explorative qualitative analysis based on Italian and Spanish academic spin-off firms. We find that the activities of generation, dissemination and integration of market information are crucial to developing technological innovations and obtaining positive performance, even though these activities require the definition and development of a sophisticated marketing information system, as well as the availability of economic resources and specialized competences, which are often limited in these small firms.

Keywords Academic spin-off companies · Market orientation · Information generation · Information dissemination · Information integration

1 Introduction

A commonly accepted finding about small high-tech firms is that, relative to larger firms and to firms operating in non-high-tech environments, they often show a lower rate of growth and a less sustainable competitive advantage [1, 2]. Among alternative explanations of this evidence, prior research has pointed to the


difficulties that such firms face in designing and implementing the appropriate policies, routines and organizational arrangements needed to convert technological knowledge into successful products and, therefore, in the commercial exploitation of technological competences [3]. Indeed, in high-tech environments, firms often erroneously believe in the superiority of their technological solutions, assuming that the quality of their high-tech products is sufficient to lead customers to prefer and acquire their products over those of their competitors.
In this sense, small high-tech firms suffer from different types of “myopia”, which lead them to suppose that: (a) their technologies are radically new and do not face any competition; (b) the technologies commercialized by competitors do not represent a great threat; and (c) competitors operate in different sectors and their strategies do not have any relevant impact on their businesses [3].
By contrast, in order to convert a potential technological superiority into a competitive advantage, high-tech firms should be able to integrate their technological capabilities with the adequate marketing capabilities needed to understand customers' needs, to assess competitors' assets and to carry out effective strategic actions. In other words, they should adopt and implement the necessary market orientation [4–6]. The definition and implementation of these monitoring and analytical activities, however, requires the development of specialized resources, competences and capabilities focused on the generation of relevant market information (that is, information related to key customers, competitors and other market forces), its diffusion within the firm's boundaries, and its integration among the firm's functions [6].
Against this background, the aim of this study is to assess whether and how small high-tech firms engage in the articulated activities of information generation, diffusion and integration, and which factors make the definition and implementation of such activities difficult. To perform this analysis, we focus on the specific category of small high-tech firms represented by academic spin-off firms. As a matter of fact, precisely because academic spin-off firms originate from research conducted within universities, many entrepreneurs are more focused on the technological/technical aspects of their innovations than on their commercial aspects [7, 8]. Gathering and analyzing the necessary market information are thus particularly critical tasks for these firms, precisely because they operate in high-tech markets characterized by uncertain environmental conditions and therefore need to excel not only at generating new innovations, but also at commercializing such innovative solutions.
In order to address our research question, we performed in-depth interviews with academic spin-off managers. We thus assess both how information related to customers, competitors and other market forces is collected, examined, integrated, disseminated and employed by these firms to make marketing decisions, and which obstacles impede a wider implementation of these activities among this category of firms.
2 Theoretical Background

Academic spin-off firms represent a concrete answer to the desire to exploit and
transfer technological knowledge, grounded in scientific discoveries and explora-
tions, which can consequently be incorporated in new products and services [9].
More specifically, an academic spin-off is a new company “that is formed by a
faculty, staff member, or doctoral student who left the university or research
organization to found the company or start the company while still affiliated with
the university, and/or a core technology (or idea) that is transferred from the parent
organization” [10]. Several contributions in the managerial literature have examined different dimensions of academic spin-off companies: the abilities, competences, motivational and personal characteristics of academic entrepreneurs and/or team formation, underlining mainly their high propensity for independence and their low need for affiliation [11–14]; the real impact of universities' policies and procedures on commercialization activities [15]; the external factors (e.g., knowledge infrastructures, national legislation, venture capital, etc.) that impact both the university's and the spin-off's activities [16]; and, finally, the growth and business performance of academic spin-offs [17, 18]. With regard to the latter, scholars have highlighted that academic spin-offs, similarly to other high-technology companies, exhibit a low rate of growth in terms of sales, cash flows and employees, as well as a lower likelihood of obtaining profits [1, 2].
Among the reasons for their problems of growth and competitiveness, two causes have been identified: (a) the emphasis on the technological aspects of the innovations generated within universities, and (b) the lack of a managerial approach to generating, analysing and disseminating the necessary information related to external market forces, beyond the general confusion about the role of marketing in their organizations. As a consequence, academic spin-off companies have several difficulties in adopting, developing and implementing the effective marketing strategies, policies and tools that are necessary to identify profitable market segments, to commercialize innovative high-tech products/services, to properly position their value propositions, to outperform their competitors and, finally, to maximize their rate of success [3]. Overall, previous studies have thus revealed that academic spin-off companies need to go beyond their technological innovations and have to develop a market orientation, which concretely implies “gathering, sharing and using information about market (customers, competitors, collaborators, technology, trends, etc.) to make decisions that lead to the creation of superior customer value” [3].
The concept of market orientation has received growing interest from scholars, who have extensively debated its theoretical and practical implications [6, 19, 20]. Although the streams of research in this field are diverse [6, 20], in this study Kohli and Jaworski's conceptualization of market orientation is adopted, defined as the
defined as the “organization-wide generation of market intelligence pertaining to
customers, competitors, and forces affecting them, internal dissemination of the
intelligence, and reactive as well as proactive responsiveness to the intelligence”
[6, p. 131]. Therefore, concentrating on the marketing concept as a fundamental organizing principle of the firm [21], they provided a useful interpretation of market orientation in terms of four main dimensions [6]:
(a) the generation of a wide array of market intelligence concerning the expressed
and latent needs of customers, the strategies and capabilities adopted by
competitors (i.e., positional advantages, industry structure), and other relevant
environmental factors (i.e., emerging technologies outside and inside industry,
etc.) for the purpose of supporting firms’ strategic/tactical/operational
decisions;
(b) the dissemination of market intelligence across people, divisions and depart-
ments through formal and informal communication processes (i.e., meetings,
databases, newsletters, etc.);
(c) the integration of intelligence within the organization's boundaries, so as to obtain suitable market knowledge assets; and, finally,
(d) the responsiveness to this market intelligence in an increasingly unpredictable environment.
In turn, through the continuous acquisition of information regarding customers,
competitors and market forces and the sharing of this knowledge throughout the
organizational boundaries, market-oriented firms are able to identify and develop
abilities and capabilities that are necessary for delivering superior customer value
and obtaining long-term performance [21, 22].
With regard to this latter aspect, a significant body of research has examined the relationship between market orientation and firms' performance [6, 23–25]. Several empirical studies have highlighted that market orientation positively influences business performance: (a) financial performance, considered in terms of profit-
ability, cash flow, sales growth; (b) market performance, defined in terms of market
share, new product performance, customer satisfaction and customer loyalty; (c) the
firm’s ability to innovate, defined in terms of new product success, patents, gen-
eration of new knowledge on target technology; and (d) organizational learning,
related to an interesting culture of experimentation and an emphasis on constantly
improving the firm’s processes and systems [22]. However, this positive relation-
ship between market orientation and firms’ performance could be moderated by
several external forces like market uncertainty, technological turbulence and
competitive intensity [6].

3 Research Design

To achieve the aims of this research, we performed a qualitative explorative study, which we deem the most appropriate method for our purposes and the one that, relative to alternative methods, allows a more precise and detailed examination of the subject matter [26]. More specifically, we adopted the multiple case study approach, which is suggested to increase the methodological rigor of the
study [27, p. 29]. Furthermore, the evidence from this approach “is often considered more compelling” [26, p. 45].
The selected cases were four academic spin-off companies that operate in different sectors (e.g., ICT services and materials engineering) and are located in two different countries (Italy and Spain). Although they were created between 2002 and 2006, the selected spin-offs are still small firms, with 5–15 employees on average. Companies founded after 2006 were not included in this study, because some relevant aspects characterizing market orientation and its main dimensions cannot be investigated in such young companies.
The data were gathered with multiple methods by utilizing a triangulated research strategy, which implies the use of different types of materials, methods and investigators in the same study [28]. The primary data source consists of in-depth face-to-face interviews with entrepreneurs and/or marketing managers of the four firms, exploring specific aspects related to the market orientation construct, such as the motivations that drove the implementation (or non-implementation) of a market orientation, the difficulties linked to the definition and development of its main dimensions and, finally, the relationship between market orientation and business performance.
We performed semi-structured interviews with entrepreneurs and marketing managers because, in these companies, they are the key informants and respondents, having detailed information about the companies' operations and conditions [29]. Each manager received an email explaining the general purpose of the study. We conducted four interviews from March to April 2014; each interview lasted approximately 2 h. Interviews were conducted following the traditional methodological prescriptions on data collection through personal interviews [30].
To complement the primary data, we collected information from secondary sources, mainly Internet documents, such as publicly available information from company web sites, reports on the firms' business activities, case histories, observations, official documents and published interviews.

4 Results

Our analysis highlights critical aspects of how academic spin-off companies adopt a market orientation and of how they generate, disseminate and integrate information on customers' needs, competitors' strategies and the competitive environment within their organizations for use in marketing decisions.
Firstly, some of the academic entrepreneurs interviewed recognize the importance of defining, conceptualizing and operationalizing marketing activities oriented to understanding the principal characteristics of their customers, their main requirements and their preferences. In this respect, a marketing manager underlines that:
“although in many high-tech environments needs, requirements and expectation of
customers evolve more rapidly over time, the necessity to analyse and to understand
them is critical for developing and commercializing products/services with the right
set of features that meet and satisfy customer needs in a fascinating way. We can
generate novel solutions, characterized by high-quality and excellence in technol-
ogies, but the customers represent ineluctable premise and decree unscrupulously
our destiny in terms of survival and success in the competitive market place”.
From this perspective, academic spin-off companies are trying to develop an operational focus on several marketing activities aimed at gathering and utilizing information about customers' expressed and latent needs. Furthermore, they realize the importance of discovering, understanding and pursuing market opportunities that are not known to competitors.
On the other hand, they increasingly understand that, nowadays, firms with a strong technological base have to effectively incorporate customer knowledge into their product development processes, since such an effort can be useful for the creation of innovations and for the commercialization of the outcomes of innovative processes into the kind of successful products/services that meet consumer needs and expectations and deliver value.
In this way, the customer’s role changes by moving from passive recipient of
information flow concerning products and services developed by companies, to
competent and suitable knowledge source that firms can stimulate and involve in
their innovation processes. Therefore, high-tech firms have to conceive and realize
new suitable opportunities for assuming continuous and systematic information
pertaining the skills, competencies and capabilities of their customers because this
appears a key condition to achieve marketplace success. In fact, the participation
and the collaboration of customers can be a strategic way of stimulating creativity
and innovation, and designing synergic outputs (derived from a gradual and
articulated process of interactions among the involved parties). In turn, customers’
involvement in innovative processes may realise several benefits and, primarily,
may allow firms to discover the best innovative solutions to different problems,
which are often too easily expressed. In this respect, an academic entrepreneur
highpoints that “the type of clients that needs frequent contacts for developing
solutions to daily problems and configuring prototypes fostering our creativity has
stimulated advantageous forms of participation in our internal R&D processes and
suitable collaborations”. Also, he stresses that “some of our products are the result
of intensive processes of exchange and collaboration between firm and customer,
which is involved from idea generation and product design to test of prototypes,
permitting to eliminate defects and reduce the risk to failure”.
In these circumstances, customers can be involved in different phases of the innovation process. Organizations may thus choose to work with these parties in order to anticipate emerging market needs (which usually take a long time before the mass marketplace realizes their importance), to personalize products for their needs and, consequently, to cope with market uncertainty. In turn, academic spin-off companies can acquire knowledge sources at low cost and accelerate the time-to-market of their products/technologies in turbulent and highly competitive environments. In this way, they gain easier access to the social dimension of customer knowledge and gradually extend the reach and scope of the customers they interact with, thus enhancing innovation and business performance.
Secondly, some academic spin-offs are beginning to regularly gather, analyse and interpret information on the market strategies adopted by, and the main strengths and weaknesses of, the key players that offer similar products, or products with similar functionalities, intended to capture the same market demand.
One of the entrepreneurs highlights that “we are focused on the competitor’s
features, their mechanisms and significant tactical activities, their innovation pro-
cesses and their innovation performance in terms of new patents, licenses, tech-
nological platforms/underpinnings. This is because they impact and change the
rules and the logics of the game”. In addition, the same interviewee stresses that:
“although size and dimensions of our companies do not allow us to assuming a
significant role and influencing really the external competitive environment, we
have only a strategic possibility consisting in the identification of market spaces not
explored and engaged by our competitors, by designing focused new propositions
and obtaining positive business performance in long-term”.
Recognizing the relevance of these questions, spin-off firms make efforts oriented to the gathering, analysis and dissemination of competitor information regarding the following aspects: the characteristics of competitors' proposals, focusing on the applications of the technological bases most relevant for upcoming innovations; the availability of resources and competencies that are valuable and difficult for competitors to imitate, and which therefore explain the advantage positions obtained; cost structures; the capability to continually develop technological innovations through which to maintain leadership over time; and patent portfolios that increase contractual power.
Furthermore, an interesting aspect for this type of company is the often underestimated opportunity of gathering, analysing and sharing information about indirect rivals and key potential competitors, who often come from outside existing industry boundaries, so that competition becomes concentrated on product classes. The lack of focus on their goals, resources and capabilities reduces the possibility of designing and elaborating all the modifications that the high-tech environment requires, that is, of moving from an attitude that is merely responsive to markets' evolutionary phenomena to an anticipatory attitude that requires efforts to forecast events in the competitive environment.
Finally, almost all of the academic spin-off companies we interviewed strongly underlined that the activities related to the main dimensions of market orientation require the availability of a set of suitable resources. More specifically, these resources are: human resources with specialized competences and capabilities; economic and financial resources to support investments (e.g., the planning and development of marketing information systems); and technical/technological resources for the systematic generation, integration and elaboration of customer intelligence and competitor intelligence within and across the people and departments inside the organization's boundaries.
Overall, the interviewees highlight that these resources should be oriented to the planning and development of sophisticated marketing information systems that permit the continuous and systematic gathering, analysis, interpretation and use of market information in order to make strategic decisions; to manage interactions and long-term relationships with customers, so as to enhance the possibility of offering them propositions that meet their current and future needs; to develop high-quality technological innovations; and, finally, to create and support R&D-marketing interactions. However, while highlighting the need to develop marketing information systems for the purpose of identifying, measuring and forecasting marketing opportunities, besides analysing market segments, these companies underline that the availability of such key resources is very limited. They also stress that the effort required to acquire them might exceed their possibilities.
On the other hand, technological resources are perceived to be relevant in organizational environments in which advanced technologies—corporate intranets and extranets, collaborative platforms—foster and support people in generating, sharing and integrating knowledge on specific fields, activating efficient mechanisms to transfer it. However, a large number of the interviewed spin-off firms declare that the scarcity of these resources raises several questions about the opportunity of sustaining investments aimed at increasing their information assets (i.e., data and information on customers, competitors and other market forces) rather than using the same resources for other activities considered more profitable in the short term. Crucially, on this aspect, one of the respondents highlights that “we often are obligated to choose
projects of creation and development of technological innovation (i.e., incremental
innovation), considered more coherent with our goals and more profitable espe-
cially for spin-off, rejecting other projects related to marketing activities, although
certainly needed to identify desirable customers and then keep them satisfied.
This is because our economic and financial resources are limited and [the lack of
resources] influences strongly our decisions, our directions of development and our
main activities”.
Under these conditions, these companies are strongly focused on an almost inevitable choice: to define research-intensive projects and to explore potential applications that allow them to integrate, improve and complete their technical knowledge (either with or without formal protection mechanisms). In summary, they continue to choose “the natural way for university spin-off firms”.

5 Conclusion

Our analysis shows that the generation, dissemination and integration of information on customers' needs and requirements, competitors' strategies and actions, and other market forces are relevant activities for academic spin-off companies. These companies should recognize market orientation as a key driver of market information processing activity and incorporate it within their innovation processes. Thus, academic spin-off firms should acquire, collect and disseminate information, and respond to the information obtained, essentially from customers, competitors and other channels. In addition, they should consider the opportunities offered by the participation and collaboration of customers, as a source of knowledge and competencies, in their innovation processes.
However, market information management constitutes a relevant challenge for academic spin-off companies, which have a scarce availability of economic and human resources to concretely commit to these useful activities. In fact, the definition and implementation of these activities raise several problems related essentially to people, methods and procedures, which have to be organized and managed in an efficient manner. In turn, our study contributes both to the academic spin-off literature, by focusing on the principal reasons for these firms' low growth propensity, and to managerial practice, by showing the benefits that these high-tech companies might obtain through the adoption of a market-oriented perspective.
The conclusions of this research need to be considered in the light of its limitations, mainly represented by the small number of case studies investigated here. Therefore, the results may not generalize across the academic spin-off population. Future research should design a quantitative analysis to strengthen the results discussed here and also try to provide evidence of the relationship between market orientation and performance. Finally, investigating the concrete implementation of marketing information systems and their main opportunities/difficulties remains an important direction for future research.

References

1. Zhang, J.: The performance of university spin-offs: an exploratory analysis using venture
capital data. J. Technol. Transfer 34, 255–285 (2009)
2. Ortìn-Angel, P., Vendrell-Herreo, F.: University spin-offs vs. other NTBFs: total factor
productivity differences at outset and evolution. Technovation 34(2), 101–112 (2013)
3. Mohr, J., Sengupta, S., Slater, S.: Marketing of high-technology products and innovations. 3rd
ed. Pearson Education, Inc., Upper Saddle River, New Jersey (2010)
4. Dutta, S., Narasimhan, O., Rajiv, S.: Success in high-technology markets: is marketing
capability critical? Marketing Science 18(4), 547–568 (1999)
5. Baker, W.E., Sinkula, J.M.: The complementary effects of market orientation and
entrepreneurial orientation on profitability in small businesses. J. Small Bus. Manage. 47(4),
443–464 (2009)
6. Kohli, A.K., Jaworski, B.J.: Market orientation: the construct, research proposition, and
managerial implications. J. Mark. 54(2), 1–18 (1990)
7. Lockett, A., Wright, M., Franklin, S.: Technology transfer and universities’ spinout strategies.
Small Bus. Econ. 20, 185–200 (2003)
8. Wright, M., Birley, S., Mosey, S.: Entrepreneurship and university technology transfer.
J. Technol. Transfer 29(3–4), 235–246 (2004)
9. Clarysse, B. Wright M., van de Velde, E.: Entrepreneurial origin, technological knowledge,
and the growth of spin-off companies. J. Manage. Stud. 48(6), 1420–1442 (2011)
10. Steffensen, M., Rogers, E.M., Speakman, K.: Spin-offs from Research Centers at a Research
University. J. Bus. Ventur. 15(1), 93–111 (1999)
11. Roberts, E.B., Malone, D.E.: Policies and structures for spinning off new companies from
Research and Development Organizations. R&D Manage. 26(1), 17–48 (1996)
12. Franklin, S.J., Wright, M., Lockett, A.: Academic and surrogate entrepreneurs in university
spin-out companies. J. Technol. Transfer 26(1–2), 127–141 (2001)
13. Clarysse, B., Moray, N.: A process study of entrepreneurial team formation: the case of a
research based spin-off. J. Bus. Ventur. 19(1), 55–79 (2004)
14. O’Shea, R., Allen, T.J., Chevalier, A., Roche, F.: Entrepreneurial orientation, technology
transfer and spinoff performance of U.S. Universities. Res. Policy 34(7), 994–1009 (2005)
15. Siegel, D.S., Waldman, D.A., Atwater, L.E., Link, A.N.: Toward a model of the effective
transfer of scientific knowledge from academicians to practitioners: qualitative evidence from
the commercialization of university technologies. J. Eng. Tech. Manage. 21(1/2), 115–142
(2004)
16. Wright, M., Clarysse, B., Lockett, A., Binks, M.: Venture capital and university spin-outs.
Res. Policy 35, 481–501 (2006)
17. Mustar, P., Renault, M., Colombo, M.G., Piva, E., Fontes, M., Lockett, A., Wright, M.,
Clarysse, B., Moray, N.: Conceptualising the heterogeneity of research-based spin-offs: a
multi-dimensional taxonomy. Res. Policy 35(2), 289–308 (2006)
18. Ensley, M.D., Hmieleski, K.M.: A comparative study of new venture top management team
composition, dynamics and performance between university-based and independent start-ups.
Res. Policy 34, 1091–1105 (2005)
19. Shapiro, B.P.: What the hell is market oriented? Harvard Bus. Rev. 66, 119–125 (1988)
20. Narver, J.C., Slater, S.F.: The effect of a market orientation on business profitability. J. Mark.
54(4), 20–35 (1990)
21. Day, G.S.: The Capabilities of market-driven organization. J. Mark. 58(4), 37–52 (1994)
22. Kumar, V., E. Jones, Venkatesan, R., Leone, R. P.: Is market orientation a source of
sustainable competitive advantage or simply the cost of competing? J. Mark. 75(1), 16–30
(2011)
23. Jaworski, B.J., Kohli, A.K: Market orientation: antecedents and consequences. J. Mark. 57(3),
53–70 (1993)
24. Kirca, A.H., Jayachandran, S., Bearden, W.O.: Market orientation: a meta-analytic review and
assessment of its antecedent and impact on performance. J. Mark. 69, 24–41 (2005)
25. Ellis, P.: Market orientation and performance: a meta-analysis and cross-national comparisons.
J. Manage. Stud. 43, 1089–1107 (2006)
26. Yin, R.K.: Case study research: design and methods. Sage, Thousand Oaks, CA (2003)
27. Miles, M.B., Huberman, A.M.: Qualitative data analysis: an expanded sourcebook. Sage
Publications, Thousand Oaks, CA (1994)
28. Denzin, N.K.: The research act: a theoretical introduction to sociological methods.
McGraw-Hill, New York (1978)
29. Deshpande, R., Farley, J.U.: Organizational culture, market orientation, innovativeness, and
firm performance: an international research odyssey. Int. J. Res. Mark. 21(1), 3–22 (2004)
30. Lee, T.: Using qualitative methods in organizational research. Sage, Thousand Oaks, CA
(1999)
A Two Step Procedure for Integrated
Inventory—Supply Chain Management
Information Systems

Daniela Ambrosino and Anna Sciomachen

Abstract In this work we present a two step procedure aimed at integrating the inventory and distribution functions for balancing stock levels in distribution systems. In particular, we analyse the flow of products within a multi-echelon, multi-channel distribution network with the aim of minimizing logistic costs. The key issue of the present paper is that cost minimization is pursued while granting a certain customer service level, here expressed in terms of the percentage of fulfilled demand. Moreover, the paper focuses on the gain that a company can derive from balanced stock levels in the whole network, that is, the gain derived from using integrated network management models and integrated inventory—supply chain management information systems. This integration is possible only if inventory information related to the whole network is available. Results of computational experiments aimed at comparing different inventory management policies are presented.

Keywords Integrated inventory management · Distribution systems · Customer satisfaction · Role of information sharing in the integration of functions

1 Introduction

A few years ago, facing an increasingly competitive and aggressive market, companies developed the logistic function, devoted to managing the flows of information and goods in the logistic system, in order to improve the customer service level and control logistic costs. Nowadays, these companies have to reorganise their

logistic and supply chain management systems in order to meet changes and
flexibility, and, above all, guarantee a high level of service as a key factor for being
competitive. Information sharing in the whole chain is a key factor for meeting
flexibility. Moreover, the introduction of the Electronic Commerce (EC) has
induced changes and problems arising in distribution channels, that are completely
new and impact on the increasing customer service expectations [4].
A review of supply chain management operations in a multi-channel distribution system with an EC channel is presented in [1], where different managerial planning tasks for the activities involved at each level of the supply chain are reported, together with the corresponding quantitative models; some strategies for inventory management are also described. In [5], a survey of the supply chain management literature focusing on the innovative measures of Quick Response (QR) is presented.
High-quality services and cost minimization are imperative goals for competitive supply chains [6]. All over the world, customers pay more and more attention to the intangible value of products. Moreover, customers demand that distribution costs not negatively impact the price of products.
The distribution activities connected to the customer service level and to logistic costs include, among others, order management, inventory and storage management, material handling, and the transportation of goods. Related to these logistic activities, companies face logistic costs involving transportation costs, warehousing and inventory costs, and stock-out costs related to the impossibility of completely satisfying demand.
Companies should redesign the optimal allocation of inventories in the distribution system so as to avoid an uncontrolled growth of costs and the presence of overstocks in the warehouses, while maintaining enough inventories to satisfy customers' demand; in particular, as stressed in [5], the fundamental task is to balance the stock levels at the top and bottom echelons. In [16] and, more recently, in [14], control rules for minimizing unbalanced stock levels are proposed. In a recent paper [17], three different inventory strategies for a one-manufacturer, one-retailer supply chain with both a traditional channel and an e-channel are compared.
Motivated by the above considerations, in this work we devote our attention to the integration of the inventory and distribution management functions in a multi-echelon, multi-channel distribution system, with the main aim of balancing stock levels in the whole network. Some real multi-echelon distribution systems are described in [2].
Articles dealing with similar problems generally concern simple networks (i.e., tree systems with two levels or n-echelon serial systems) in which demand points are usually at the last level of the network. Inventories are often included only at the facilities operating at the lower level of the network, that is, at the peripheral depots. Many papers dealing with integrated inventory management take the minimization of distribution costs as the objective function and the order points of each facility in the network as decision variables.
More precisely, we analyze the management of inventories of final goods in a distribution system where products are available through different supply channels: a traditional channel, in which products are distributed through depots, and a direct channel. We describe and compare different inventory management policies in order to analyse the integration between the inventory and distribution management functions in the network. We present a two phase procedure aimed at integrating inventory and distribution functions in the same framework thanks to information sharing.
A novel contribution of this paper is its focus on customer value, while many supply chain management systems focus on topics like economic order quantity and goods issues. To the authors' knowledge, only a few papers address the maximization of the customer service level. For instance, in [13] the authors analyze the effect of target service levels in supply chains, while in [16] the authors aim at maintaining a specified customer service level, expressed (as in our case) as a percentage of fulfilled demand, by looking for an optimal allocation of goods and by defining rationing policies. In a recent work [15], a simulation-optimization approach for solving a 2-echelon inventory problem with service level constraints is proposed.
Moreover, the present paper focuses on the gain that a company can derive from balanced stock levels in the whole network, that is, the gain derived from using integrated network management models and integrated inventory—supply chain management information systems. This integration is possible only if inventory information related to the whole network is available. Some analyses of private and global information in inventory management are reported in [7, 8]. In [10], the value of real-time information in inventory management is analysed. Finally, [12] stresses the importance of coordination and information sharing in the systems for control policies and integrated models.
In a logistic system where global information is available, the concept of inventory generally refers to the echelon stock (that is, the inventories of the global system), while if only local information is available a different concept of inventory is commonly used: the inventory position or installation stock (i.e., the inventories at each stock point). The echelon stock concept was proposed in [9]; this work is generally considered the first attempt to introduce integration of inventory management in distribution networks.
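To make the distinction concrete, here is a small illustrative sketch (network shape and quantities are assumptions, not taken from the paper): the installation stock is what a single stock point holds, while the echelon stock of a depot also counts everything held downstream of it.

```python
# Downstream structure and on-hand quantities are illustrative assumptions.
downstream = {"D1": ["P1", "P2"], "D2": ["P3"], "P1": [], "P2": [], "P3": []}
installation = {"D1": 500, "D2": 400, "P1": 120, "P2": 90, "P3": 150}

def echelon_stock(node: str) -> int:
    """Installation stock of `node` plus the echelon stock of all its successors."""
    return installation[node] + sum(echelon_stock(c) for c in downstream[node])

for n in ("D1", "D2"):
    print(n, "installation =", installation[n], "echelon =", echelon_stock(n))
# D1 installation = 500 echelon = 710; D2 installation = 400 echelon = 550
```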
The remainder of the paper is organized as follows. In Sect. 2 we describe in detail the problem under investigation, focusing on the flow of goods in the multi-echelon, multi-channel distribution system we deal with; the main characteristics of the inventory policy used are also described. The proposed two phase procedure aimed at integrating the inventory and distribution functions is described in detail in Sect. 3, together with the mixed integer linear programming (MILP) model used for determining a starting solution for the problem under investigation. Preliminary results for a multi-channel distribution network are given in Sect. 4. Finally, conclusions and outlines for future work are given in Sect. 5.
2 Problem Definition: A Multi-echelon, Multi-channel Distribution System

The supply chain network under investigation is a multi-echelon, multi-channel distribution system, in which there is a flow of final products from the plants (where they are produced) to the demand points, generally called customers. The network is characterized by the presence of central depots (D), peripheral depots (P) and customers, which in turn are split into clients, that is wholesalers (C), and big clients, that is distributors or retailers (B). In the following, plants are not included in the analysis, since the central depots play the role of supply points of the network.
The following channels for supplying goods are considered:
• a traditional channel, in which peripheral depots (supplied by the central ones) serve customers;
• a direct channel, in which big clients, characterized by large demands, are served directly by the central depots.
As an example of such a logistic network, Fig. 1 reports a simple distribution system with 2 central depots (D), 3 peripheral depots (P), 4 big clients (B) and a set (C) of other customers. Note that links (arrows) in the network represent the flow of goods from depots to clients and from central depots to peripheral ones. Such links are usually predefined but, as we will see in the next section, we consider the possibility of changing the given flow assignment in order to obtain balanced stock levels.
We assume that balanced stock levels imply the same inventory level at each
peripheral depot, for each product, in terms of number of days of stock, while a
higher stock is maintained at the central depots.
Referring to Fig. 1, the central depots (D1 and D2) directly serve the peripheral depots (P1, P2 and P3) and the big clients (B1, …, B4). Inventories are stocked at both the central and the peripheral depots. The assignment of the peripheral depots and big clients to the central depots is known, as is the assignment of the clients to the peripheral depots.

Fig. 1 The distribution network architecture under investigation



With the above distribution system in mind, assuming a time horizon T split into t homogeneous periods (T = {1, 2, …, t}), and given the customers' demand for each time period t, the problem is to determine the optimal flow of goods in the network and the inventory levels to maintain at each central and peripheral depot for each time period t ∈ T; this implies deciding the order emission time and the quantity that each depot has to order. The capacity of the depots and the customer service level constraints have to be satisfied.
The objective is the minimization of ordering, inventory, stock out and transportation costs.
We assume that the customer service level, which represents an important parameter for checking the performance of the distribution system, is expressed as the percentage of fulfilled demand.
Moreover, we assume a centralized control system based on global information. The centralized control allows changes in the inventory policy by modifying the flows of goods in the network in order to avoid stock out.
Our inventory policy is based on a periodic (daily) review policy in which goods are ordered when inventories fall below a given level, the so-called order point; the quantity to order is defined so as to restore inventories while minimising the logistic costs, and depends on the existing stock in the whole system and, consequently, on the inventory strategy used. Some stock controls are used for finding the best inventory strategy among a base stock policy, a rationing strategy as proposed in [11] and a basic stock policy modification as suggested in [3].
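The following Python fragment is a minimal sketch of such a daily review (the parameter values and the order-up-to rule are illustrative assumptions, not the policy actually optimized by the model of Sect. 3):

```python
# Illustrative daily periodic-review logic with an order point.
# order_point and order_up_to are assumed parameters, not values from the paper.
def daily_review(inventory_position: float,
                 order_point: float,
                 order_up_to: float) -> float:
    """Return the quantity to order at this review (0 if above the order point)."""
    if inventory_position < order_point:
        # Restore inventories towards the target level.
        return order_up_to - inventory_position
    return 0.0

# Example: a position of 40 is below the order point of 50, so 60 units are ordered.
print(daily_review(inventory_position=40, order_point=50, order_up_to=100))
```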

3 The Proposed Two-Phase Procedure

The proposed two-phase algorithm for solving the problem described above is now presented.
In the first phase, taking into account the logistic network under consideration and the existing assignment of the peripheral depots (P) and big clients (B) to the central depots (D), we decompose the problem into |D| sub-problems; we thus define the optimal flows and inventory policy by solving a Mixed Integer Linear Programming model for each central depot of the network and its sub-network. In this phase the amount of available information is considered in the definition of the echelon stock level and echelon inventory position at the central depots.
In the second phase, denoted the "integration" phase, we determine the "current stock situation" of the whole network, thus identifying the best transfer policy for managing the flow of goods and guaranteeing the highest possible customer service level. Note that in this phase an information system able to provide all central depots of the network with real-time information on the exact stock and inventory position of the peripheral depots is a crucial element.
After checking whether the inventory and distribution policies obtained by solving the |D| MILP models are adequate with respect to the overall current stock situation, different instruments for managing the flows of goods in the network and maintaining balanced stock levels in all depots are used. In particular, first the flows defined by solving the |D| MILP models are used (base stock policy); otherwise, the current assignment of peripheral depots (P) and big clients (B) to central depots (D) can be revised and, finally, different stock policies (i.e. the basic stock policy modification and the rationing strategy) can be used.
Let us describe in more detail the two phases of the proposed solution approach. Note that in the following we will refer to a representative product; however, the model and the other steps of the procedure can be extended to the multi-product case (e.g. by defining different stock levels for each product, and so on).

3.1 Phase 1: Definition of the Flow of Goods and Inventory Levels by Using a MILP Model

In this phase, referring to a time horizon T, we define the optimal flows in the
considered network and the inventory level for each stock point (i.e. for each D and P).
Before presenting the model, let us give the required notation.
For each central and peripheral depot j, ∀j ∈ D ∪ P, the following quantities are known:

l_j    lead time of depot j;
k_j    capacity of depot j;
s_j    service level of depot j;
o_jt   order point of depot j in period t, ∀t ∈ T;
co_j   fixed ordering cost of depot j;
cw_j   warehousing cost (per unit of inventory and per unit of time) of depot j;
cs_j   stock out cost (per unit of demand and per unit of time) of depot j;
I_j0   stock level of depot j at the beginning of the time horizon;
Q_{j,t-l_j}   quantity ordered by depot j in the previous |l_j| periods of time, with respect to the beginning of the time horizon.

Moreover, for each period of time t, for each big client and for each peripheral depot i, ∀i ∈ B ∪ P, ∀t ∈ T, the following are known:

d_it   demand of big client/peripheral depot i in period t;
ct_di  transportation cost from central depot d to big client/peripheral depot i, ∀d ∈ D;
c_di   assignment of big client/peripheral depot i to central depot d, ∀d ∈ D (i.e. c_di = 1 if i is assigned to central depot d, 0 otherwise).

The decisions, in each time period t, concern the ordered quantity and stock out of each depot:

Q_jt ≥ 0    ordered quantity of depot j in time period t, ∀j ∈ D ∪ P, ∀t ∈ T;
β_dit ≥ 0   stock out of central depot d with respect to big client i, in time period t, ∀d ∈ D, ∀i ∈ B, ∀t ∈ T;
β_jt ≥ 0    stock out of peripheral depot j in time period t, ∀j ∈ P, ∀t ∈ T.

When a depot orders a positive quantity Q_jt > 0, it has to pay a fixed ordering cost, and the following binary decision variables are needed:

y_jt = 1 if depot j issues an order in time period t, 0 otherwise, ∀j ∈ D ∪ P, ∀t ∈ T;

the stock level and inventory position of each peripheral depot:

I_jt ≥ 0    stock level of peripheral depot j in time period t, ∀j ∈ P, ∀t ∈ T;
IP_jt ≥ 0   inventory position of peripheral depot j in time period t, ∀j ∈ P, ∀t ∈ T;

the echelon stock level and echelon inventory position of each central depot:

I^ech_dt ≥ 0    echelon stock level of central depot d in time period t, ∀d ∈ D, ∀t ∈ T;
IP^ech_dt ≥ 0   echelon inventory position of central depot d in time period t, ∀d ∈ D, ∀t ∈ T.

The proposed Integrated Inventory Management (IIM) model can now be given as follows.

$$\min \; \sum_{t \in T}\sum_{j \in D \cup P} co_j\, y_{jt} \;+\; \sum_{t \in T}\sum_{d \in D} cw_d\, I^{ech}_{dt} \;+\; \sum_{t \in T}\sum_{j \in P} cw_j\, I_{jt} \;+\; \sum_{t \in T}\sum_{j \in P} cs_j\, \beta_{jt} \;+\; \sum_{t \in T}\sum_{d \in D}\sum_{i \in B} cs_d\, \beta_{dit} \;+\; \sum_{t \in T}\sum_{j \in P} ct_{dj}\, Q_{j,t-l_j} \;+\; \sum_{t \in T}\sum_{d \in D}\sum_{i \in B} ct_{di}\,(d_{it} - \beta_{dit}) \quad (1)$$

subject to

$$M y_{jt} - Q_{jt} \ge 0 \quad \forall j \in D \cup P,\ \forall t \in T \quad (2)$$

$$I_{d,t-1} + Q_{d,t-l_d} - \sum_{j \in P} c_{dj}\, Q_{jt} - \sum_{i \in B} c_{di}\,(d_{it} - \beta_{dit}) = I_{dt} \quad \forall d \in D,\ \forall t \in T \quad (3)$$

$$I_{j,t-1} + Q_{j,t-l_j} - (d_{jt} - \beta_{jt}) = I_{jt} \quad \forall j \in P,\ \forall t \in T \quad (4)$$

$$I_{dt} + \sum_{j \in P} c_{dj}\,(I_{jt} + Q_{jt}) = I^{ech}_{dt} \quad \forall d \in D,\ \forall t \in T \quad (5)$$

$$I^{ech}_{dt} + Q_{dt} - \sum_{j \in P} c_{dj}\, \beta_{jt} - \sum_{i \in B} c_{di}\, \beta_{dit} = IP^{ech}_{dt} \quad \forall d \in D,\ \forall t \in T \quad (6)$$

$$I_{jt} + Q_{jt} - \beta_{jt} = IP_{jt} \quad \forall j \in P,\ \forall t \in T \quad (7)$$

$$IP^{ech}_{dt} \ge o_{dt} \quad \forall d \in D,\ \forall t \in T \quad (8)$$

$$IP_{jt} \ge o_{jt} \quad \forall j \in P,\ \forall t \in T \quad (9)$$

$$I_{jt} \le k_j \quad \forall j \in D \cup P,\ \forall t \in T \quad (10)$$

$$\frac{d_{jt} - \beta_{jt}}{d_{jt}} \ge s_j \quad \forall j \in P,\ \forall t \in T \quad (11)$$

$$\frac{d_{it} - \beta_{dit}}{d_{it}} \ge s_d \quad \forall d \in D,\ \forall i \in B : c_{di} = 1,\ \forall t \in T \quad (12)$$

(1) is the objective function of the proposed model, minimizing the four main cost components of our problem, namely the ordering, warehousing, stock out and transportation costs.
Constraints (2) set the binary variable y_jt to 1 if a positive quantity Q_jt is ordered by a depot of the network. Equations (3) and (4) define, for each time period t, the stock level of the central and peripheral depots, respectively. Note that the main difference between (3) and (4) is due to the fact that central depots play a dual role; in particular, as already said, central depots have to serve both customers and peripheral depots.
Equations (5) and (6) define the echelon stock level and the echelon inventory position of the central depots, while (7) is related to the inventory position of the peripheral ones.
Constraints (8) and (9) concern the control of the stock level to maintain at each stock point, and force the echelon inventory position and the inventory position to be greater than or equal to the established order point for central and peripheral depots, respectively.
(10) are the capacity constraints of the stock points of the network.
Finally, (11) and (12) are the customer service level constraints and impose that the percentage of satisfied demand of the depots and big clients must be greater than or equal to a given predefined quantity expressed by the service level.
This model implies that information sharing is performed in the network. Otherwise, if no network-wide information is available, only the stock level can be used in the model instead of the echelon stock and echelon inventory position.
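The sketch below, written with the PuLP library, is a deliberately reduced single-depot instance of the model's ordering/holding/stock-out structure; all data are invented, and the echelon stock, lead times, transportation costs and service-level constraints (5)–(12) are omitted for brevity. It is meant only to make the role of the big-M constraint (2) and of the stock balance (4) concrete.

```python
# Reduced single-depot sketch of the IIM structure; invented data, zero lead time.
import pulp

T = range(5)                          # five daily periods
demand = [30, 50, 20, 60, 40]         # hypothetical demand d_t
co, cw, cs, M = 100, 1.0, 8.0, 1000   # ordering, warehousing, stock-out costs; big-M

prob = pulp.LpProblem("IIM_sketch", pulp.LpMinimize)
Q = pulp.LpVariable.dicts("Q", T, lowBound=0)        # ordered quantity Q_t
I = pulp.LpVariable.dicts("I", T, lowBound=0)        # stock level I_t
beta = pulp.LpVariable.dicts("beta", T, lowBound=0)  # stock out beta_t
y = pulp.LpVariable.dicts("y", T, cat="Binary")      # order issued in period t?

# Objective, cf. (1): ordering + warehousing + stock-out costs.
prob += pulp.lpSum(co * y[t] + cw * I[t] + cs * beta[t] for t in T)

I_prev = 20                                                # initial stock I_0
for t in T:
    prob += Q[t] <= M * y[t]                               # cf. (2): link Q_t to y_t
    prob += I_prev + Q[t] - (demand[t] - beta[t]) == I[t]  # cf. (4): stock balance
    prob += beta[t] <= demand[t]                           # stock out cannot exceed demand
    I_prev = I[t]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for t in T:
    print(t, Q[t].value(), I[t].value(), beta[t].value())
```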

3.2 Phase 2: The Integration Phase

This phase is aimed at verifying whether the solution of model (1)–(12), which defines the optimal inventory allocation in the whole distribution system, is consistent with the current stock level in the whole network, and thus at defining the best inventory strategy according to the current global stock level, with the goal of avoiding unbalanced inventories at the different echelons. In fact, note that even a shortage in one part of the network may require modifying the optimal inventory allocation in the whole distribution system.
At the end of phase 1 the inventory manager of the network knows the global amount of goods that has to leave each central depot to serve peripheral depots and big clients. These quantities represent the out flow of these depots and are the result of a base stock policy obtained by solving model IIM.
The following main steps describe phase 2, called the integration phase; a control-flow sketch in code form is given after the steps.
Step 1: identification of local stock out for each central depot.
If the existing stock level is greater than the out flow, the base stock policy
is used.
Else: the inventory policy is re-determined by solving model IIM for the whole network, i.e. in the new model the assignment of peripheral depots and big clients to the central depots is a decision to be taken (we will refer to this new model as IIM-A).
At the end of Step 1, the inventory manager of the network knows the
quantities to transfer from each central depot to the peripheral depots and
to big clients assigned to it. These quantities represent the out flow of
central depots and are the result of a base stock policy with new
assignments. The new assignments guarantee a better distribution of
goods in the network.
Step 2: identification of possible global stock out in the network.
If the global amount of inventories existing at the top level of the network
is sufficient to meet the demand of the whole network (total out flow), the
base stock policy with the new assignment is used.
Else: a modification of the basic stock policy [3] is used and a notification of the existing stock level is sent to the production function.
The basic stock policy modification is obtained by reducing the order point of each depot, that is, by solving model IIM-A with a "minimum" order point (o^min_jt); we will refer to this new model as IIM-A(omin). In this way, the demand of the peripheral depots decreases while inventories are kept at the top echelon of the network.
Step 3: identification of emergency global stock out in the network.
If the global amount of inventory existing at the top level of the network is enough to satisfy the requirements of the whole network resulting from the solution of IIM-A(omin), the basic stock policy modification is used.
Else a rationing policy [11] is used, that is, each peripheral depot with a positive demand in the time period under investigation will receive a quantity defined in such a way that each depot has the same days of coverage (balanced distribution).
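The control flow of Steps 1–3 can be summarized as follows; the function below is our own pseudocode-style sketch, in which solve_iim_a, solve_iim_a_omin and rationing stand in for the corresponding models and policy of the paper.

```python
# Control-flow sketch of the integration phase (Steps 1-3); the callables are stubs.
def integration_phase(stock, out_flow, top_level_stock, total_out_flow,
                      solve_iim_a, solve_iim_a_omin, rationing):
    # Step 1: local stock out at some central depot?
    if all(stock[d] >= out_flow[d] for d in stock):
        return "base stock policy (IIM flows)"
    out_flow = solve_iim_a()             # re-assign P and B to the central depots

    # Step 2: global stock out in the network?
    if top_level_stock >= total_out_flow:
        return "base stock policy with new assignments (IIM-A)"
    requirements = solve_iim_a_omin()    # reduced ("minimum") order points;
                                         # production would also be notified here

    # Step 3: emergency global stock out?
    if top_level_stock >= requirements:
        return "basic stock policy modification (IIM-A(omin))"
    return rationing()                   # equal days of coverage at every depot
```

For instance, calling the function with sufficient stock at every central depot simply confirms the base stock policy obtained in phase 1.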

4 Results of the Application of the Two-Phase Procedure

We use the solution approach described above to solve distribution and inventory problems on different networks and to evaluate new distribution strategies. The experimental tests are based on a distribution network made up of 2 D, 10 P, 30 B, and 100 C already assigned to P. The time horizon is three weeks, split into time periods of one day. The feasible flows of goods in the network are those described in Fig. 1. The demand of the customers of the network presents a constant trend during the considered time horizon, and the demand of the big clients B represents 20 % of the global demand of the network.
We simulate different scenarios by assuming different initial stock situations and customers' demands. In particular, referring to the initial stock situation, we consider a standard scenario (St.S.), in which the initial stock situation is coherent with the demand of the network, and a critical scenario (Cr.S.), in which the initial stock situation is not sufficient to satisfy the demand of the network.
For the considered scenarios we compare the costs and inventory levels obtained by using the two above-mentioned concepts of inventory in the MILP model (i.e. the inventory I. and the echelon stock Ech.).
Figure 2 reports some graphs related to the partition of the logistic costs in the different cases analyzed; the last row indicates the total logistic costs. These costs are obtained by solving the MILP models.
It can be noted that when the echelon stock concept is used the warehousing costs are lower than when the inventory concept is used, while the ordering costs show the opposite trend.

Fig. 2 Partition of logistic costs in the different scenarios analysed (total logistic costs: 378,000; 352,500; 401,000; 382,500)



Fig. 3 Comparison of inventory levels at central depots

Another difference concerns the stock levels maintained at the central depots during the time horizon, as reported in Fig. 3. We noted that, when referring to echelon stocks, that is, in the case of integrated inventory management obtained through information sharing, costs decrease by 7 % on average.
Referring to the customers' demand, we consider 2 other scenarios that differ in the percentage of demand of customers C (i.e. served by P) and of big clients B (i.e. served directly by D). Starting from a standard initial situation (St.S.) and using the echelon stock concept (Ech.), the graphs in Fig. 4 compare the results obtained in the following cases: 100 %C–0 %B, 80 %C–20 %B and 60 %C–40 %B. A greater presence of big clients B in the network involves higher ordering and warehousing costs for the central depots D, while the total ordering and warehousing costs are lower. The total transportation costs also decrease.

Fig. 4 Comparison of ordering, inventory, transportation and stock out costs in the case of different partitions of customers between B and C

It is also interesting to note the difference in the distribution of inventories among layers (i.e. at the central and peripheral depots) due to the presence of different percentages of big clients with respect to the total demand of the clients of the network.
Bearing in mind the dual role played by the central depots, we can note that big clients have a positive effect on the inventories kept at the central depots for supplying the depots at the lower level of the network; consequently, the inventories in the whole network also turn out to be lower when a higher number of big clients is present in the network.
Finally, we evaluate the effect of using different strategies, enabled by information sharing, on both total logistic costs and stock out costs. The proposed procedure seems very promising when the initial situation is critical. For example, when the initial stock situation is critical, setting to 100 % the total cost obtained by solving model IIM (phase 1), i.e. by applying a base stock policy, 5 % of this cost is related to stock out. At the end of phase 2 the total cost is reduced to 98.3 %, with 0.8 % of stock out costs. In this way we are able to increase the customer service level, and the manager can generally act so as to return to a standard situation in a shorter amount of time: in the analyzed cases, on average, 18–20 days are necessary to come back to a standard situation when a base stock policy is used, while only 1 week is necessary when a rationing policy is used. We have noted that costs increase when the rationing policy is used, since this policy favours inventory balance to increase the percentage of customers' fulfilled demand, whilst transportation is not optimized.

5 Conclusions and Future Research

In this work we have proposed a procedure aimed at integrating the inventory and distribution functions for balancing stock levels in distribution networks; it seems very promising especially for reducing stock out and for returning more quickly to a normal stock situation. The main limit of this work is the assumption of a known demand. In fact, since a certain level of uncertainty generally characterizes the demand (and other data, e.g. lead times), in the near future we will analyze in depth the capability of the proposed method to guarantee a high service level in different market conditions, particularly when there is greater variability in the demand. The use of robust optimization for tackling the uncertainty of the demand will be investigated too. Finally, different strategies for avoiding inventory unbalance could be addressed, such as lateral transshipments among facilities operating at the same level of the network, considering the order point in the proposed model as a decision variable, and including production plans in the decision process.

References

1. Agatz, N.A.H., Fleischmann, M., van Nunen, J.A.E.E.: E-fulfillment and multi-channel
distribution—a review. Eur. J. Oper. Res. 187, 339–356 (2008)
2. Ambrosino, D., Scutellà, M.G.: Distribution network design: new problems and related
models. Eur. J. Oper. Res. 165, 610–624 (2005)
3. Chen, F.: Optimal policies for multi-echelon inventory problems with batch ordering. Oper.
Res. 48(3), 376–389 (2000)
4. Chiang, W.K., Monahan, G.E.: Managing inventories in a two-echelon dual-channel supply
chain. Eur. J. Oper. Res. 162, 325–341 (2005)
5. Choi, T.-M., Sethi, S.: Innovative quick response programs: a review. Int. J. Prod. Econ. 127
(1), 1–12 (2010)
6. Chopra, S., Meindl, P.: Supply chain management: strategy, planning & operation. Springer
(2007)
7. Chu, C.-L., Leon, V.J.: Single-vendor multi-buyer inventory coordination under private
information. Eur. J. Oper. Res. 191(2), 485–503 (2008)
8. Chu, C.-L., Leon, V.J.: Scalable methodology for supply chain inventory coordination with
private information. Eur. J. Oper. Res. 195(1), 262–279 (2009)
9. Clark, A.J., Scarf, H.: Optimal policies for a multi-echelon inventory problem. Manage. Sci. 6,
475–490 (1960)
10. Dettenbach, M., Thonemann, U.W.: The value of real time yield information in multi-stage inventory systems—exact and heuristic approaches. Eur. J. Oper. Res. (2014). Available online 30 June 2014
11. Diks, E.B., De Kok, A.G.: Optimal control of a divergent multi-echelon inventory system.
Eur. J. Oper. Res. 111 (1998)
12. Hajji, A., Gharbi, A., Kenne, J.-P., Pellerin, R.: Production control and replenishment strategy
with multiple suppliers. Eur. J. Oper. Res. 208(1), 67–74 (2011)
13. Lee, L.H., Billington, C.: Material management in decentralized supply chains. Oper. Res. 41(5), 835–848 (1993)
14. Seo, Y., Jung, S., Hahm, J.: Optimal reorder decision utilizing centralized stock information in
a two-echelon distribution system. Comput. Oper. Res. 29, 171–193 (2002)
15. Van der Heijden, M.C.: Supply rationing in multi-echelon divergent systems. Eur. J. Oper.
Res. 101, 532–549 (1997)
16. Verrijdt, J.H.C.M., De Kok, A.G.: Distribution planning for a divergent N-echelon network
without intermediate stock under service restriction. Int. J. Prod. Econ. 38, 225–243 (1995)
17. Yao, D.-Q., Yue, X., Mukhopadhyay, S.K., Wang, Z.: Strategic inventory deployment for retail and e-tail stores. Omega 37(3), 646–658 (2009)
Unsupervised Neural Networks
for the Analysis of Business Performance
at Infra-City Level

Renata Paola Dameri, Roberto Garelli and Marina Resta

Abstract The goal of this paper is to use Neural Networks (NN) to analyze business performance and support the development policies of small territories. The contribution of the work to the existing literature may be summarized as follows: we focus on the application of an unsupervised neural network (namely, Self-Organizing Maps—SOM) to discover clusters of firms in micro-territories inside a city's boundaries, and to explore possible development policies at the local level. Although NN have been widely employed to evaluate firm performance since the early 1990s, to the best of our knowledge the use of SOM for that specific task is much less documented. Moreover, the main novelty of the paper lies in the attention to data at the "microscopic" level: data processing in an infra-city perspective has in fact been neglected until now, although recent studies demonstrate that inequalities in the economic and well-being conditions of people are higher among neighbourhoods of the same city than among different cities or regions. The performance analysis of a large set (around 7000) of companies settled in Genova, Italy, allows us to test our research method and to design further applications to a large spectrum of territorial surveys regarding both economic and social well-being conditions.

Keywords Neural networks · Self-organizing maps · Knowledge management · Business performance · Territorial development · Inclusive growth

R.P. Dameri · R. Garelli (✉) · M. Resta
DIEC, University of Genova, Genova, Italy
e-mail: rgarelli@economia.unige.it
R.P. Dameri
e-mail: dameri@economia.unige.it
M. Resta
e-mail: resta@economia.unige.it


1 Introduction and Theoretical Background

The recent economic crisis has seriously impacted the economic and social well-being of citizens, contributing to increasing inequalities among countries, races, genders, regions and even cities. Several OECD¹ indicators (regarding both economic and non-economic well-being drivers) show that people have suffered from the economic crisis differently depending on where they live [1].
An interesting point of view, not deeply investigated so far, concerns the role played by micro-territories in influencing citizens' quality of life and inequalities [2]. As a matter of fact, territories are now playing a growing role in defining development policies, also thanks to the regionalization of EU policies and funding: regions are the core government body considered by the EU in shaping its own policies. Furthermore, the OECD has focused attention on a smaller scale, collecting statistical data on well-being and social and economic development not only at the national but also at the infra-national level, hence taking regions, small regions (corresponding to provinces or similar) and metropolitan areas into account [3].
However, these analysis efforts may not be enough: several studies demonstrate that inequality is higher among neighborhoods belonging to the same metropolitan area than among regions or cities. On the other hand, micro-territories are often crucial in determining the settlement of technological districts or regional clusters. It is therefore important to further refine the survey scale, analyzing data concerning smaller areas, because the economic and social well-being determinants in a city's neighborhoods considerably influence people's daily life [4].
Starting from this point, this work aims to develop and test a micro-territorial dashboard based on neural networks to analyze data, hence supporting the knowledge of small portions of metropolitan areas and accordingly addressing development policies aimed at strengthening local opportunities and fighting inequality [5].
In order to develop a pilot, we analyzed data regarding business performance in the Municipality of Genova. Genova is an industrial city and port in Northern Italy; it counts 600,000 inhabitants and is partitioned into nine administrative districts. Our survey investigates the relations between firm performance and small territories, to discover—where existent—the reciprocal influence of positioning economies, territorial development and citizens' well-being. In this first application, our focus is on the emergence of firm clusters, that is, groups of firms characterized by similarities in their performance, as the presence of firms with sound performance profiles seems an important driver of either well-being or its absence.
In search of significant patterns of activity, we employed an unsupervised neural network, namely Self-Organizing Maps (SOM). The use of SOM in the budgeting and accountancy literature is generally attested by contributions aimed either at discovering patterns of companies with similar strategic positioning in their reference industry [6], or at controlling banks' exposure to the risk of default [7]. However, in our

¹ http://www.oecd.org/.

case we are interested in SOM's capability to pull out, through an entirely data-driven process, similarities in companies and relationships with micro-territories that are not theoretically conceptualized a priori, but that stem from the data processing algorithm. To the best of our knowledge, the application of SOM to exploring firm patterns at such a microscopic level is rather unexplored.
In accordance with the above, this paper is organized as follows. Section 2 describes the neural algorithm in use. Section 3 presents the case study, concerning a sample set of 7703 firms variously distributed in the metropolitan area of Genova, Italy; the results obtained from both the traditional performance analysis and the SOM are then discussed. Section 4 concludes.

2 Methodology

Artificial Neural Networks (ANN) have features that make them appealing both to connectionist researchers and to practitioners needing ways to solve complex problems, thanks to their ability to facilitate the handling of large amounts of data [8]. The reason for this is that each node in a neural network is essentially an autonomous entity: each performs only a small computation in the grand scheme of the problem. The aggregate of all these nodes, the entire network, is where the true capability lies.
Before an ANN can become useful for information retrieval, it must learn about the information at hand. In general, there are three flavors of learning [9]: supervised, reinforcement and unsupervised learning. In the first case the training data consist of a set of examples; each example in turn is a pair made up of an input object (typically a vector) and a desired output value (also called the supervisory signal). The data at one's disposal are then used to produce an inferred function, which can be used for mapping new examples. The accuracy of the learned function is controlled by monitoring the error (the bias) between the estimated and the desired output. Typically the procedure ends when all the example pairs have been examined and the error has been iteratively reduced to values very close to zero. In reinforcement learning [10], on the other hand, the algorithmic machine interacts with the input environment by producing actions a1, a2, …. These actions affect the state of the environment, which in turn results in the machine receiving some scalar rewards (or punishments) r1, r2, …. The goal is to learn to act in a way that either maximizes the future rewards the algorithm receives, or minimizes the punishments over its lifetime. Finally, in unsupervised learning the machine simply receives inputs x1, x2, …, but obtains neither supervised target outputs nor rewards from its environment: the network is simply asked to try on its own to discover patterns in the input data. It may seem somewhat mysterious to imagine what the machine could possibly learn, given that it does not get any feedback from its environment. However, it is possible to develop a formal framework for unsupervised learning based on the notion that the main goal of the procedure is to find hidden structure in the data, hence summarizing and explaining their key features. Kohonen's

Self-Organizing Map (SOM) [11] is one of the most popular neural network architectures based on unsupervised learning.
From the technical viewpoint, the SOM projects data from the original higher-dimensional space onto a lower-dimensional one (usually bi-dimensional), keeping the existing topological relationships unchanged. This can be particularly useful when the analyst has to examine a complex dataset to highlight its intrinsic features. The way SOMs work may be easily described. Consider first a finite set X = {x(t)}, t = 1,…,T, of r-dimensional input data items:

$$x(t) = [x_1\ x_2 \ldots x_r]. \quad (1)$$

Besides, let us assume that M is the m × k bi-dimensional projection grid whose elements (units, nodes) are arranged into m rows and k columns: each unit of M is associated with an array w_ij (i = 1,…,m, j = 1,…,k) whose number of components is the same as that of the input data (i.e. w_ij has r elements).
After initializing the map nodes at random, the Kohonen algorithm, in its simplest (and most commonly used) version, iteratively modifies the map nodes by the following rules. For each t = 1,…,T, we will have:
$$w_{ij}(t) = \min_{w_{ij}(t) \in M} d\big(x(t), w_{ij}(t)\big) \quad (2)$$

and:

$$w_{ij}(t+1) = w_{ij}(t) + \alpha(t)\, h\big(t, x(t), w_{ij}(t)\big)\,\big[x(t) - w_{ij}(t)\big], \quad (3)$$

where $d\big(x(t), w_{ij}(t)\big)$ is the function that computes the distance between the input pattern and each node in M. Although very sophisticated functions may be used, the most common choice is the Euclidean distance:

$$d_E\big(x(t), w_{ij}(t)\big) = \sqrt{\big(x(t) - w_{ij}(t)\big)'\,\big(x(t) - w_{ij}(t)\big)}, \quad (4)$$

the symbol $'$ being the standard notation for transposition. The node satisfying (2) is called the winner or Best Matching Unit (BMU). The notation $\alpha(t)$ indicates a scalar decreasing factor in the range (0,1), depending on time t, that defines the size of the correction: starting from values close to one (maximum correction), as time goes on $\alpha(t)$ decreases to values near zero (no correction at all). Finally, h is the neighborhood function; it models the distance between the map nodes and the BMU. The function h may assume various shapes, but here we refer to the simplest one:

$$h\big(p_{BMU}, p_{w_{ij}}, t\big) = e^{-t\,\|p_{BMU} - p_{w_{ij}}\|} \quad (5)$$

where $p_{BMU}$ and $p_{w_{ij}}$ are the grid coordinates of the BMU and of the generic map node, respectively.

Fig. 1 A sample SOM

The goodness of the SOM representation of the input space can be evaluated by several error measures [12]. Here we considered the Topographic Error (TE). TE is the simplest of the topology preservation measures and works as follows: for all data samples, the respective best and second-best matching units are determined; if these are not adjacent on the map lattice, this is considered an error. The total error is then normalized to a range from 0 to 1, where 0 means perfect topology preservation. The learning procedure is henceforth arrested when the TE is reasonably close to zero.
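For concreteness, here is a compact NumPy sketch of the online training rules (2)–(5) together with a TE computation; the grid size, decay schedules and random data are our own assumptions (real inputs would first be rescaled column by column, as done in Sect. 3.2), and the adjacency test uses the 4-neighbour convention.

```python
# Minimal online SOM with topographic error, following (2)-(5); illustrative only.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((500, 4))                 # 500 input items, r = 4 (already in [0, 1])
m, k = 10, 10
W = rng.random((m * k, X.shape[1]))      # map nodes, flattened m x k grid
P = np.array([(i, j) for i in range(m) for j in range(k)], dtype=float)  # grid coords

T_steps = 2000
for t in range(T_steps):
    x = X[rng.integers(len(X))]
    bmu = np.argmin(np.linalg.norm(W - x, axis=1))     # (2) with Euclidean d, cf. (4)
    alpha = 0.5 * (1.0 - t / T_steps)                  # decreasing factor in (0, 1)
    grid_d = np.linalg.norm(P - P[bmu], axis=1)
    h = np.exp(-(5.0 * t / T_steps) * grid_d)          # (5), with a rescaled time axis
    W += alpha * h[:, None] * (x - W)                  # (3): move nodes towards x

def topographic_error(X, W, P):
    """Share of samples whose best and second-best units are not grid-adjacent."""
    errors = 0
    for x in X:
        first, second = np.argsort(np.linalg.norm(W - x, axis=1))[:2]
        if np.linalg.norm(P[first] - P[second]) > 1.0:  # not 4-adjacent on the lattice
            errors += 1
    return errors / len(X)

print(topographic_error(X, W, P))        # 0 would mean perfect topology preservation
```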
Apart from theoretical considerations, the beauty of the SOM is that it offers a nice tool to project high-dimensional input data onto a two-dimensional lattice, according to the principle that similar inputs are mapped onto neighboring nodes. Consider for instance Fig. 1, where a SOM map is shown as the result of projecting 4-dimensional input samples onto the neural space.
This figure uses the following coding: hexagons represent the neurons, and the colors indicate the distances between neurons; different tones of red refer to the largest distances, while blue and lighter colors represent smaller distances. According to the color division, the network has clustered the data into three main groups. The color difference indicates that data points in these regions are farther apart. The interpretation of the results may be given at various levels of detail. An example is the study of how much the input components affect the overall representation: this information can be visually studied by examining the SOM weight planes, that is, by visualizing the neuron coloring for each single input component. Figure 2 offers a representation of the four weight planes obtained from the map in Fig. 1.
In this way the analyst has the possibility to study both the organization of the input space provided by the overall SOM (as in Fig. 1) and the impact of each component on the overall structure of the data (as in Fig. 2), hence deriving some important pieces of information concerning the intrinsic features of the dataset.

Fig. 2 Component planes in a sample SOM

3 Case Study: Companies in Genova

The subject database for this study consisted of a sample of 7719 companies (cut-off date 31/3/2014) with registered offices in the Municipality of Genova (a smart city in the Northern part of Italy). The extrapolation of the data uses the AIDA² data bank. Starting from the original data sample, we eliminated 16 firms because of a lack of relevant data, hence obtaining the final data sample, made up of 7703 companies. All the companies have the legal form of either limited companies or cooperatives, with balance sheets regularly settled in the year 2012. This dataset was employed to run both a "conventional" performance analysis [13] and neural networks, as we describe in the next subsections.

3.1 Performance Analysis: A Traditional Approach

Our sample of companies was grouped according to several criteria, including: the legal form, the geographic position within Genova, and the merchandise category. The overall picture highlights the following situation:
• from the legal point of view, 95 % of the examined companies are limited companies; the remaining 5 % is made up of cooperative companies (25 % of which are social cooperatives);
• from the geographical point of view, 61 % of the whole sample is localised in the central district (city centre); the remaining 39 % is almost evenly distributed across the other 8 districts;

² AIDA stands for Analisi Informatizzata delle Aziende. It is a database provided by Bureau van Dijk s.p.a. (http://www.bvdinfo.com/it-it/home), giving information (mainly) about the balance sheets of Italian companies.

• looking at the merchandise category, we labelled the companies according to the ATECO³ codes employed by ISTAT (the Italian National Institute of Statistics); grouping the companies into 4 macro-groups, we have: agricultural, livestock and mining activities (codes from 01 to 09), representing only 1 % of the available companies; the manufacturing sector (codes from 10 to 33), representing 9 % of the sample; activities for the production and distribution of energy, water and waste disposal together with construction activities (codes from 35 to 44), incorporating 13 % of our dataset; the remaining 78 % of the sample belongs to the services sector (codes from 45 to 99).
In order to evaluate the positioning of the companies in the Genovese context we ran a "conventional" analysis by way of the following indicators: the ratio fixed assets/total assets (F/K), to evaluate the weight of investments; the ratio net capital/total assets (N/K), to explore the equity situation; the number of employees; the return on equity (ROE), the return on assets (ROA) and the value added (VA), to investigate profitability; and the ratios net salaries/value added and amortization/value added, to evaluate the distribution policies for the values generated. Finally, in order to understand (at least approximately) the level of company productivity, the indicator value of production/no. of employees is used. At this point it is necessary to keep in mind that many of the examined businesses may have no employees, being single-partner companies or similar; as such, they are treated separately, given that the added value will be partly distributed among the owners.
Clearly, one could employ either different or more articulated indicators; however, we strongly believe that the presented framework provides useful indicators for understanding the current situation of businesses in the town area of Genova.
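As a hedged sketch of this kind of computation (the column names and class boundaries are our own assumptions, not AIDA's actual schema), the indicators and the four-class binning could be obtained with pandas as follows:

```python
# Hypothetical balance-sheet data; field names are not AIDA's actual schema.
import pandas as pd

df = pd.DataFrame({
    "fixed_assets": [800, 150, 300],
    "total_assets": [1000, 900, 400],
    "net_capital":  [250, 600, 50],
    "net_profit":   [30, 90, -5],
    "equity":       [200, 500, 40],
})

df["F_K"] = df["fixed_assets"] / df["total_assets"]  # weight of investments
df["N_K"] = df["net_capital"] / df["total_assets"]   # equity situation
df["ROE"] = df["net_profit"] / df["equity"]          # profitability

# Bin F/K into non-overlapping classes like those of Table 1.
bins = [-float("inf"), 0.2, 0.4, 0.6, 0.8, float("inf")]
labels = ["<20%", "20-40%", "40-60%", "60-80%", ">80%"]
print(pd.cut(df["F_K"], bins=bins, labels=labels).value_counts())
```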
Table 1 reports a summary of the indicator set, showing the relative frequency of companies in four non-overlapping classes of values. The analysis is performed both for the overall sample (Tot.) and for the companies divided according to their legal form (Lim.co and Coop). Similar information is also provided in Tables 2 and 3. In particular, Table 2 replicates the analysis performed in Table 1, focusing on the companies located in the central district (the one with the greatest concentration of companies in Genova), while Table 3 does the same for the companies whose ATECO codes represent the majority (78 %) of the examined sample.
The performance analysis revealed some interesting evidence. As a first remark, the majority of the examined sample shows a low weight of investments; at the same time, those companies have a high level of debt. Probably this depends on the small size of the companies. Furthermore, both the return on equity (ROE) and the return on assets (ROA) show very low levels, thus indicating the low capability of companies to produce value with their assets. Finally, in our sample it was not always easy to find data regarding employees, and many companies exhibit
³ ATECO is the abbreviation for Attività Economiche; it is the Italian adaptation, made by ISTAT to fit the Italian situation, of the Eurostat classification of economic activities. See: http://www.istat.it/it/strumenti/definizioni-e-classificazioni.

Table 1 Indicators frequency distribution: an analysis based on the legal form of the Genovese companies

F/K     | >80 % | 80–60 % | 60–40 % | 40–20 % | <20 % |
Tot.    | 1118  | 685     | 730     | 1159    | 4013  |
Lim.co  | 1099  | 664     | 690     | 1097    | 3798  |
Coop    | 19    | 21      | 40      | 60      | 215   |
N/K     | >66 % | 66–50 % | 50–40 % | 40–25 % | <25 % |
Tot.    | 1210  | 657     | 552     | 1090    | 4194  |
Lim.co  | 1181  | 642     | 532     | 1044    | 3949  |
Coop    | 29    | 15      | 20      | 46      | 245   |
ROE     | >100 % | 100–50 % | 50–20 % | 20–10 % | <10 % |
Tot.    | 366   | 512     | 1058    | 825     | 4942  |
Lim.co  | 323   | 486     | 1013    | 803     | 4723  |
Coop    | 43    | 26      | 45      | 22      | 219   |
ROA     | >100 % | 100–50 % | 50–20 % | 20–10 % | <10 % |
Tot.    | 5     | 114     | 487     | 765     | 6332  |
Lim.co  | 5     | 110     | 474     | 738     | 6021  |
Coop    | 0     | 4       | 13      | 27      | 311   |
Employees | >100 | 100–20 | 20–5 | 5–1 | 0 | n.d.
Tot.    | 105   | 523     | 1854    | 2234    | 2807  | 180
Lim.co  | 81    | 470     | 1761    | 2139    | 2726  | 171
Coop    | 24    | 53      | 93      | 95      | 81    | 9
Wages/VA (VA > 0) | >50 % | 50–20 % | 20–10 % | 10–5 % | <5 % | n.d.
Tot.    | 2810  | 1313    | 183     | 72      | 1190  | 6
Lim.co  | 2582  | 1291    | 182     | 67      | 1881  | 5
Coop    | 228   | 22      | 1       | 5       | 38    | 1
Amort./VA (VA > 0) | >50 % | 50–20 % | 20–10 % | 10–5 % | <5 % | n.d.
Tot.    | 743   | 1005    | 1038    | 1178    | 2339  | 0
Lim.co  | 725   | 984     | 1000    | 1144    | 2155  | 0
Coop    | 18    | 21      | 38      | 34      | 184   | 0
VP/emp. (VP > 0) | >1 mln | 1–0.4 | 0.4–0.2 | 0.2–0.05 | 0.05–0 | n.d.
Tot.    | 226   | 529     | 839     | 2361    | 758   | 2984
Lim.co  | 219   | 522     | 827     | 2290    | 590   | 490
Coop    | 7     | 7       | 12      | 71      | 168   | 67

no labor force, probably employing only their partners. It is therefore difficult to assess the companies' productivity.
Generally speaking, this kind of analysis only allows the study of average values, therefore neglecting marginal situations and flattening the results produced by the companies. Moreover, the analysis is conducted assuming an "a priori" clustering of companies, thereby arguing that companies with the same legal form, or sharing a similar city location or merchandise sector, should necessarily have similar performance.

Table 2 Indicators frequency distribution: situation of 4,640 companies in the central district of Genova

F/K     | >80 %: 797 | 80–60 %: 437 | 60–40 %: 382 | 40–20 %: 638 | <20 %: 2386
N/K     | >66 %: 829 | 66–50 %: 387 | 50–40 %: 324 | 40–25 %: 649 | <25 %: 2451
ROE     | >100 %: 205 | 100–50 %: 315 | 50–20 %: 619 | 20–10 %: 470 | <10 %: 3031
ROA     | >100 %: 4 | 100–50 %: 65 | 50–20 %: 294 | 20–10 %: 427 | <10 %: 3850
Employees | >100: 60 | 100–20: 279 | 20–5: 959 | 5–1: 1338 | 0: 1898 | n.d.: 106
Wages/VA (VA > 0) | >50 %: 1539 | 50–20 %: 730 | 20–10 %: 113 | 10–5 %: 44 | <5 %: 1261 | n.d.: 2
Amort./VA (VA > 0) | >50 %: 488 | 50–20 %: 581 | 20–10 %: 579 | 10–5 %: 643 | <5 %: 1398 | n.d.: 0
VP/emp. (VP > 0) | >1 mln: 140 | 1–0.4: 311 | 0.4–0.2: 446 | 0.2–0.05: 1320 | 0.05–0: 417 | n.d.: 1639

To overcome these limitations of the survey, we also performed an analysis based on neural networks (NN), whose results are discussed in the next subsection.

3.2 Applying NN to the Analysis of Business Performance in the Area of Genova

In this section we illustrate how to use the SOM to obtain results with high visual impact and robust significance from a technical viewpoint, overcoming the limits of the traditional performance analysis listed above.
Our dataset consists of a 7703 × 14 input matrix, where each row represents a firm settled in Genova, while the columns are composed of the indicators already introduced in Sect. 3.1 (with the exception of the number of employees and of the ratio Amort./VA), to which we added new indicators: revenues, value added, wages, amortization, EBIT, interest on debt, taxes, and net profit. In this respect, this means adding complexity to our analysis. Nevertheless, since our aim is to explore the intrinsic nature of the data, i.e. their hidden features, we think that in this way it is possible to offer a more complete picture of the situation of the companies located in the area of Genova.
Before running the SOM, the data in each column were pre-processed and rescaled according to the formula $\frac{c - \min_c}{\mathrm{range}_c}$, where c is the column data, $\min_c$ is the minimum observed in the column, and $\mathrm{range}_c$ is the column range of values. We then tested

Table 3 Indicators frequency distribution: situation of companies in the ATECO sectors 10–33, 35–44, 45–99

F/K     | >80 % | 80–60 % | 60–40 % | 40–20 % | <20 % |
10–33   | 37    | 58      | 95      | 150     | 343   |
35–44   | 94    | 49      | 60      | 154     | 628   |
45–99   | 976   | 573     | 570     | 852     | 3021  |
N/K     | >66 % | 66–50 % | 50–40 % | 40–25 % | <25 % |
10–33   | 70    | 68      | 64      | 123     | 358   |
35–44   | 121   | 71      | 60      | 126     | 607   |
45–99   | 1009  | 516     | 427     | 836     | 3204  |
ROE     | >100 % | 100–50 % | 50–20 % | 20–10 % | <10 % |
10–33   | 34    | 28      | 103     | 73      | 445   |
35–44   | 41    | 74      | 129     | 95      | 646   |
45–99   | 285   | 408     | 824     | 655     | 3820  |
ROA     | >100 % | 100–50 % | 50–20 % | 20–10 % | <10 % |
10–33   | 0     | 8       | 38      | 68      | 569   |
35–44   | 1     | 10      | 62      | 96      | 816   |
45–99   | 4     | 96      | 386     | 600     | 4906  |
Employees | >100 | 100–20 | 20–5 | 5–1 | 0 | n.d.
10–33   | 19    | 112     | 283     | 172     | 87    | 10
35–44   | 11    | 53      | 267     | 241     | 399   | 14
45–99   | 74    | 357     | 1293    | 1808    | 2304  | 156
Wages/VA (VA > 0) | >50 % | 50–20 % | 20–10 % | 10–5 % | <5 % | n.d.
10–33   | 357   | 170     | 17      | 3       | 70    | 0
35–44   | 320   | 177     | 26      | 10      | 232   | 0
45–99   | 2116  | 961     | 139     | 59      | 1612  | 6
Amort./VA (VA > 0) | >50 % | 50–20 % | 20–10 % | 10–5 % | <5 % | n.d.
10–33   | 39    | 88      | 147     | 149     | 194   | 0
35–44   | 66    | 99      | 85      | 152     | 363   | 0
45–99   | 628   | 815     | 801     | 873     | 1776  | 0
VP/emp. (VP > 0) | >1 mln | 1–0.4 | 0.4–0.2 | 0.2–0.05 | 0.05–0 | n.d.
10–33   | 17    | 61      | 109     | 327     | 72    | 80
35–44   | 21    | 32      | 76      | 355     | 87    | 317
45–99   | 188   | 435     | 649     | 1671    | 587   | 2070

different map dimensions, in search of those assuring the best topographic error (TE). In this respect, we now discuss the results obtained with a 20 × 20 SOM, which led to a TE below 0.0002 (very close to zero).
Figure 3 shows the overall SOM we obtained.
Fig. 3 The overall 20 × 20 SOM

Fig. 4 SOM component planes

Assuming the coding conventions already described in Sect. 2, we observe that there is a pattern of neurons having a very similar color, varying from deep blue to lighter blue: this means that a very high number of companies share similar performances that position them in the low range of the performance scale. We can also note that there are three smaller areas whose color (yellow-orange-red) suggests that they account for medium/high performance values.
In order to better understand the performance determinants and the role of each performance indicator, we now display (Fig. 4) the SOM component (indicator) planes. We can observe that, although from Fig. 3 the general performance level of many companies looks very similar, the same does not apply to all the performance indicators, which show different values and hence lead to a more significant clustering of the companies.

4 Conclusion: Findings, Research Limits and Further Work

In this paper we presented a dashboard using unsupervised neural networks (NN) to analyze business performance at the infra-city level. In our intention this should be the starting point for extending the use of unsupervised NN (namely Self-Organizing Maps—SOM) to examine relevant datasets including not only business performance but also social, demographic and more general well-being indicators at the infra-city level, to discover the situations of greatest distress and face them with appropriate local policies. Indeed, micro-territories are an important scale at which to detect threats and opportunities, and to apply specific policies to support local economies and address specific neighborhood needs. However, so far neither scientific research nor the institutional surveys by national or supranational organizations such as the OECD, UN and EU have been applied at the micro level.
The most interesting finding deriving from our pilot case is the capability of the SOM to process a very large set of heterogeneous data, not necessarily linked by preconceived relations, discovering hidden structures in the data.
At the present stage, the main limitation of the results we have obtained derives from the difficulty of recovering, at the "microscopic" level, the data to be taken into consideration. Indeed, it was not possible to link the companies' financial data to social and demographic data referring to the area of Genova, as the latter are not available at present; however, we are confident that the latest population census data will be able to provide us with all the required information.
Further research efforts will therefore include the design of a comprehensive dataset, merging information regarding different aspects of citizens' well-being in infra-city territories. In order to choose the best well-being indicators, a possible solution could be to refer to the most recognized international models, such as those in the already cited OECD well-being measurement framework. The use of NN to explore such a large and heterogeneous dataset will be useful for clustering territories and their weaknesses, so as to support well-addressed territorial policies.

References

1. OECD: OECD Regions at a Glance. OECD Publishing, Paris (2013)
2. OECD: Regions and cities: where policies and people meet. In: Fifth Roundtable of Mayors and Ministers, Marseille, France, 5–6 Dec 2013
3. OECD: How's Life? 2013: Measuring Well-being. OECD Publishing, Paris (2013)
4. Abraham, M.: Data from the block: inclusive growth requires better neighborhood-level
information. In: Changing the Conversation on Growth, Second OECD/Ford Foundation
Workshop, New York, 27 Feb 2014
5. Ferro, E., Sorrentino, M.: Can intermunicipal collaboration help the diffusion of e-government
in peripheral areas? Evidence from Italy. Government Information Quarterly 27(1), 17–25
(2010)
6. Budayan, C., Dikmen, I., Birgonul, T.: Strategic group analysis by using self organizing maps.
In: Boyd, D. (ed.) Proceedings of 23rd Annual ARCOM Conference, Belfast, UK, Association
of Researchers in Construction Management, pp. 223–232, 3–5 Sept 2007
7. Serrano-Cinca, C.: From financial information to strategic groups: a self-organizing neural
network approach. J. Forecast. 17(1), 415–428 (1998)
8. Noyes, J.L.: Artificial Intelligence with Common Lisp: Fundamentals of Symbolic and
Numeric Processing. D.C. Heath, Lexington, MA (1992)
9. Mohri, M., Rostamizadeh, A., Talwalkar, A.: Foundations of Machine Learning, The MIT
Press, Cambridge (2012). ISBN 9780262018258
10. Kaelbling, L.P., Littman, M.L., Moore, A.W.: Reinforcement learning: a survey. J. Artif.
Intell. Res. 4, 237–285 (1996)
11. Kohonen, T.: Self-organized formation of topologically correct feature maps. Biol. Cybern. 43
(1), 59–69 (1982)
12. Villmann, T., Der, R., Herrmann, M., Martinetz, T.M.: Topology preservation in
self-organizing feature maps: exact definition and measurement. IEEE Trans. Neural Netw.
8(2), 256–266 (1997)
13. Subramanyam, R., Wild, J.J.: Financial Statement Analysis, 11th edn. McGraw-Hill/Irwin,
New York (2013). ISBN-10:0078110963
Design of Pre-emptive Customer
Experience Management Systems
for Mobile Broadband Communications
Service Providers

Daniel Delibes Rodriguez and Penny Hart

Abstract Pre-emptive Customer Experience Management systems are designed to find out automatically and proactively the problems of end-users inside mobile broadband (MBB) communications service providers (CSPs). They analyze end-user behavior and find end-user-related problems before the users realize them themselves, making end-users feel more satisfied with the CSP's services. CSPs, in turn, benefit from the satisfaction that these systems provide to their end-users because it increases loyalty. This paper explores the design and implementation of such systems, their benefits for mobile broadband communications service providers, and how they should be designed and implemented.

Keywords Mobile broadband · CEM · Customer experience · Loyalty · Retention · Churn · Quality of end-user experience

1 Introduction

The importance of Customer Experience Management (CEM) systems is growing inside mobile broadband operators. CEM is not only a technology system but rather a whole business concept. The main motivation is to improve the quality of the services offered to end-customers. CEM became a need for CSPs in mature markets when the MBB industry reached saturation and CSPs needed to retain their customers by reducing churn and increasing their loyalty [1, 2]. That is why they are moving from network-centric performance management towards a more customer-centric approach. This transformation increases customer satisfaction,

D. Delibes Rodriguez (✉) · P. Hart
School of Computing, University of Portsmouth, Portsmouth, UK
e-mail: daniel.delibesrodriguez@myport.ac.uk
P. Hart
e-mail: penny.hart@port.ac.uk


loyalty and retention and reduces customer attrition (churn). Therefore, this is a key element for improving their business and profitability.
There are several types of CEM systems; some are based on the Voice of the Customer (VoC). Those systems collect feedback from the operator's end-customers through direct and indirect questioning. However, those systems present two main drawbacks: one is that they are too slow, since CSPs need to wait to learn the opinion of their customers; the other is that they do not consider the whole end-user population but only a group of it.
Pre-emptive Customer Experience Management systems collect customer experience data gathered from the operator's network and IT systems rather than directly from the end-users. This method is faster and more efficient, as it does not need to wait for the customers' feedback, and it takes into account the totality of the end-user data. Because of this, operators benefit from increased customer satisfaction rates, higher customer loyalty and therefore lower churn.

2 Background

Customer Experience, as a definition, is the sum of all experiences and perceptions, both conscious and subconscious, that a customer has with a supplier of goods and/or services over the duration of their relationship with that supplier and with its brand, resulting from all the interactions with its customers during the customer life cycle. It is the internal and subjective response that customers have to any direct or indirect contact with a company [3, 4].
Today, connected consumers want on-the-go, anywhere access to advanced applications and rich multimedia services. Many of them are smartphone users and are using more and more Over-The-Top (OTT) content. Because of this, CEM is becoming increasingly important for mobile CSPs, especially in mature markets. The main reasons are economic:
• mobile broadband is becoming a commodity;
• the existing mobile network infrastructures are becoming more and more complex (e.g. with the introduction of LTE technologies).
All these factors drive mobile CSPs to find other ways to increase their revenues; therefore they are increasingly focused on enhancing their customers' experience in order to retain the subscriber base and raise their ARPU [5–8]. This is why mobile CSPs are becoming more customer-centric.
From a technology perspective, other types of broadband devices, including smartphones, tablets, wireless routers, etc., are gaining popularity. The growth in mobile data services creates the need for solutions to help manage all this complexity [9]. Some evidence from the market shows that successful 4G CSPs

approach LTE as a platform to enhance the customer experience rather than to generate new revenues by charging more for access to the network [10].
Mobile CSPs do not want to lose customers, not only because of the consequences of losing market share but also because it is more expensive to acquire new customers than to retain existing ones [1, 2, 11].
As competition has increased and operators' services have become more complex, end-users are focusing more on quality. This is the reason why they are now more likely to churn [7]. Smartphone users are the ones with the highest churn probability [9, 12, 13].
In Mobile CSPs, existing CEM metrics like the Net Promoter Score (NPS) and the
traditional network Key Performance Indicators (KPIs) in the CSP's Network
Management Systems (NMS) cannot measure the E2E Customer Experience;
CEM systems are therefore new systems developed by operators to accurately
determine, track and monitor the end-user experience [5].
In order to measure the customer experience, some CSPs are grouping network
KPIs and mapping them to KQIs in order to provide a better view of the actual user
experience [9]. This is called QoE modelling: a Top-Down QoE/KQI/KPI
evaluation model that allows Mobile CSPs to quickly identify end-user
performance impairments and do fast troubleshooting. QoE modelling needs to be
developed incorporating the customer experience with an aggregated Per-Service,
Per-User (PSPU) orientation [14–16].
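As an illustration of such Top-Down modelling, the following minimal sketch shows how raw network KPIs could be normalized and aggregated into a per-service, per-user KQI; the KPI names, weights and thresholds are assumptions made for illustration, not values from the cited models.

```python
# Illustrative sketch of Top-Down QoE modelling: network KPIs are
# normalized and aggregated into a per-service, per-user KQI.
# KPI names, weights and thresholds below are assumed for illustration.

def normalize(value, worst, best):
    """Map a raw KPI onto a 0..1 quality scale, clamped at the ends."""
    score = (value - worst) / (best - worst)
    return max(0.0, min(1.0, score))

# Assumed weighting of KPIs inside one KQI (e.g. "web browsing quality").
KQI_MODEL = {
    "web_browsing": {
        "dns_latency_ms":  {"worst": 500.0, "best": 20.0, "weight": 0.2},
        "throughput_mbps": {"worst": 0.5,   "best": 20.0, "weight": 0.5},
        "packet_loss_pct": {"worst": 5.0,   "best": 0.0,  "weight": 0.3},
    },
}

def kqi_score(service, kpis):
    """Aggregate raw KPI samples for one user session into a KQI in 0..1."""
    model = KQI_MODEL[service]
    return sum(spec["weight"] * normalize(kpis[name], spec["worst"], spec["best"])
               for name, spec in model.items())

# Per-user, per-service evaluation: a low KQI flags an experience
# impairment for this subscriber even if network-wide KPIs look healthy.
print(kqi_score("web_browsing",
                {"dns_latency_ms": 120, "throughput_mbps": 4.0,
                 "packet_loss_pct": 0.5}))
```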
Mobile CSPs require a more aggressive type of monitoring system than the
traditional ones in order to accurately determine and manage the customer experience
and to be able to react faster to network problems or poor customer experience
[11, 12, 17, 18].
Pre-emptive CEM systems should be designed so that issues that negatively
affect end-users can be corrected proactively, sometimes even before the customer
realizes there is a problem [12].
At the moment only a few Mobile CSPs have implemented CEM systems, and
many even lack a proper strategy for implementing one [19].
CEM is a holistic approach in which business objectives lead systems and process
integration in a customer-centric model, and success can only be assured with such
an all-inclusive approach. There is no standard CEM solution valid for all
CSPs: each implementation strategy has to be a customized solution for each CSP
[5, 8, 13, 20, 21].

3 Methodology and Primary Data

This study used qualitative research methods to examine different industry cases
and developments and expert interviews, comparing those results with the
conceptual framework of CEM.

Research Design
The reality is that, at the moment, only a few Mobile CSPs have implemented CEM
systems, and many lack a proper strategy for implementing one. Customer
Experience Management systems are in continuous evolution, and there are
different design and implementation approaches. Moreover, those systems need to be
integrated with the existing CSP's networks and information systems, which
requires a high level of customization. Due to the small number of implementations,
the wide variety of approaches and the complex customizations, competence in this
area is concentrated in a very small number of experts.
Research Questions
The main research questions of the study were the following:
• What are CEM pre-emptive systems? How can Mobile CSPs benefit from CEM
systems?
• What are the key differences between pre-emptive CEM systems and other types
of non-pre-emptive CEM systems, like VoC systems? What are the advantages
and disadvantages of pre-emptive CEM systems compared to other types of
systems?
• What are the processes, methodologies and metrics to take into account in
the design of pre-emptive Customer Experience Management systems? What
are the main design challenges of these systems?
• How is CEM introduced in CSPs' organizations?
Design Components
The data used in this study was mainly collected from case studies and interviews
with experts. Those case studies and expert interviews came from the ICT
industry, i.e., companies providing services to CSPs, system suppliers,
telecommunications vendors, etc.
Sampling:
As this is a qualitative research study focused on expert interviews, the
number of interviews and case studies was very limited; the use of sampling
therefore had little significance here. Data was collected from the selected
interviewees and related case studies. Those case studies and groups of people
came from:
• Companies providing services to CSPs (i.e., consulting services)
• System suppliers (i.e., companies that develop and supply monitoring systems
for CEM)
• Telecommunications vendors (i.e., companies that develop Mobile CSPs'
telecommunications infrastructure and also monitoring systems)
The groups of people, cases, etc. were selected from the same industry area and
focus (criterion sampling) in order to make a comparable case selection. Another
criterion for selecting the groups of people and persons for the interviews was
choosing those who could provide the best and richest information (intensity
sampling).
Intended comparison and generalization level:
As this is a qualitative research study, the generalization level was very low.
Data Collection
As highlighted previously, experts in this field are scarce and solutions are
not standardized. That is why qualitative research was selected as the research
method for this study: the goal is to investigate a phenomenon that is not
quantifiable and requires an in-depth understanding of human behavior and of the
reasons that govern such behavior.
Because of the type of research, unstructured questions were used, since
we wanted the interviewees to provide their own views on the topic being researched.
There was no questionnaire; instead, a framework on the topic of
research was defined within which the interviewee was allowed considerable
freedom. The predominant types of questions were verbal and open ones.
As experts in this field are spread all over the world, data collection for this
study was done in the form of telephone interviews or video conferences (e.g., using
Skype). Recordings were made in order to avoid forgetting answers, comments,
etc. Notes were also taken, as part of the observation, to collect other relevant
information such as the attitude of the interviewee during the interview, confidence in
the subject and when answering the questions, commitment, openness, doubts, etc.
Data coming from documents was used to complement the data collected via
interviews. However, the information contained in documents might fail to
address exactly the topic of the research, while interviews are time-consuming and,
if the data is not critically analyzed, might produce inaccurate results (e.g., bias in
interviews or irrelevant questions).
Analysis
In order to categorize the different items, a coding system was developed
to label, classify and compare the data and the answers from the interviews.
Triangulation was used in the sense that not only the data from the interviews was
used, but also the notes taken during those interviews, the data from the case studies,
observations, etc. This data was also taken apart, coded and categorized with the aim
of establishing comparisons. The coding system was given a hierarchical
structure based on the hierarchical schema of the interview questions. To
analyze all the data collected, the answers from each interviewee were mapped to
the corresponding question, and each question was mapped to one of the research
questions defined at the beginning of this study. All of this was put into a table
structure in Excel, where pivot tables made it easy to compare the
answers from the different interviewees to the same questions and to observe the
differences (Table 1).

Table 1 Example of question and answer labeling and comparison

Question (Q3_4): What is the CSP's satisfaction level with CEM systems after the implementation?
I1: They realize when they are so keen or smart to have a really good DB to monitor and to be able to compare…
I2: They are very happy if the system is implemented properly
I3: Most operators are satisfied but a number of operators do not know what can be done with the systems after the implementation…
I4: Here is the cornerstone question. The point is really that you need to show constantly the value, because the value is evident to you but not to the others…
I5: So there is a difference of what they were the early approach of OSS approaches to expand into quality…
I6: The toughest part is to make CSPs start using it after implementation…
I7: Basically they move from being completely upset or worried about the money to being happy because they start to get the information they requested before
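As an illustration, the comparison step could be reproduced with a few lines of pandas, as in the following sketch; the question code, interviewee labels and answer codes are illustrative stand-ins for the actual hierarchical coding scheme.

```python
# Minimal sketch of the comparison step described above, using pandas.
# The question code (Q3_4), interviewee labels (I1..) and answer codes
# are illustrative; the study's actual coding scheme is hierarchical.
import pandas as pd

coded = pd.DataFrame([
    {"research_q": "RQ4", "question": "Q3_4", "interviewee": "I1", "code": "satisfied_conditional"},
    {"research_q": "RQ4", "question": "Q3_4", "interviewee": "I2", "code": "satisfied"},
    {"research_q": "RQ4", "question": "Q3_4", "interviewee": "I3", "code": "satisfied_conditional"},
    {"research_q": "RQ4", "question": "Q3_4", "interviewee": "I7", "code": "satisfied"},
])

# Pivot: one row per question, one column per interviewee, so answers
# to the same question can be compared side by side.
pivot = coded.pivot_table(index=["research_q", "question"],
                          columns="interviewee", values="code",
                          aggfunc="first")
print(pivot)
```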

4 Discussion

4.1 Benefits of Pre-emptive CEM Systems for CSPs and Differences Compared to Other Types of CEM Systems

In order for CSPs to become Customer-Centric they need to understand their
end-users, and for that they need to collect as much information about them as
possible. CSPs need automatic processes to collect feedback from the end-users.
There are already other types of CEM systems on the market based on VoC.
However, those systems are insufficient because they are too slow in collecting
feedback from the end-users and because they are part of a reactive process of
complaint management instead of a proactive one. In a customer-centric
organization, speed is vital and the processes need to be proactive and preventive
rather than reactive. It is essential to predict problems and to react even before the
end-users become aware of them.
CSPs' end-users do not connect through only one type of device but through
different types, like Smartphones, tablets, etc. This creates many different
channels for the same end-user, and CSPs need to take them into account when
designing CEM systems.

4.2 Design of Pre-emptive CEM Systems and Their Main Challenges

Those systems have to collect data from all the end-users and their services in
real time, and they have to be able to process all this data in real time as well. CSPs
need to analyze not only the performance data for all subscribers, services and
terminals but also behavioral data. Another big difference of pre-emptive systems
compared to other systems like VoC is that they collect a large amount of data about
each subscriber directly from the systems and network components of the CSP. The
data extracted from those systems is objective, quantifiable and therefore
measurable. With this data, coming from each subscriber, it is possible to categorize it,
build user profiles and groups according to different criteria, and benchmark it.
Pre-emptive CEM systems generate a huge amount of data that needs to be
stored and post-processed. They therefore require powerful architectures with Big
Data components able to collect hundreds of gigabits per second in real time
and to store several hundreds of terabytes of information. Without a flexible
architecture based on clouds, or scalable architectures based on blades, this could
pose a problem for certain CSPs wanting to adopt CEM, due to lack of space availability.
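A back-of-envelope calculation makes these orders of magnitude concrete (the 100 Gbit/s sustained capture rate is an assumed figure within the range quoted above):

```python
# Back-of-envelope sizing, assuming a sustained capture rate of
# 100 Gbit/s (an illustrative figure within the range quoted above).
capture_gbit_s = 100
bytes_per_s = capture_gbit_s * 1e9 / 8          # 12.5 GB/s
raw_per_day_tb = bytes_per_s * 86400 / 1e12     # ~1080 TB/day of raw traffic
print(f"{raw_per_day_tb:.0f} TB of raw traffic per day")
# Even keeping only ~0.1% as per-session metadata still yields ~1 TB/day,
# which is why elastic, scalable storage architectures are required.
```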
The other big challenge in the design of those systems, apart from the huge
amounts of data generated and the huge amount of physical storage needed, is where
and how to collect the end-user related data. In traditional network systems,
end-user related data was not extracted because it was not considered important:
network systems traditionally extracted data for network performance
purposes rather than for end-user experience purposes. The difference is big, because
traditional systems aggregated the network data, whereas now each end-user
session and its content matters. There is therefore a need for special
systems that extract such data directly from the end-users. There are many systems
in an operator that collect user-related data, like CRM, billing systems, etc.
However, the level of detail of such systems is not high enough. Those systems are
still important and can be integrated into the whole CEM system, but the content
of the end-users' sessions, location-related information, etc. are still needed.
There are several ways to gather this data, but the most common one is
network probes, which collect the end-user related information from the network
interfaces of the CSP. The closer they are to the end-user, the more accurate
and detailed the information provided by those systems. Therefore, the best
source of information is the end-user terminal itself. These are called end-user
"agents": software components installed on the end-user terminal. The
agents collect the end-user related information and send it through the network to
the CEM systems, which collect all this information coming from the end-users.
However, the end-users need to agree to have such a software component on their
terminal gathering very detailed information about them. On top of that,
these software components tend to decrease the processing capacity of the terminals
and can potentially end up being a source of bad quality of experience for the
end-user. That is why probe systems are the most popular ones among CSPs for
collecting data from the end-users: they are not intrusive, and CSPs do not
need the explicit consent of the end-user.
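As an illustration of the kind of end-user related data such probes or agents could emit, the following sketch defines a hypothetical per-session record; the field names are assumptions, not any vendor's schema.

```python
# Illustrative per-session record that a network probe or terminal agent
# might emit; the field names are assumptions, not a vendor schema.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SessionRecord:
    subscriber_id: str            # pseudonymized identity from the core network
    service: str                  # e.g. "web_browsing", "video_streaming"
    terminal_type: str            # device model, enables per-terminal analysis
    cell_id: str                  # coarse location from the radio interface
    start_ts: float
    end_ts: float
    bytes_down: int
    bytes_up: int
    setup_time_ms: float          # e.g. time to first byte of the session
    failure_cause: Optional[str] = None   # set when the session failed

rec = SessionRecord("sub-0001", "web_browsing", "smartphone-X", "cell-42",
                    1700000000.0, 1700000012.5, 512_000, 48_000, 180.0)
print(rec.service, rec.setup_time_ms)
```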

4.3 Metrics

There is no unified or standard set of metrics for pre-emptive CEM systems.
CEM is a transversal, cross-functional concept that involves the whole
organization with all its departments. Furthermore, CEM systems are not standard but
are designed according to the strategy of each CSP. The metrics will therefore
depend very much on what is intended with the CEM system and on which departments
within the organization will use them. Those metrics can be tailored to each
department and to the type of operator. A CSP could use not just one set of metrics
but several at the same time: there can be metrics for benchmarking
purposes, for business purposes, for network quality purposes, etc. Those end-user
oriented metrics, built from the raw data (KPIs) of other system components, are normally
KQIs, which can also be mapped or transformed into other, more marketing-oriented
ones like, for instance, NPS (Table 2).

4.4 Introduction of Pre-emptive CEM Systems in the CSP's Organization

CEM is a holistic, E2E concept and therefore affects the whole CSP's
organization. The introduction of CEM systems has to follow a Top-down
approach, and for that it is essential to get top management involved.
Without the involvement of the CSP's top management, it is not possible to
introduce CEM in the organization. After getting top management on board, the
involvement of the different departments in the CSP's organization is needed. The
reason is that CEM is not a system aimed at the operations department only, as
the traditional NMS systems were. CEM is something for the whole
organization and also brings huge value to the sales, marketing and customer service
departments. The design of such systems has to take into account the
different requirements of the different departments that will use the system.
The same system can serve different persons in different departments through
different views and different metrics designed for them.

Table 2 Difference between NMS and CEM systems (adapted from [14])

              Network management system (NMS)      Customer experience management system (CEM)
Orientation   Network oriented                     End-user oriented
Focus         Network element/system performance   Per user & per service performance (E2E)
Metrics       KPI dominated                        QoE KQI dominated
Analysis      Bottom-up approach                   Top-down approach

CEM is not a technical monitoring and troubleshooting tool for the CSP's networks.
CEM is mainly a powerful business tool that helps top management and the whole
organization to gain a deeper insight into its customers and to understand them better:
to understand their behavior, their interests and what they experience when using the
CSP's services. It helps to identify areas of improvement in the network, services,
types of users, terminals, etc. In order to attract the attention of top management,
those advantages have to be highlighted. CEM helps retain the CSP's customers by
increasing their loyalty through improving their experience of, and satisfaction with,
the CSP's services.
CEM is a transversal, cross-functional concept that breaks the traditional silos
of the operators. CEM is a concept the whole organization should be aware
of and use. If an operator wants to become more customer-centric, then the
whole organization has to move in that direction. However, this transformation,
when introduced inside the operator, should not be presented as a revolutionary
concept. This is a key issue for introducing CEM in a CSP. CEM does not replace or
change the existing departments, systems and tools. It should not
be seen as revolutionary in the sense of changing the existing
organization, systems, etc., but rather as an evolution or transformation concept.
The goal of CEM is not to change the existing setup, and this is not a requirement for
its introduction. CEM is a layer above the organization that makes the whole
organization become customer-centric without changing the existing
setup. The traditional NMS monitoring systems will still be used
for network performance, configuration and fault management; in addition, it is
very likely that during the introduction of CEM the NMS will be integrated with the
CEM system. CEM is not a standard approach and it will not arouse the interest of all
CSPs. The more quality- and customer-oriented CSPs will be the ones
with the highest chances of getting such systems implemented, while CSPs that focus
their strategy on low-price services might not show much interest in them. Before trying
to introduce CEM in a CSP, it is very important to understand the business
strategy and market position of the operator: in which position it is now and where it
wants to be in the future.
By now, CSPs are becoming aware of the benefits of CEM and are slowly
implementing it. However, the big implementation costs, the complexity of making
its benefits visible (especially at the very beginning), and the fact that its
holistic approach has to be understood by the whole company and not only by a few
departments make the introduction of CEM move very slowly. Another
problem is that the large network equipment vendors tried to introduce CEM
systems using a Bottom-up approach instead of the Top-down approach that
those systems require. However, as the author of this paper believes, along with
the majority of the interviewed experts on the subject, the fact that making the
benefits of CEM visible is difficult does not mean that the benefits are
not there. One way or another, CEM will eventually be introduced, at least in the most
customer-aware operators. The benefits of CEM are and will be visible in the future
not only for the operators but also for their customers. The ultimate goal is to
increase the satisfaction of the end-users and gain their loyalty. The aim of the author
of this paper was to find out what the whole concept of CEM is and what its
benefits are (Fig. 1).

Fig. 1 CEM Business Processes (adapted from [12])

5 Conclusion

CEM is not just a new technical system for MBB CSPs. Rather, it is a
whole business concept that MBB CSPs need to introduce in their organizations in
order to become more customer-centric. The aim and benefit of CEM is to increase
loyalty and reduce churn by increasing customer satisfaction through
improving customers' perceived experience of the CSP's services.
Pre-emptive CEM systems need to be proactive and real-time rather than
reactive and slow. The goal is to find out about problems even before the end-users
become aware of them. This is the main difference compared to VoC systems.
The key to the success of pre-emptive CEM systems is gathering detailed end-user
information and the content of their sessions. This allows CSPs to obtain a deeper
insight into their customers and to understand them better. The closer those
systems get to the end-user, the better and more accurate the information they will
obtain. CEM systems can be integrated with other systems like CRM, billing, etc.
However, in order to obtain the end-user content, probe systems are the most useful
ones, because they gather a lot of information from the end-users without being
intrusive. The main problem of pre-emptive CEM systems is the huge amount
of data storage they need; the technical architecture of the systems must be
flexible and scalable to overcome space limitations in CSPs.

The design and introduction of such pre-emptive CEM systems have to be
Top-down rather than Bottom-up. In order to introduce such a system in a CSP, the
attention and commitment of top management are essential. CEM systems are
transversal and cross-functional and involve many different departments in the
organization. There is no standard way to design a CEM system; it will depend very much
on the business strategy of the CSP. The metrics and views of the system have to be
customized to the different needs of the organizations and their departments.
CEM is not a revolution for CSPs but rather an evolution. CEM systems do not replace
or change the existing organization, tools, etc., but rather create a new layer on top
that helps CSPs become more user-centric and therefore increase their revenue.

References

1. Newell-Legner, R.: Understanding Customers
2. Bain & Company
3. Meyer, C., Schwager, A.: Understanding Customer Experience. Harvard Business Publishing, Boston, MA (2007)
4. Shaw, C., Ivens, J.: Building Great Customer Experiences. Palgrave Macmillan, New York (2002)
5. Yankee Group: Truly Managing the Customer Experience (2012)
6. Epitiro: Quality of Experience Measurement (2011)
7. NSN: Acquisition & Retention Study Report (2013). http://nsn.com/system/files/document/acquisition___retention_white_paper.pdf
8. Astellia: White Paper: Driving Customer Experience. www.astellia.com
9. IDC: Nokia Siemens Networks' New Strategy Addressing Key Customer Touch Points (2012)
10. Informa Telecoms & Media: 4G Strategy: Successful Operators Show How LTE Can Improve KPIs (2013)
11. Harris Interactive: Customer Experience Improvement Study (2011)
12. NSN: CEM 2.0: High Performance Customer Experience Management (2013). http://nsn.com/news-events/publications/unite-magazine-issue-9/cem-20-high-performance-customer-experience-management
13. Rao, B.: Managing the Mobile Customer Experience. Alcatel-Lucent. http://www2.alcatel-lucent.com/techzine/managing-the-mobile-customer-experience/ (2012). Accessed 27 Feb 2012
14. ZTE: ZTE Technologies (Vol. 14, No. 3, Issue 140): Special Topic: QoE. ZTE, June 2012. www.zte.com.cn
15. Xiaoyan, L.: Moving Beyond Traditional Network KPIs. Huawei (2011). http://www.huawei.com/es/static/HW-093296.pdf
16. NSN: Customer Experience Management Goes Beyond Managing Network KPIs. http://blogs.nsn.com/customer-experience-2/2013/01/16/do-traditional-network-kpis-tell-the-whole-story/
17. DETECON: Customer Experience Management in der Telekommunikationsbranche (2010)
18. Accenture Institute for Public Service Value. http://www.accenture.com/citizenexperience/hongkong (2008)
19. Comarch: Telco Survey: Mobile Operators' CEM Strategies—the Market Reality. https://www.comarch.com/telecommunications/resources/white-papers/mobile-operators-cem-strategies-survey/ (2013). Accessed Nov 2013
20. Geller, B., Krahn, O.: Taking Care of the Customer Experience. Alcatel-Lucent. http://www2.alcatel-lucent.com/techzine/taking-care-of-the-customer-experience/ (2011). Accessed 13 Dec 2011
21. Owens, G., Torres, C.: TechZine: Tracking the Growing Case for CEM. Alcatel-Lucent. http://www2.alcatel-lucent.com/techzine/tracking-the-growing-case-for-cem/ (2012). Accessed 5 Nov 2012
Economic Denial of Sustainability
Mitigation in Cloud Computing

Massimo Ficco and Massimiliano Rak

Abstract Cloud Computing is a large set of resources and services offered through
the Internet according to an on-demand self-service model. In particular, cloud
elasticity allows customers to scale up their applications in order to provide services
to a larger number of end-users. The provided services are charged based on a
pay-per-use business model. Under such a model, Distributed Denial of
Service attacks can be transformed into a new breed of attacks that target the cloud's
flexibility in order to inflict fraudulent resource consumption. In this paper, we
propose an approach to mitigate this new kind of threat in Cloud Computing,
which has direct effects on customer costs and not only on the service
performance perceived by end-users.


Keywords Cloud security · Service level agreement · Economic denial of sustainability · Intrusion prevention · Attack mitigation

1 Introduction

Cloud Computing is nowadays an established technology, largely adopted for all
kinds of services, which is founded on a few key concepts: the on-demand self-service
model, which enables customers to acquire and access resources without human
interaction, according to a pay-per-use business model. Following such a model, cloud
applications run on resources dynamically provided by Cloud Service Providers

(CSPs) and offer their services to end-users. By exploiting cloud elasticity, such
applications are able to self-scale, increasing and reducing the amount of needed
resources depending on the end-users' requests. On the other hand, due to their
openness to the Internet, applications are prone to cyber attacks, such as Distributed
Denial of Service (DDoS), which aim at reducing the service's availability and
performance by exhausting the resources of the service's host system (including
memory, processing resources, and network bandwidth) [1].
As presented by Francois et al. [2], a resource competition approach can be
adopted for mitigating DDoS attacks against cloud applications and services.
Additional resources should therefore be either already acquired, or acquired dynamically on
demand, in order to face peaks of load due to DDoS attacks. On the other hand,
such resources are not free, and a cyber attack could make them economically
prohibitive. Such a scenario is called 'Economic Denial of Sustainability' (EDoS) [3,
4]. The inability of the cloud service infrastructure to diagnose the causes of service
performance degradation (i.e., whether it is due to an attack or to overloading) can
be considered a security vulnerability, which can be exploited by attackers in
order to exhaust all the cloud resources allocated to satisfy the negotiated Quality of
Service (QoS).
The challenge in this research area is to mitigate attack effects by minimizing
the number of fraudulent malicious sources. In a previous paper [5], we presented a
strategy to generate patterns able to perform EDoS attacks against cloud
applications, exhibiting a stealthy behavior that cannot be distinguished from a common
user's sequence of requests. Specific detection strategies should therefore be defined,
able to analyze single client flows in order to differentiate legitimate
users from malicious users who consume a significant volume of resources in a
very short time. Limiting the impact of individual malicious sources reduces the
overall effects of an EDoS attack. However, considering the large number of clients
(several thousand) used during such assaults, and the difficulty of detecting the
anomalous behavior associated with each attack flow, a lot of time may elapse before
the attack can be stopped.
Therefore, in this paper, we propose a solution for mitigating the economic effects
of EDoS attacks against cloud applications. Although we are not able to clearly
identify the sources of an EDoS attack, the proposed approach can be used to
mitigate the effects of such an attack while all the sources
are identified and the attack is stopped by a more sophisticated countermeasure.
The proposed approach has the side effect of reducing the quality of the services
offered to end-users, but it grants the economic sustainability of the provided
services.
The rest of the paper is organized as follows:
Section 2 presents the related work in the field of detection of EDoS attacks
against cloud applications. Section 3 describes EDoS characteristics. The proposed
mitigation approach is presented in Sects. 4 and 4.1. Section 4.2 describes the
implementation of the adopted framework. Section 5 presents a short summary and
future work.

2 Related Work

Several works have proposed techniques for mitigating EDoS attacks, which can
have significant effects on the costs incurred by legitimate customers.
In order to reduce the number of application-level EDoS connection requests,
HinKhor and Nakao [6] have proposed a mitigation mechanism that requires a
proof of work from the end-user before providing new resources to the client.
According to such a mechanism, end-users have to solve a crypto-puzzle, which is
used by the server to encrypt the communication channel. However, this mechanism
can be exploited by attackers: they send a huge number of requests for puzzles
without solving them, which can lead to an exhausting DDoS attack against the
puzzle server. Moreover, additional defenses, such as sophisticated filters, should be
adopted to verify whether the incoming requests come from a legitimate user or
are generated by bots [7].
Several works have proposed overlay networks to hide the location and the
characteristics of the target application and thus prevent DoS attacks [8, 9].
However, due to indirection, the overlay routing could increase the end-to-end
latency.
Chonka et al. [10] have proposed a service-oriented traceback architecture to
protect against XML-DDoS attacks. It is used to identify the sources of the attack
and filter them. The cloud protector is a trained back-propagation neural network.
However, attackers could evade such a mechanism by launching their attack through
zombie clients with spoofed IP addresses.
The most practical detection approach consists in reviewing bills over time
to determine whether they reflect fraudulent consumption beyond the expected range.
To this end, Amazon [11] offers a Web service to monitor the provided cloud
resources. By using this service, customers can define thresholds to limit the
scaling-up of their cloud resources.
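As a hedged illustration of such threshold-based monitoring, the following sketch sets a budget alarm on the CloudWatch EstimatedCharges billing metric via boto3; the threshold value and the SNS topic ARN are placeholders, and billing metrics must be enabled for the account (they are published in the us-east-1 region).

```python
# Sketch of a threshold-based budget safeguard using the AWS CloudWatch
# billing metric via boto3. Threshold and SNS topic ARN are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="edos-budget-guard",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,              # billing metric updates a few times a day
    EvaluationPeriods=1,
    Threshold=500.0,           # assumed monthly budget in USD
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],
)
```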

3 EDoS Attack Description

As previously described, in this paper we focus on a set of attacks that rely on the
typical features of a cloud environment: on-demand self-service and resource
pooling. In particular, according to the cloud elasticity notion, Cloud Service
Customers (CSCs) are enabled to dynamically vary the amount of resources they
need. A clear example of this feature is the self-scaling solutions offered by many
Infrastructure-as-a-Service (IaaS) cloud providers: the CSC defines a simple policy
and the CSP automatically increases the amount of resources available to the
end-users, without any additional effort. A side effect of this behavior is that, since
resources are paid for depending on their usage, a wrong policy may lead to an
uncontrollable increase in costs. EDoS attacks aim at exploiting this feature,
making it economically unsustainable.

Fig. 1 Cloud application actors

In order to better describe this kind of attack, Fig. 1 presents a typical
cloud scenario, in which a cloud-based Web provider acquires virtual machines
(VMs) from a CSP (e.g., Amazon), on which it runs its customized Web server and
application. The Web application offers services to end-users and acquires the new
VMs necessary to satisfy the QoS required by the end-users when the number of
requests increases [12].
Note that such behavior can be technologically implemented in many different
ways: (i) the cloud application may use the CSP's APIs (e.g., the Amazon AWS EC2
APIs) and internally manage the amount of resources to be used (a minimal sketch of
this option is given below); (ii) it may use a service provided by the CSP itself (like
the Load Balancing service offered by Amazon), customizing the Web application to
be aware of the possible multiple instances; or (iii) it may use a dedicated PaaS
solution that hides the presence of self-scalable resources from the developer (i.e.,
from the cloud application) and automatically balances the load among the different
instances. From the end-user point of view, it is impossible to distinguish among
these kinds of solutions, but in all cases the result is the same: when the number of
requests increases, the amount of associated resources increases and the application
costs increase.
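As a hedged illustration only, the following sketch shows what option (i) could look like with the boto3 client for the EC2 API; the AMI id, instance type and region are placeholders, not values from the paper.

```python
# Minimal sketch of option (i): the application itself calls the IaaS API
# to add capacity when load grows. The AMI id, instance type and region
# are placeholders; a real deployment would also tag and register the VM.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

def scale_up(n=1):
    """Acquire n new VMs. Every call increases the bill, which is exactly
    the behavior an EDoS attack tries to trigger."""
    return ec2.run_instances(
        ImageId="ami-00000000000000000",   # placeholder web-server image
        InstanceType="t3.micro",
        MinCount=n,
        MaxCount=n,
    )
```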
The idea behind an EDoS attack is that several clients inject malicious yet
legitimate-looking requests (to the cloud application), with a modest low-rate
intensity, over an extended duration of time. The target application reacts by
increasing the amount of resources. Such a malicious attacker does not care about the
technological solutions adopted, but just about the final behavior.
An EDoS attack should respect a few simple characteristics: the attack flows
should be indistinguishable from typical end-user behavior (otherwise they would be
simply filtered out), and the attack sources should be hidden among the very large
set of users.
Note that the trivial solution of mounting a DDoS attack, starting a very large
set of malicious requests from a very large set of bot clients, has a clear
counter-effect: the amount of resources needed to perform the attack is as large as
the amount needed by the cloud application, so its cost will be very high. Therefore, in
order to perform EDoS attacks, a malicious attacker has to find a way to send a
limited set of requests that heavily affect the remote resources. In previous papers
[13–15], we proposed a pattern to build such attacks, focusing on XML-DoS
attack patterns.
In this paper, we do not present a method to detect the presence of an EDoS
attack; instead, we focus on the design of an architecture able to protect a cloud
application against this kind of threat. The solution we propose has some clear
trade-offs and can be costly. Specifically, it aims at mitigating the economic
effects, mainly in cases where the attack is a zero-day attack, i.e., an attack
that has not been seen before and is not contained in a signature database [16].
We assume that the attacker is able to inject legitimate requests that are
indistinguishable from normal traffic. Moreover, we assume that the cloud
application offers its services after a given payment, and that it will simply stop
scaling when a fixed budget is reached.

4 EDoS Protection Approach

In order to reduce the effects of EDoS attacks, we propose an approach based on the
adoption of Service Level Agreements (SLAs), combined with an Intrusion
Prevention System (IPS).
An SLA is an agreement between the CSC and the CSP [17]. It specifies the
quality of the delivered services and states the duties of both parties. Moreover, the
SLA describes both the actions a CSP has to perform when a guarantee term is not
respected and the penalties to be paid. The SLA states the guarantees through a set
of Service Level Objectives (SLOs) that quantitatively define the level of quality to
be granted. In our case study, we focus on SLAs offering an SLO that states the
availability of the service, i.e., the ability of a software application to be in a state
to provide the required function at a given instant of time.
With the adoption of SLAs, the problem can be redefined as follows: the cloud
application offers its services to end-users and grants them a fixed availability (e.g.,
it grants that every day the service will be available for at least 23 h). Moreover, the
cloud application has a penalty associated with service requests, i.e., every day during
which the granted SLO is not respected implies the payment of a fixed penalty to the
CSCs whose end-users were not able to access the service.
The introduction of SLAs has the effect of clarifying the trade-off the CSP has to
face: on the one hand, the penalties to be paid to the CSCs; on the other hand, the costs
of acquiring the resources. It is therefore possible to build a model that takes
into account both the estimated cost of scaling (i.e., the cost of additional
resources) and the cost of not scaling (i.e., the penalties), enabling the cloud
application to make an acceptable choice without penalizing the economic
capabilities of the system.
The mitigation approach consists in splitting the end-users into classes on the basis
of their IP addresses and the penalty cost defined by the SLA. Access to the
service is then denied to the class of IP addresses that most likely contains the largest
number of malicious sources. This approach allows the costs of the
infrastructure to be kept acceptable, accepting to pay the CSCs for the service
unavailability experienced by some of the end-users.

4.1 Mitigation Algorithm

The algorithm represented in Fig. 2 describes the proposed mitigation approach.
Specifically, if the QoS specified in the SLA is not satisfied, the algorithm computes
the penalty (Cexclusion) associated with each class of end-users. Then it denies
access to the class of end-users with the lowest penalty, for a time shorter than that
specified in the SLA. This process is repeated until either a class containing a large
number of malicious sources has been excluded, such that the service availability
contracted with the CSCs is satisfied, or the penalty to be paid for the exclusion of the
chosen class is greater than the cost necessary to add new resources. In particular,
assuming that CPUbase is the base price charged for the smallest amount of CPU time
allocation, and RCPU is the CPU to be added at the new allocation, CPUadded is the fee
charged if the system scales up. If no class of IP addresses to be denied is identified,
then the system is scaled up.

Fig. 2 Core algorithm orchestrating the trade-off
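As a compact illustration, the following sketch reproduces the decision logic described above using the names introduced in the text (Cexclusion, CPUbase, RCPU); the per-class penalty model and the platform callbacks are simplifying assumptions, not the authors' exact implementation.

```python
# Compact sketch of the trade-off logic of Fig. 2. The penalty model is
# simplified: excluding a class costs the sum of the SLA penalties of the
# legitimate end-users it contains (Cexclusion).

def mitigate_edos(classes, cpu_base, r_cpu, sla_satisfied, deny, scale_up):
    """classes: list of {"name": ..., "ips": [...], "c_exclusion": ...}.
    sla_satisfied, deny and scale_up are callbacks into the platform."""
    cpu_added = cpu_base * r_cpu                 # fee for one scale-up step
    candidates = list(classes)
    while not sla_satisfied():
        if not candidates:                       # no class left to exclude
            scale_up(r_cpu)
            return
        cheapest = min(candidates, key=lambda c: c["c_exclusion"])
        if cheapest["c_exclusion"] > cpu_added:  # exclusion dearer than scaling
            scale_up(r_cpu)
            return
        deny(cheapest["ips"])                    # temporary denial, kept below
        candidates.remove(cheapest)              # the SLA unavailability budget

# Toy usage with stub callbacks:
state = {"load_ok": False}
mitigate_edos(
    [{"name": "region-A", "ips": ["10.0.0.0/24"], "c_exclusion": 5.0},
     {"name": "region-B", "ips": ["10.0.1.0/24"], "c_exclusion": 40.0}],
    cpu_base=10.0, r_cpu=2.0,
    sla_satisfied=lambda: state["load_ok"],
    deny=lambda ips: state.update(load_ok=True),  # pretend denial restores QoS
    scale_up=lambda n: state.update(load_ok=True),
)
```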

4.2 EDoS Protection Framework

In order to build an IPS against EDoS attacks in the cloud, we propose an
architectural model that can be implemented by adopting commercial-off-the-shelf
software components [18].
Figure 3 shows the overall organization of an EDoS-protected cloud
application. Together with the application containers (like Tomcat J2EE or others, which
host the cloud application), we have additional software components: (i) the
self-scaling and load-balancer components, (ii) the SLA component, and (iii) the
EDoS protection system. Note that the self-scaling and load-balancer components,
as outlined before, can rely on many different technologies (offered by the provider or
by others), and we do not focus on them. The SLA component is the one able to maintain
the agreed SLAs and report the information associated with them. The EDoS
protection component is the one that decides whether or not to scale. The SLA and EDoS
protection components may reside on the same node.

Fig. 3 IPS architecture in cloud



In order to illustrate how the solution works, we focus on a self-scaling system
completely under the cloud customer's control: we can directly ask the system to
scale up or down. The proposed solution can then be integrated with
different technological solutions, such as OneFlow for a private cloud based on
OpenNebula, and the Amazon Load Balancer for the public cloud offered by
Amazon.
The SLA component is a software component that stores the SLAs agreed with the
customers and offers services that enable the cloud application to retrieve the possible
guarantees. In this example, for simplicity's sake, the SLA component assumes that a
single SLA with a single fixed policy is offered to all customers. We assume an SLA
that grants a maximum daily unavailability period, i.e., if in a day a service is
unavailable to end-users for more than the fixed amount of time, a penalty will be
paid. Note that the algorithm does not change if we assume a yearly or monthly
acceptable unavailability period. The SLA component thus provides the following
information:
• Nusers: total number of users registered with an SLA;
• Acceptable_Unavailability: the number of minutes in a day the service can be
unavailable;
• Penalty: the penalty to be paid if the service is not available for more than the
granted period; and
• Cost: the daily rate paid by each end-user.
Our EDoS protection system is founded on a master-slave architecture: agents
distributed over the systems monitor and collect information from local log files and
security probes. The protection server collects this monitoring information and
controls the load-balancer/self-scaling component. The monitoring agents collect
the following state information from each node:
• CPU_usage[i]: percentage of CPU usage in the last 10 min on node i; and
• Mem_usage[i]: percentage of free memory in the last 10 min on node i.
Moreover, the agents collect the following information from the service logs:
• Time_out_services_Minute[x, y]: number of services ended in timeout in the
interval from x minutes ago to y minutes ago (with x > y);
• Time_out_services_Totale[i]: number of services ended in timeout during the
day;
• Users_IP: the IP of each user logged during the day on the system; and
• Actual_Users_IP: the IPs of the end-users currently invoking the services.
The EDoS protection system continuously gathers the IP information and tags
it on the basis of the IP location. Both historical and instantaneous
information are collected. The system maintains the list of the daily locations (i.e.,
the locations that have at least one active IP) together with the number of
daily and currently connected IPs.
In order to implement the proposed algorithm, we adopted the following
approach: we activate the algorithm only if the cost of the resources is higher than
half of (cost_paid_by_user)*(num_users). Moreover, we adopt the following scaling
policy: we acquire new resources when the CPU_usage and MEM_usage of all nodes
have been higher than 95 % in the last 10 min and Timeout_Services_Minute[10, 0] is
higher than Timeout_Services_Minute[20, 10]. In other words, we scale only if all
the resources are busy and the services are about to become unavailable.
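The activation condition and the scaling policy just described map directly onto code. The following sketch mirrors the monitored quantities listed above; it treats MEM_usage as a utilization percentage, consistently with the 95 % policy.

```python
# Sketch of the activation condition and scaling policy described above.
# Inputs are assumed to come from the monitoring agents; MEM_usage is
# treated as utilization, consistently with the 95% threshold policy.

def algorithm_active(resource_cost, cost_paid_by_user, num_users):
    """Run the mitigation logic only once resource costs exceed half of
    the daily revenue from the registered end-users."""
    return resource_cost > 0.5 * cost_paid_by_user * num_users

def should_scale(cpu_usage, mem_usage, timeouts_last_10, timeouts_prev_10):
    """Scale up only if every node is saturated AND timeouts are rising,
    i.e. the services are about to become unavailable."""
    all_busy = all(c > 95.0 for c in cpu_usage) and \
               all(m > 95.0 for m in mem_usage)
    return all_busy and timeouts_last_10 > timeouts_prev_10

print(should_scale([97.0, 98.5], [96.0, 99.0], 12, 7))   # True
```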

5 Conclusions

In this paper, we proposed an approach to mitigate the effects of EDoS attacks
against cloud applications.
In future work, we will study a more sophisticated detection strategy in order to
differentiate legitimate end-users from malicious clients. For example, based on the
kind of service provided by the cloud application, a set of characteristics that normal
client behavior should satisfy could be identified; clients that do not meet such
behavioral characteristics could be flagged as malicious. Moreover, a countermeasure
adopted to stop the attack could be based on an IP traceback scheme, which provides an
effective way to trace EDoS attacks back to their point of origin [19–21].

Acknowledgments This research is partially supported by the European Community's Seventh
Framework Programme (FP7/2007-2013) under Grant Agreement no. 610795 (SPECS), as well
as by the MIUR under Projects "DISPLAY" (PON02_00485_3487784) and "MINIMINDS"
(PON02_00485_3164061) of the public-private laboratory "COSMIC" (PON02_00669).

References

1. Ficco, M., Tasquier, L., Di Martino, B.: Interconnection of federated clouds. In: Intelligent
Distributed Computing VII, Studies in Computational Intelligence, 2014, vol. 511, pp. 243–248
2. Francois, J., Aib, I., Boutaba, R.: Firecol, a collaborative protection network for the detection
of flooding DDoS attacks. IEEE/ACM Trans. Networking 20(6), 1828–1841 (2012)
3. Baig, Z.A., Binbeshr, F.: Controlled virtual resource access to mitigate economic denial of
sustainability (EDoS) attacks against cloud infrastructures. In: Proceedings of the International
Conference on Cloud Computing and Big Data, Dec 2013, pp. 346–353
4. Kumar, M.N., Sujatha, P., Kalva, V., Nagori, R., Katukojwala, A.K., Kumar, M.: Mitigating
economic denial of sustainability (EDoS) in cloud computing using in-cloud scrubber service.
In: Proceedings of the 4th International Conference on Computational Intelligence and
Communication Networks, 2012, pp. 535–539
5. Ficco, M., Rak, M.: Stealthy denial of service strategy in cloud computing. IEEE Trans. Cloud
Comput. 13(4), 737–751 (2014)
6. HinKhor, S., Nakao, A.: sPoW: On-demand cloud-based eDDoS mitigation mechanism. In:
Proceedings of the 5th Workshop on Hot Topics in System Dependability, 2009, pp. 1–6
7. Sqalli, M.H., Al-Haidari, F., Salah, K.: EDoS-shield—a two-steps mitigation technique against
EDoS attacks in cloud computing. In: Proceedings of the 4th IEEE International Conference
on Utility and Cloud Computing, 2011, pp. 49–56

8. Beitollahi, H., Deconinck, G.: Fosel: Filtering by helping an overlay secure layer to mitigate
dos attacks. In: Proceedings of the 7th IEEE International Symposium on Network Computing
and Applications (NCA), July 2008, pp. 19–28
9. Ping, D., Nakao, A.: DDoS defense as a network service. In: Proceedings of the IEEE Network
Operations and Management Symposium (NOMS), Apr 2010, pp. 894–897
10. Chonka, A., Xiang, Y., Zhou, W., Bonti, A.: Cloud security defence to protect cloud
computing against HTTP-DoS and XML-DoS attacks. Int. J. Netw. Comput. Appl. 34, 1097–
1107 (2011)
11. Amazon CloudWatch, Amazon Website, available at http://aws.amazon.com/cloudwatch/,
May 2014
12. Yu, S., Tian, Y., Guo, S., Oliver Wu, D.: Can we beat DDoS attacks in clouds? IEEE Trans.
Parallel Distrib. Syst. 25(9), 2245–2254 (2014)
13. Ficco, M., Rak, M.: Intrusion tolerant approach for denial of service attacks to web services.
In: Proceedings of the 1st International Conference on Data Compression, Communications
and Processing (CCP), June 2011, pp. 285–292
14. Ficco, M., Rak, M.: Intrusion tolerance as a service: a SLA-based solution. In: Proceedings of
the 2nd International Conference on Cloud Computing and Services Science (CLOSER), Apr
2012, pp. 375–384
15. Ficco, M., Rak, M.: Intrusion tolerance of stealth DoS attacks to web services. In: Information
Security and Privacy, LNCS, vol. 376, pp. 579–584, 2012
16. AlEroud, A., Karabatis, G.: Toward zero-day attack identification using linear data
transformation techniques. In: Proceedings of the IEEE 7th International Conference on
Software Security and Reliability (SERE), 2013, pp. 159–168
17. Amato, A., Venticinque, S.: Multi-objective decision support for brokering of cloud SLA. In:
Proceedings of the 27th International Conference on Advanced Information Networking and
Applications Workshops, 2013, pp. 1241–1246
18. Ficco, M., Rak, M., Di Martino, B.: An intrusion detection framework for supporting SLA
assessment in cloud computing. In: 4th International Conference on Computational Aspects of
Social Networks (CASoN 2012), Sao Carlos, Brazil, Nov 2012, pp. 244–249
19. Ficco, M.: Security event correlation approach for cloud computing. J. High Perform. Comput.
Networking 7(3), 173–185 (2013)
20. Joshi, B., Vijayan, A.S., Joshi, B.K.: Securing cloud computing environment against DDoS
attacks. In: Proceedings of the International Conference on Computer Communication and
Informatics (ICCCI), 2012, pp. 1–5
21. Coppolino, L., D’Antonio, S., Formicola, V., Romano, L.: Enhancing SIEM technology to
protect critical infrastructures. In: Critical Information Infrastructures Security, LNCS, vol.
7722, no. 2013, pp. 10–21
Brokering of Cloud Infrastructures Driven
by Simulation of Scientific Workloads

Alba Amato, Beniamino Di Martino, Fatos Xhafa and Salvatore Venticinque

Abstract Cloud Computing has proven attractive for different application fields,
including scientific ones, which have already benefited from distributed
environments like the Grid. Nevertheless, the main Grid model is static, so users cannot
add or modify computational resources according to their needs. Moreover, it is
not possible to dynamically modify the resources on the basis of the real system
workload. The elastic computing and pay-per-use business model of the Cloud paradigm
have been investigated as a way to build a Grid infrastructure over virtual resources. In this
paper we propose the integrated utilization of simulation techniques and service
brokering to provide decision support to users who need to choose the best
Cloud infrastructure and provider satisfying the performance requirements of their
scientific application, whose workload is known.

Keywords Multi-agent systems · Broker · Cloud computing · Grid computing

1 Introduction

Until 20 years ago, users of computing environments could count on only a limited
set of resources, which did not allow the resolution of problems on a large scale. As a
result, and also due to the high costs of acquiring and managing large computing
systems, the idea of using non-homogeneous resources, located at different sites and
aggregated to form large distributed computing centers, began to spread. The idea of
computation thus became increasingly linked to the concepts of collaboration and
resource sharing, and we have seen the emergence of new computing paradigms and
protocols allowing interaction between distributed resources. Generally speaking, the
Grid is a hardware and software infrastructure that allows a large amount of resources
to be exploited in aggregate, providing high computing power and storage. These
resources are typically heterogeneous and geographically distributed, and they are
accessed through abstract, unitary interfaces that hide the complexity of the
multi-level infrastructure. Nevertheless, the main Grid model is static, so users cannot add
or modify computational resources according to their needs. Moreover, it is not
possible to dynamically modify the resources on the basis of the real system workload.
Another, more recent paradigm of distributed computing is Cloud Computing, which
first spread in areas other than strictly scientific ones (like Amazon and e-commerce).
From the point of view of access to the computing infrastructure, Cloud Computing
can be seen as an evolution of the Grid, since it uses web-based technologies and
hardware virtualization as the basis for the distributed computing infrastructure.
Cloud Computing also provides various levels of abstraction to identify resources,
viewed through service models such as Software-as-a-Service (SaaS),
Platform-as-a-Service (PaaS), or Infrastructure-as-a-Service (IaaS). Cloud Computing is
attracting new applications, such as scientific ones, that have benefited from
distributed environments like Grids. For these reasons, in this paper we propose an
approach that provides the flexibility of Cloud Computing while avoiding the need for
users to learn new resource access and usage models, combining the Grid and Cloud
paradigms.

2 Related Work

Both Grid and Cloud are technologies conceived to provide users with handy
computing resources according to their specific requirements. The Grid was
designed with a bottom-up approach [16]. Its goal is to share hardware or software
among different organizations by means of common protocols and policies. The
idea is to deploy interoperable services in order to allow access to physical
resources (CPU, memory, mass storage, …) and to available software utilities.
Users get access to a real machine. Grid resources are administrated by their
owners. Authorized users can invoke Grid services on remote machines without
paying and without service level guarantees. A Grid middleware provides a set of
APIs (actually services) to program a heterogeneous, geographically distributed
system. On the other hand, Cloud technology was designed using a top-down
approach. It aims at providing its users with a specific high-level functionality: a
storage, a computing platform, a specialized service. Users get virtual resources
from the Cloud; the underlying hardware/software infrastructure is not exposed.
The only information the user needs to know is the QoS of the services he is paying
for. Bandwidth, computing power and storage are parameters used for
specifying the QoS and for billing. Cloud users ask for a high-level functionality
(Service, Platform, Infrastructure), pay for it and become owners of a virtual
machine. From a technological point of view, virtualization is exploited to build an
insulated environment, which is configured to meet users' requirements and is
exploited for easy reconfiguration and backup. A single enterprise is the owner of
the Cloud platform (software and underlying hardware), whereas customers become
owners of the virtual resources they pay for. Cloud supporters claim that the Cloud
is easy to use [16], is scalable [13], and always gives users exactly what they want.
On the other hand, the Grid is difficult to use, does not give performance
guarantees, is used by narrow communities of scientists to solve specific problems and
does not actually support interoperability [16]. Grid fans answer [8] that Grid users
do not need a credit card, that around the world there are many examples of
successful projects, and that a great number of computing nodes, connected across the
net, execute large-scale scientific applications, addressing problems that could
not be solved otherwise. Grid users can use a reduced set of functionalities and can
develop simple applications, or they can get a theoretically infinite amount of
resources. As always, the truth is in the middle. Some users prefer to pay because they
need a specific service with strict requirements, and require a guaranteed QoS:
the Cloud can provide this. Many users of the scientific community look for some sort
of super-computing architecture to solve intensive computations that process huge
amounts of data, and do not care about getting a guaranteed performance level:
the Grid can provide this. But even on this last point there are divergent opinions. To
understand why Grids and Clouds should be integrated, we have to start by
considering what the users want and what these two technologies can provide. Then
we can try to understand how Cloud and Grid can complement each other, and why
their integration is the goal of intensive research activities [15]. We know that a
supercomputer runs faster than a virtualized resource. For example, an LU
benchmark on EC2 (the Cloud platform provided by Amazon) runs slower, and some
overhead is added to start VMs [9]. On the other hand, the probability of executing an
application in a fixed time on a Grid resource depends on many parameters and
cannot be guaranteed. As experimented in [9], if 400 ms is the time that an EC2
instance requires to execute an LU benchmark, the probability of obtaining a Grid
resource in less than 400 ms is very low (34 %), even if the same benchmark can take
less than 100 ms to complete. If you want to get your results as soon as possible, you
are adopting the Cloud end-user perspective. If you want to look for the optimum
resources that solve the problem, overcoming the boundaries of a single enterprise,
you are using the Grid perspective, which aims at optimizing resource sharing and
system utilization. The integration of Cloud and Grid, or at least their integrated
utilization, has been proposed in [14], since there is a trade-off between application
turnaround and system utilization, and sometimes it is useful to choose the right
compromise between them. Some issues to be investigated have been pointed out:
integration of virtualization into existing e-infrastructures; deployment of Grid
services on top of virtual infrastructures; integration of Cloud-based services in
e-infrastructures; promotion of open-source components to build Clouds; and Grid
technology for Cloud federation.
In light of the above, the integration of the two environments is a debated issue
[16]. In the current state of the art, two main approaches have been proposed:
• Grid on Cloud: a Cloud IaaS (Infrastructure as a Service) approach is adopted to
build up and manage a flexible Grid system [7]. In this approach the Grid
middleware runs on virtual machines; hence its main drawback is performance,
since virtualization inevitably entails performance losses as compared to the
direct use of physical resources.
• Cloud on Grid: the stable Grid infrastructure is exploited to build up a Cloud
environment. This solution is usually preferred [10, 12] because the Cloud
approach mitigates the inherent complexity of the Grid. In this case, a set of
Grid services is offered to manage (create, migrate, …) virtual machines.
The use of Globus workspaces [12], along with a set of Grid services for the
Globus Toolkit 4, is the prominent solution, as in the Nimbus project [23].
The integration could simplify the HPC user’s task of selecting, configuring and
managing resources according to the application requirements. It adds the flexibility
needed to exploit the available resources, but both of the approaches presented
above pose serious problems for overall system management, due to the complexity
of the resulting architectures. Performance prediction, application tuning and
benchmarking are some of the relevant activities that become critical and that
cannot be performed in the absence of a performance evaluation of Clouds.

3 Problem Statement

Here we address the execution of Grid workloads over Cloud infrastructures
according to the Grid on Cloud approach introduced in the previous section.
Because of the two layers, the optimal configuration depends on the effectiveness
of both the resource allocation strategy and the resource provisioning.
Grid scheduling, which solves the resource allocation problem, is known to be
really challenging, but it has also been well investigated. Many kinds of solutions
exist and can be used for efficient Grid scheduling, such as static approaches,
meta-heuristics, etc. On the other hand, resource provisioning is as important as the
scheduling problem: the computing infrastructure must be dimensioned to avoid
both overload and underload conditions. Intelligent resource provisioning plays a
key role in ensuring that the benefits of Cloud computing will be widely enjoyable.
In fact, Cloud computing allows the computing infrastructure to be dynamically
reconfigured according to the changing requirements of the user’s application.
However, the SLA requirements of different applications differ. Transactional
applications require response time and throughput guarantees, while the
requirements of non-interactive batch jobs concern performance (e.g. completion
times); in the case of batch jobs, resource demand can be predicted to a higher
degree [18]. On the other hand, the resource demand of transactional applications
such as Web applications tends to be highly unpredictable and bursty in nature [6].
Hence, the optimization of both users’ and providers’ objectives (the satisfaction of
complex and heterogeneous application requirements, the minimization of
violations of different types of SLAs, and the smart utilization of the Cloud
infrastructure) is far from trivial [11].
In the following sections we propose the integrated utilization of simulation
techniques, which allow the best computing infrastructure to be evaluated starting
from the Grid workload characterization and the scheduling strategy, and of
brokering techniques for resource provisioning. As we already stated in the related
work, the Cloud on Grid approach has been widely studied; in particular, many
contributions in the related work propose reusing well-known techniques to
schedule virtual resources, represented as generic jobs, over physical ones. Here,
however, we focus on the Grid on Cloud approach.
The methodology is driven by the application requirements and supports the
deployment of a specific scientific Grid workload in the Cloud.

4 Scientific Workload Simulation

The first technique we are going to integrate simulates the execution of a scientific
workload over a distributed cluster of heterogeneous computing resources. The
HyperSim simulator was developed for Grid environments [21]. The chosen
simulator is highly customizable, which allows us to specify in detail the Grid
environment we want to simulate. Another benefit of using this simulator is the
statistical information it provides with the results of each execution, which is useful
for comparing the various job-scheduling solutions proposed in this project; such
comparisons are possible because the repeatability of the output for the same initial
parameter values is guaranteed [22]. The aim is to generate Grids of different sizes
and characteristics using the simulator [20], which is very useful in practice for
studying the efficiency of different types of algorithms. For the sake of
exemplification, we used the simulator to obtain different grid-size scenarios that
are very useful to test the performance of heuristics and meta-heuristics, such as
genetic algorithms, for scheduling and resource allocation in Grid systems. Four
scenarios are considered, according to the grid size (small: 32 hosts/512 tasks;
average: 64 hosts/1024 tasks; large: 128 hosts/2048 tasks; very large: 256
hosts/4096 tasks). The web interface, available at
http://weboptserv.lsi.upc.edu/WEBGRID/, offers a simple and friendly way to
introduce, step by step, the parameters used for solving a problem instance. With
this application it is possible to remotely execute several programs that solve the
specified problem. The scheduler is an important functional component of any
distributed system. In particular, schedulers are central to large-scale distributed
systems such as Grid systems. The purpose of the schedulers is to efficiently and
optimally allocate the tasks originated by applications to a set of resources; in
general, both tasks and resources can be dynamically added to or dropped from the
system.

4.1 Independent Batch Scheduling

The scheduling problem type that can be solved by using this web application is
defined as an Independent Job Scheduling problem, in which tasks are processed in
batch mode. The main characteristic of this kind of scheduling in distributed
systems is the requirement that tasks, arranged in batches, be executed
independently on the resources [19]. Independent scheduling is very suitable to
address in Grid systems, especially in the case of verification of the security
assurance condition. The absence of dependencies among tasks makes it easier to
pre-empt or re-schedule tasks. The resource characteristics can also be better
exploited, since independent tasks vary in their computational grain. The problem
formulation
in this approach is based on the Expected Time to Compute (ETC) matrix model, in
which an instance is defined by the following input data:
• The workload vector, which defines the computational loads of the tasks in the
batch (usually in millions of instructions);
• The computing capacity of each available machine (usually expressed in
millions of instructions per second, MIPS);
• The estimation of the prior load of each available machine (expressed in terms
of the ready times of the machines);
• The ETC matrix, which defines the estimations of the times needed for
completing the tasks on the machines (each ETC entry is defined for a given
task-machine pair). The size of the ETC matrix is (number of tasks) × (number
of machines).
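
To make the ETC model concrete, the following short Python sketch builds a toy
instance from the four inputs just listed. It is purely illustrative: the function name,
the value ranges and the uniform distributions are our own assumptions, not part of
the HyperSim simulator or the web application.

import random

def make_etc_instance(n_tasks, n_machines, seed=0):
    rng = random.Random(seed)
    # Workload vector: computational load of each task, in millions
    # of instructions (MI).
    workload = [rng.uniform(1e3, 1e4) for _ in range(n_tasks)]
    # Computing capacity of each machine, in MIPS (assumed input).
    mips = [rng.uniform(100, 1000) for _ in range(n_machines)]
    # Prior load of each machine, expressed as a ready time.
    ready = [rng.uniform(0, 50) for _ in range(n_machines)]
    # ETC matrix: etc[i][j] = expected time to compute task i on machine j.
    etc = [[w / m for m in mips] for w in workload]
    return workload, mips, ready, etc

workload, mips, ready, etc = make_etc_instance(n_tasks=9, n_machines=4)
print(len(etc), "tasks x", len(etc[0]), "machines")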

4.2 Solutions Representation

There are two basic methods for representing solutions (schedules) in Grids,
namely the direct representation and the permutation-based representation.
In the direct representation, each schedule is defined as a schedule vector
x = [x1, …, xn], where n is the number of tasks and xi denotes the number of the
machine to which task i is assigned. An example of a schedule for 4 machines and
9 tasks is x = [2, 3, 1, 1, 3, 4, 2, 2, 1]. In the permutation-based representation, for
each machine a sequence of the tasks assigned to that machine is defined. The tasks
in the sequence are sorted in increasing order of their completion times. All task
sequences are then concatenated into one global vector, which is in fact the
permutation of tasks to machines. In this representation some additional
information about the number of tasks assigned to each machine is required (an
additional vector must be kept).
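
The two representations can be illustrated with a small, self-contained Python
sketch (ours, not taken from the cited tools). The per-machine sorting key used
below, the task’s own ETC entry, is a simplifying stand-in for the completion-time
ordering described above.

def evaluate_direct(x, etc, ready):
    # Direct representation: x[i] is the (0-based) machine of task i.
    # Returns per-machine completion times; the makespan is their maximum.
    completion = list(ready)
    for task, machine in enumerate(x):
        completion[machine] += etc[task][machine]
    return completion

def to_permutation(x, etc, n_machines):
    # Build per-machine task sequences, sort each by (approximate)
    # completion time, concatenate them, and keep the per-machine
    # task counts as the required auxiliary vector.
    per_machine = [[] for _ in range(n_machines)]
    for task, machine in enumerate(x):
        per_machine[machine].append(task)
    for machine, tasks in enumerate(per_machine):
        tasks.sort(key=lambda t: etc[t][machine])
    counts = [len(tasks) for tasks in per_machine]
    flat = [t for tasks in per_machine for t in tasks]
    return flat, counts

# The 4-machine/9-task example from the text, shifted to 0-based machines:
x = [1, 2, 0, 0, 2, 3, 1, 1, 0]
etc = [[(t + 1) / (m + 1) for m in range(4)] for t in range(9)]
ready = [0.0] * 4
print(max(evaluate_direct(x, etc, ready)))   # makespan of this schedule
print(to_permutation(x, etc, 4))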

4.3 Grid Users Game

Using this web application it is possible to solve the scheduling problem expressed
as a problem of optimal resource utilization from the Grid users’ perspective, under
two additional scheduling criteria: security and task abortion. The Grid scheduling
problem is formalized as a non-zero-sum game of the Grid users, who try to find the
best assignment of their batches of tasks to resources. The users’ cost functions are
interpreted as the joint costs of the secure execution of their tasks, of possible task
abortions (resulting from machine unreliability and Grid dynamics), and of the
utilization of resources. The game cost function is then minimized at the global and
local (user) levels; it is defined as an objective of the scheduling. To define the Grid
users’ game, the following setting has to be specified:
• The number of Grid users, which is the number of players in the game;
• The users’ task sets (pools), which are the players’ decision variables in the
game. The total number of tasks of all users is the total number of tasks in a
given batch;
• The users’ cost functions.

4.4 Game Scenarios

Two scenarios of the users’ game are applied in this approach:
• A non-cooperative symmetric game, in which it is assumed that Grid users
cannot cooperate with each other and that the resource usage privileges are the
same for all users (in this approach, this means that the number of tasks is the
same for each user).
• A Stackelberg game, which is an asymmetric two-level game in which one
selected player has privileged access to resources. This player is called the
Leader, and he is responsible for computing a planning of his tasks, which is
usually a large fraction of the total pool of tasks in the batch. The rest of the
players, called Followers, try to select the best strategy for the assignment of
their tasks subject to the Leader’s strategy. The Leader may hold his strategy
fixed while the Followers react independently subject to it. It is then necessary
to define an additional parameter of this game, the “Leader’s tasks pool
fraction”, which denotes the portion of the tasks pool owned by the Leader.
The Stackelberg game is then translated into a hierarchical bi-level optimization
problem, which is solved by a Genetic Algorithm (GA) at the Leader’s level and by
an ad hoc heuristic (PMCT) at the Followers’ level.
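
The bi-level structure of this game can be sketched in a few lines of Python. The
sketch below is a deliberate simplification, not the GA/PMCT implementation used
by the web application: the Leader explores random assignments of his task
fraction, and the Followers react with a greedy minimum-completion-time rule; all
names and data are invented.

import random

def mct_reaction(tasks, etc, completion):
    # Followers' greedy reaction: each task goes to the machine that
    # would finish it earliest, given the current machine loads.
    for t in tasks:
        m = min(range(len(completion)), key=lambda j: completion[j] + etc[t][j])
        completion[m] += etc[t][m]

def stackelberg_round(leader_tasks, follower_tasks, etc, n_machines,
                      trials=50, seed=0):
    # The Leader fixes his strategy first; the Followers react; the
    # Leader keeps the strategy yielding the lowest makespan.
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        completion = [0.0] * n_machines
        plan = {t: rng.randrange(n_machines) for t in leader_tasks}
        for t, m in plan.items():
            completion[m] += etc[t][m]
        mct_reaction(follower_tasks, etc, completion)
        makespan = max(completion)
        if best is None or makespan < best[0]:
            best = (makespan, plan)
    return best

etc = [[(t + 1) / (m + 1) for m in range(4)] for t in range(10)]
best_makespan, best_plan = stackelberg_round(range(6), range(6, 10), etc, 4)
print(best_makespan)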

4.5 Players Cost

The cost function defined for each Grid user is the sum of the following four
components:
• The task execution cost, calculated as the average completion time of the
player’s tasks on the machines to which they are allocated;
• The resource utilization cost, calculated for each Grid user as the average idle
time of the machines on which his tasks are executed;
• The security cost, defined as the average time wasted as a result of task failures
due to high security requirements (the security assurance condition is not
satisfied);
• The task abortion cost, defined as the average time wasted as a result of task
abortions on machines, because of Grid dynamics or special policies of the
resource owners.
Each component of the players’ cost functions can be activated or deactivated by
the web application user. This means that it is possible to compose several versions
of the players’ cost functions using the components necessary to solve the specified
problem, as illustrated by the sketch below.
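
A minimal Python sketch of this modular cost function (ours; the component
values are invented and would in practice come from the simulator):

def player_cost(execution, utilization, security, abortion,
                use_execution=True, use_utilization=True,
                use_security=True, use_abortion=True):
    # Sum of the four average-time cost components of one Grid user;
    # each component can be switched on or off independently.
    cost = 0.0
    if use_execution:
        cost += execution      # avg completion time of the player's tasks
    if use_utilization:
        cost += utilization    # avg idle time of the machines he uses
    if use_security:
        cost += security       # avg time wasted by security failures
    if use_abortion:
        cost += abortion       # avg time wasted by task abortions
    return cost

# A game variant that ignores the security and abortion components:
print(player_cost(120.0, 15.0, 8.0, 4.0,
                  use_security=False, use_abortion=False))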

5 Cloud Resource Brokering

The brokering problem consists of choosing the best proposal among a number of
offers received from different providers answering the same call [2]. To reach a
decision about the best proposal, it is necessary to define the user’s requirements
and goals; these allow the creation of evaluation criteria that contain mandatory
requirements and that check and evaluate multiple alternatives with relative values,
building complex weighted-sum functions depending on criteria derived from rules
stated by the user [4].
The broker collects a number of proposals described in a vendor-agnostic way
and chooses the best one(s) according to the brokering rules. The Call For Proposal
(CFP) is the document to be prepared by the customer to specify his requirements in
terms of the list of resources to be acquired and the rules/policies to be used for
defining the resource brokering strategies.
As shown in Fig. 1, the CFP is composed of two sections. The first one is the
SLA Template, described according to the XML SLA@SOI schema presented in
[17]. The second section composing the CFP is the Broker Policy, containing a set
of rules to be enforced by the brokering algorithm in order to choose among the
different proposals offered by the Cloud market [2]. In particular, the SLA
template, described in [3], is composed of the Service Properties, which define the
technical requirements of the user’s applications; the corresponding desired
Service Levels, such as availability, reliability and performance; and the Terms of
Service, which include the contract duration, data location, billing frequency, etc.

Fig. 1 Broker
The Broker Policy sets constraints and objectives on multiple parameters such as
the best price per time unit, the greatest number of cores, the best accredited
provider or the minimum accepted availability [5]. As different proposals will come
from Cloud vendors, the broker’s main task is to choose the best proposal according
to the policies specified by the customer, such as the best price per time unit, the
maximum amount of memory, service availability and so on. In order to
consistently develop a Cloud service broker, we propose a model to formulate the
application requirements into constraints, which can be architectural constraints or
service-level constraints, and which can be divided into hard constraints and soft
constraints. The user selects the properties that characterize the specific class of the
chosen service; the service levels in terms of performance, availability, etc.; the cost
that he intends to pay; and the accreditation of the provider, which represents its
reputation as measured by the feedback of other users or by some rating agency. For
each parameter the user possibly chooses some constraints, defines whether they
have to be hard or soft, and specifies zero or more objective functions to be
optimized. The rules are chosen by selecting the SLA parameters and setting the
required options using a friendly graphic interface.
Simple constraint rules are shown in Table 1.

Table 1 Rule types

Rule’s name       Value type                    Boolean expression
Exact match       Numerical and non-numerical   t_i = s
Value in a set    Numerical and non-numerical   t_i ∈ s
Greater than      Numerical                     t_i > s
Less than         Numerical                     t_i < s
Value in a range  Numerical                     t_i ∈ s

Of course, not every constraint can be applied to every SLA parameter. Given a
set of constraints, it is possible that there are several contrasting objectives (e.g. the
minimization of cost and the maximization of resources), so a multi-objective
approach is necessary to find the Pareto front (that is, the set of all those solutions
that are considered optimal in multi-criteria optimization). After that, an a posteriori
approach is used that delivers to the user the set of Pareto-optimal solutions, among
which the user will choose the preferred one [1]. Nevertheless, in order to simplify
the usage of the brokering service, we allow multiple objectives to be grouped
according to the kind of SLA parameter: Service Properties, Terms of Service or
Service Levels. We also define the Provider Reputation as an additional brokering
parameter, which is outside the SLA Template but is known to the broker. To
compute the overall score we map the domain of each SLA parameter and we allow
a percentage relevance to be assigned to each category.
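
The overall brokering logic can be summarized in a short, purely illustrative
Python sketch: hard constraints filter out inadmissible proposals, and the remaining
ones are ranked by a weighted sum whose weights play the role of the percentage
relevances mentioned above. All parameter names, values and scoring functions
are invented.

def broker(proposals, hard, weights):
    # Hard constraints filter out inadmissible proposals; the remaining
    # ones are ranked by a weighted sum over the selected SLA parameters.
    admissible = [p for p in proposals
                  if all(check(p[k]) for k, check in hard.items())]
    def score(p):
        return sum(w * f(p[k]) for k, (w, f) in weights.items())
    return max(admissible, key=score, default=None)

proposals = [
    {"price": 0.12, "cores": 8,  "availability": 99.9, "reputation": 4.2},
    {"price": 0.09, "cores": 4,  "availability": 99.5, "reputation": 4.7},
    {"price": 0.20, "cores": 16, "availability": 98.0, "reputation": 3.9},
]
hard = {"availability": lambda v: v >= 99.0}       # mandatory requirement
weights = {                                        # percentage relevances
    "price":      (0.5, lambda v: -v),             # minimize price
    "cores":      (0.3, lambda v: v / 16),         # maximize cores
    "reputation": (0.2, lambda v: v / 5),          # provider accreditation
}
print(broker(proposals, hard, weights))

With truly contrasting objectives, the single weighted sum would be replaced by
the computation of the Pareto front discussed above.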

6 Integrated Approach

Our methodology works in two steps: Simulation and Brokering. In the
Simulation step, the user needs to describe the application workload in terms of a
statistical characterization of job inter-arrival times and numbers of instructions.
He also needs to set the final objective in terms of job completion time or queue
time. In the Brokering step, he needs to map the abstract resources configured by
the simulation tool to Cloud virtual resources. The broker takes as input the number
and typology of such resources, and possibly other constraints and objectives. The
output is a set of alternatives for Cloud provisioning by heterogeneous providers.
We considered two different options:
• Brokering after Simulation. In this case the user executes many simulation runs,
changing the scheduling strategy and the number and kind of computing
resources, until the resulting performance satisfies the requirements. The best
configuration of the computing infrastructure is then used as input to the
brokering step. The simulation is used to refine the brokering constraints and to
reduce the complexity of brokering. This solution is preferred for Grid users:
their skills allow them to define the optimal computing infrastructure in terms of
Grid resources, and the brokering result consists of the best virtualization of the
Grid infrastructure by the available public Cloud services.
• Simulation after Brokering. In this case the user refines the brokering
constraints and objectives until there are Cloud proposals that satisfy his
requirements. In the second step, the user maps the brokered services to abstract
Grid resources in order to simulate the application workload over the available
proposals. The simulation result can be used to resolve the uncertainty about the
brokering results, which may all belong to the Pareto front of equivalent optimal
solutions. This alternative is especially conceived for Cloud users, who know
the Cloud market better and are more comfortable defining the brokering
requirements than configuring a Grid infrastructure.
In both alternatives the first step is always the more critical one: its results are
affected by the user’s expertise, and the space of solutions is larger than in the
second step. A minimal sketch of the first option is given below.
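
A toy Python rendering of the Brokering after Simulation workflow (the simulate,
satisfies and broker callables are invented placeholders for the HyperSim-based
simulator and the broker described in the previous sections):

def brokering_after_simulation(configs, simulate, satisfies, broker):
    # Option 1: run the simulator over candidate infrastructure
    # configurations; the first one meeting the performance objective
    # is handed to the broker, which maps abstract resources to offers.
    for cfg in configs:
        if satisfies(simulate(cfg)):
            return broker(cfg)
    return None

configs = [{"hosts": 32}, {"hosts": 64}, {"hosts": 128}]
result = brokering_after_simulation(
    configs,
    simulate=lambda c: 4096 / c["hosts"],       # fake average queue time
    satisfies=lambda t: t <= 50,                # the user's objective
    broker=lambda c: {"provider": "A", "vms": c["hosts"]},
)
print(result)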

7 Conclusion

Computational Grids are the de facto computing paradigm for large-scale scientific
distributed computation. However, the availability of Cloud services delivered
through a pay-per-use business model provides the opportunity to replace physical
resources with virtual ones. Scientific workloads can be run according to a Grid on
Cloud approach that complements the Grid’s strengths with the elasticity of the
Cloud. We proposed a methodology that supports the user during the configuration
and the provisioning of the computing infrastructure through the integrated
utilization of two different techniques and tools. A Grid simulation tool is used to
configure the number and kind of resources that optimize the execution of the
scientific workload. A brokering tool supports resource provisioning through the
selection of the providers that best satisfy the user’s requirements. Future work will
include experiments and simulations in order to validate the integrated utilization of
the two techniques and tools.

References

1. Amato, A., Di Martino, B., Venticinque, S.: Agents based multi-criteria decision aid.
J. Ambient Intell. Humaniz. Comput. 5(5), 747–758 (2014)
2. Amato, A., Liccardo, L., Rak, M., Venticinque, S.: Sla negotiation and brokering for sky
computing. In: CLOSER, pp. 611–620 (2012)
3. Amato, A., Venticinque, S.: Multi-objective decision support for brokering of cloud Sla. In:
The 27th IEEE International Conference on Advanced Information Networking and
Applications (AINA-2013). IEEE Computer Society, Barcelona, Spain, 25–28 Mar 2013
4. Amato, A., Venticinque, S.: Modeling, design and evaluation of multi-objective cloud brokering.
Int. J. Web Grid Serv. 11(1), 21–38 (2015). http://dx.doi.org/10.1504/IJWGS.2015.067163
5. Amato, A., Venticinque, S., Di Martino, B.: Evaluation and brokering of service level
agreements for negotiation of cloud infrastructures. In: ICITST, pp. 144–149 (2012)
6. Carrera, D., Steinder, M., Whalley, I., Torres, J., Ayguadè, E.: Enabling resource sharing
between transactional and batch workloads using dynamic application placement. In:
Proceedings of the 9th ACM/IFIP/USENIX International Conference on Middleware,
pp. 203–222. Springer-Verlag New York, Inc., New York, NY, USA (2008)
7. Cherkasova, L., Gupta, D., Ryabinkin, E., Kurakin, R., Dobretsov, V., Vahdat, A.: Optimizing
grid site manager performance with virtual machines. In: Proceedings of the 3rd Conference
on USENIX Workshop on Real, Large Distributed Systems, vol. 3, pp. 5–5. WORLDS’06,
USENIX Association, Berkeley, CA, USA (2006)
8. Foster, I.: A critique of using clouds to provide grids. http://ianfoster.typepad.com/blog/2008/
09/a-critique-of-u.html (2008)
9. Foster, I.: What’s faster—a supercomputer or ec2? http://ianfoster.typepad.com/blog/2009/08/
whats-fastera-supercomputer-or-ec2.html (2009)
10. Foster, I.T., Freeman, T., Keahey, K., Scheftner, D., Sotomayor, B., Zhang, X.: Virtual
clusters for grid communities. In: CCGRID, pp. 513–520. IEEE Computer Society (2006)
11. Garg, S.K., Gopalaiyengar, S.K., Buyya, R.: SLA-based resource provisioning for
heterogeneous workloads in a virtualized cloud datacenter. In: Proceedings of ICA3PP’11,
Volume Part I, pp. 371–384. Springer-Verlag, Berlin, Heidelberg (2011)
12. Keahey, K., Foster, I.T., Freeman, T., Zhang, X.: Virtual workspaces: achieving quality of
service and quality of life in the grid. Sci. Programm. 13(4), 265–275 (2005)
13. Myerson, J.: Cloud computing versus grid computing. http://www.ibm.com/developerworks/
web/library/wa-cloudGRID/ (2009)
14. Pandey, A., Pooja: Cloud computing an on demand service platform. In: IJCA Proceedings on
International Conference on Advances in Management and Technology 2013 iCAMT, 5–9
Feb 2013. Foundation of Computer Science, New York, USA
15. Rings, T., Caryer, G., Gallop, J., Grabowski, J., Kovacikova, T., Schulz, S., Stokes-Rees, I.:
Grid and cloud computing: opportunities for integration with the next generation network.
J. Grid Comput. 7(3), 375–393 (2009)
16. Sehgal, S., Erdelyi, M., Merzky, A., Jha, S.: Understanding application-level inter-operability:
scaling-out map reduce over high-performance grids and clouds. Future Gener. Comput. Syst.
27(5), 590–599 (2011)
17. SLASOI: Slasoi, http://sla-at-soi.eu/
18. Smith, M., Schmidt, M., Fallenbeck, N., Dornemann, T., Schridde, C., Freisleben, B.: Secure
on-demand grid computing. Future Gener. Comput. Syst. 25(3), 315–325 (2009)
19. Xhafa, F., Abraham, A.: Computational models and heuristic methods for grid scheduling
problems. Future Gener. Comput. Syst. 26(4), 608–621 (2010)
20. Xhafa, F., Alba, E., Dorronsoro, B., Duran, B.: Efficient batch job scheduling in grids using
cellular memetic algorithms. J. Math. Model. Algorithms 7(2), 217–236 (2008)
21. Xhafa, F., Barolli, L., Durresi, A.: Batch mode scheduling in grid systems. IJWGS 3(1), 19–37
(2007)
22. Xhafa, F., Carretero, J., Barolli, L., Durresi, A.: Requirements for an event-based simulation
package for grid systems. J. Interconnect. Netw. 8(2), 163–178 (2007)
23. Youseff, L., Wolski, R., Gorda, B., Krintz, C.: Paravirtualization for HPC systems. In:
Proceedings of the 2006 International Conference on Frontiers of High Performance
Computing and Networking, ISPA’06, pp. 474–486. Springer-Verlag, Berlin, Heidelberg
(2006)
Investigating the Impact of Digital Data
Genesis Dynamic Capability on Data
Quality and Data Accessibility

Elisabetta Raguseo, Claudio Vitari and Giulia Pozzi

Abstract A huge amount of data is nowadays created in digital form. Due to the
frequent technological changes and developments, organisations need to constantly
match market changes. They therefore need to develop dynamic capabilities based
on digital data in order to achieve valuable outputs. Specifically, this study
examines whether the development of the Digital Data Genesis dynamic capability
in firms leads to valuable outputs: data quality and data accessibility. We
empirically test our model using a questionnaire-based survey answered by 125
sales managers. Results suggest that firms able to develop dynamic capabilities
based on digital data obtain better outputs in terms of data quality and accessibility.
Managerial implications of our results are finally offered.

Keywords Digital data genesis · Dynamic capabilities · Data quality · Data accessibility

The authors acknowledge the support of the European Community through a Marie Curie
Intra-European Fellowship for providing funds to one author of the paper; the authors also
acknowledge the support of France’s Rhône Alpes region (http://www.rhonealpes.fr/).

E. Raguseo (✉) · C. Vitari
Grenoble Ecole de Management, Grenoble, France
e-mail: elisabetta.raguseo@grenoble-em.com
C. Vitari
e-mail: claudio.vitari@grenoble-em.com
G. Pozzi
LIUC - Università Cattaneo, Castellanza, Italy
e-mail: gpozzi@liuc.it


1 Introduction

A huge amount of data is created in digital form every day [1]. By analyzing digital
data, managers have the opportunity to measure, and hence know, radically more
about their businesses and their customers’ tastes and needs. Explaining whether
and how leveraging the capability of exploiting digital data can be a way for firms
to achieve success and better outputs is becoming an evergreen issue in the
management and Information Systems (IS) fields.
Previous studies have conceptualized various types of capabilities, categorizing
them as generic, organizational, ordinary, dynamic, heterogeneous, and
homogeneous [2]. However, since nowadays market changes occur very quickly,
focusing on the development of dynamic capabilities at the firm level, based on the
exploitation of digital data, is becoming ever more important [3]. Therefore, in this
article we seek to contribute to the emerging literature on Information Technology
(IT) dynamic capabilities by investigating their linkage with possible outputs, such
as data quality [4] and data accessibility [5]. In so doing, we innovate in the choice
of the dynamic capability that is the object of our study: Digital Data Genesis
(DDG). We define DDG as the coming into being of digital data. Specifically,
DDG represents the birth of digital data: it is a phenomenon (an observable fact or
event) that involves the direct generation of new data in digital form, and it takes
place when information representative of a physical action, event or condition is
created digitally, concurrently with the event taking place. DDG thus enables
real-time digital representations of objects and events, so that these objects and
events can exist as symbolic representations that can interact and be manipulated in
the information space. For example, when a waiter takes an order using a palm
device, an informational representation of the customer’s wishes is created in real
time in digital form.
Thus, since dynamic capabilities allow organizations to reconfigure
organizational capabilities in response to changes in the business environment, and
since data is a precursor to many organizational processes, we decided to study the
DDG dynamic capability and its output at the firm level.

2 Theoretical Background and Hypotheses Development

Nowadays, more than ever, organisations need to constantly match market changes
by developing dynamic capabilities, defined as “the firm’s processes that use
resources—specifically the processes to integrate, reconfigure, gain and release
resources—to match and even create market change” [6]. Thus, dynamic
capabilities have the potential to create, evolve and recombine existing internal
resources to allow the firm to adapt continuously to changes [7]. This adaptability
has been argued to offer improved customer value [8], and it is especially required
in fast-paced technological environments [9].

We define DDG dynamic capability as the fourfold organizational process of:
(1) “choosing IT” in order to unobtrusively generate and capture data in digital
form; (2) “integrating IT” in the existing processes; (3) “managing digital data” so
produced; and (4) “reconfiguring IT” in the appropriate business processes. The
technology embedded in a DDG initiative may be emerging IT—a new technology
not yet commercially viable (e.g., retinal implants for blind people)—or may be an
enabling IT: an established technology used by a firm in an innovative application
(e.g., RFID in gaming chips to track table play in a gambling context).
We theorize DDG as a dynamic capability for two complementary reasons. First,
it consists of deploying “new configurations of operational competencies relative to
the competition” [10]—in other words, a firm with a DDG dynamic capability can
identify opportunities for digital data generation and for recombining existing
internal resources and data to adapt to changing environmental conditions, through
the collection and production of new digital data. Second, the DDG dynamic
capability includes the dynamic reconfiguring of the existing combinations of
resources for digital data generation [10]. The degree to which an ineffective DDG
process can be reconfigured into a more promising one that matches its environment
better, faster, and cheaper than the competition determines the capability’s dynamic
quality [6]. Therefore, the higher its degree of reconfigurability, the more dynamic
the DDG dynamic capability is. Examples of DDG dynamic capabilities exist, such
as at Harrah’s. For several years, Harrah’s has systematically and repeatedly
integrated new IT (such as computerized slot machines or RFID chips) to gain—
unobtrusively, and always in new ways—valuable digital data on customers’
behavior at Harrah’s casinos, and has exploited these new data to improve its
customers’ profiles and to better reward customers.
Furthermore, the DDG dynamic capability may aim at outputting accessible,
accurate, complete and current digital data. The use of the gained digital data in, for
example, analytical processes will depend on their accessibility, accuracy,
completeness and currency [5, 11, 12]. Specifically, information accessibility is the extent to
which an individual perceives that any particular source is available for use [5].
Information accessibility is the most important driver for information source
selection for use, with people consistently choosing and using lower-quality sources
that are more accessible over higher-quality sources that are less accessible [5, 11,
12].
Since DDG enables informational representations of real objects, facts, and
events without any significant delays (i.e., in real time), the digital format of these
representations may increase their accessibility, which means that the direct output
of DDG can be accessible digital data, which can be exploited for various purposes
such as information processing [13], sophisticated analytics [14] and decision
making and monitoring. Given that digital-data accessibility is a parallel form of
information accessibility (defined by the perceived extent to which any particular
source is available for use), it likely drives information source choices [15].

Information quality is also important because, when sources are equally
accessible, individuals will consistently choose and use the sources that are
perceived to be of higher quality [12, 15]. Information accuracy, completeness and
currency are dimensions of the quality of the information retrieved from an
information system [4, 16]. Accuracy refers to the degree to which information is
correct, unambiguous, meaningful, believable, and consistent. Completeness is the
degree to which all possible states relevant to the user population are represented in
the stored information. Currency, finally, concerns the degree to which information
is up to date and precisely reflects the current state of the world that it represents.
Harrah’s appreciates the quality and accessibility of the data collected on
customers at the slot machines. For example, based on the accessibility, accuracy,
completeness and currency of the transactional data accumulated from past guests,
Harrah’s can quickly estimate a customer’s future value within minutes of the
player joining the program. This enables the casino to start treating the customer
according to his or her future value, rather than having to wait for observed play
before starting to provide rewards [17].
Based on these considerations, we propose the following hypotheses:
H1: The development of a high DDG dynamic capability will positively influence
data accessibility.
H2: The development of a high DDG dynamic capability will positively influence
data quality (Fig. 1).

Fig. 1 Research model (the DDG dynamic capability, comprising Choosing IT,
Integrating IT, Managing Digital Data and Reconfiguring, is hypothesized to
influence data accessibility (H1) and data quality, i.e. data accuracy, data
completeness and data currency (H2); control variables: firm size, firm age, firm
industry)

3 Methodology

3.1 Data Collection

To test our hypotheses, we conducted a questionnaire-based survey between 2011
and 2012 that was delivered to firms located in Western Europe.
Because dynamic capabilities are best measured at the organisational-process level
[18], we surveyed sales managers. We made this choice because sales departments
tend to be more advanced than other firm departments with respect to the DDG
phenomenon, because of their focus on customer relations in particular [17]. In
addition, we operationalised the model constructs using existing measurement scales
that had previously been tested, with the exception of the Choosing IT dimension of
DDG dynamic capability. This construct measured firms’ ability to select IT to
unobtrusively collect valuable digital data. We conducted a pilot study, beginning
with four indicators from the prior literature [19] that had not been previously tested
empirically. We recruited 35 managers from small, medium, and large enterprises in
different industries in the US. Our four focal indicators were inserted within a set of 26
questions to reduce common method bias. The responses indicated that the scale was
reliable (Cronbach’s alpha = 0.837); for parsimony, we reduced it to three items. For
all other constructs, we used the validated scales.
Before the main data collection, we consulted an expert panel and used Q-sorting
methods both to adapt the chosen scales to our research context and to assess the
scales’ content validity. The expert panel included seven sales managers who
proposed and validated adaptations of the items with respect to each construct. The
Q-sorting involved four rounds of refinement before we reached a threshold of 50 %
of attributions to the correct construct for each item. One hundred and nineteen
respondents (primarily employees of different organisations between 20 and
40 years of age, equally distributed between men and women) participated in the
Q-sorting procedure.
We consulted three sources to ensure heterogeneity in the sample, thus facili-
tating the generalisation of our results. First, we surveyed 220 sales managers using
contacts from customer-relations management applications maintained by a French
business school. Most of those sales managers worked in the Rhône-Alpes region in
which the business school is headquartered. Second, we listed 402 organisations
from the Piedmont region of Italy that previously had participated in an Italian
engineering school’s survey in that same region. Third, we gathered a selection of
370 organisations from Italy’s Veneto region, which represented various corporate
trade-union members in that region and ensured diversity of organisational sectors
and sizes. Our complete sample pool thus included 942 organisations; we contacted
these organisations by telephone or e-mail to request their participation. Data were
primarily collected over the telephone or through face-to-face interviews, although
a few respondents chose to answer autonomously by accessing an online ques-
tionnaire. In this latter case, 3 weeks after the initial mailing, we sent a reminder
postcard to sales managers asking them to complete the survey if they had not
previously done so. We also said that we would provide the results of the study to
those who completed the questionnaire. In the end, 125 questionnaires from
different organisations (an overall response rate of 21 %) were analysed. Such a
high response rate is a valuable result, since it is uncommon in survey research [20].

3.2 Measurement

All the research variables that constitute the DDG dynamic capability were
measured using multi-item Likert scales from 1 (not at all) to 7 (to a large extent),
based on prior empirical research (Table 1), with the exception of the “Choosing
IT” construct, which we empirically tested directly through our pilot study.
Table 1 Survey items for testing the model

Construct                    Item  Survey question                                                                  Source
Choosing IT (CIT)            CIT1  Our sales personnel have effective methods for digital data generation choices  [19]
                             CIT2  Digital data generation choices make their case for our sales process           [19]
Integrating IT (IIT)         IIT1  The integration of digital data into the enterprise processes makes our         [21]
                                   sales personnel more effective
                             IIT2  Digital data generation is successfully integrated into our sales processes     [21]
Managing digital data (MDD)  MDD1  Our sales personnel effectively handle the digital data that they obtain        [22]
                             MDD2  Our sales personnel effectively process the data obtained in digital form       [22]
                             MDD3  Our sales personnel have effective methods for managing the digital data        [22]
                                   that they obtain
Reconfiguring (REC)          REC1  When our digital data generation must evolve, our sales personnel               [23]
                                   successfully steer its evolution
                             REC2  When our digital data generation must evolve, our sales personnel               [23]
                                   effectively lead its reorganisation
Data accuracy (AC)           AC1a  Our digital data are incorrect                                                  [4]
                             AC2   Our digital data contain very few errors                                        [4]
                             AC3   Our digital data are accurate                                                   [4]
Data completeness (CO)       CO1a  Our digital data are incomplete                                                 [4]
                             CO2   Our digital data are comprehensive                                              [4]
                             CO3   Our digital data cover all our data needs                                       [4]
Data currency (CU)           CU1   Our digital data are recent                                                     [4]
                             CU2   Our digital data are up-to-date                                                 [4]
                             CU3a  Our digital data are obsolete                                                   [4]
Data accessibility (AE)      AE1   Our digital data are rapidly available to our sales personnel                   [5]
                             AE2   Our digital data are easily obtainable for our sales personnel                  [5]

a The variable was reversed while computing the final factor

“Integrating IT” adapts the ability to integrate IT solutions into business pro-
cesses [21]. “Managing digital data” adapts the information-management dimen-
sion of the information capability measurement scale [22] to measure the ability to
manage digital data. “Reconfiguring” adapts the reconfigurability measurement
scale [23] to estimate the potential to reconfigure DDG dynamic capability. The
final construct, DDG dynamic capability, was measured as a second-order construct
based on the four components of DDG dynamic capability, each of which was
compounded as the mean of the related items.
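
As a purely illustrative Python sketch of this compounding (item names follow
Table 1; the Likert values are invented):

from statistics import mean

items = {
    "CIT": [5, 6],        # Choosing IT items (1-7 Likert scale)
    "IIT": [4, 5],        # Integrating IT
    "MDD": [6, 6, 5],     # Managing digital data
    "REC": [5, 4],        # Reconfiguring
}
# First-order components: mean of the related items.
components = {name: mean(vals) for name, vals in items.items()}
# Second-order DDG dynamic capability construct.
ddg_score = mean(components.values())
print(components, ddg_score)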
Looking at the outputs, data quality was measured through three variables: data
accuracy, data completeness, and data currency [4]. Data accessibility, instead, was
based on the measure proposed by Zimmer et al. [5].
We also introduced control variables into the models: firm size (number of
employees), firm age (number of years since each company was founded), and firm
industry (four dummies for the following four industries: traditional
manufacturing, high-tech manufacturing, material services, and information
services).

3.3 Data Analysis

We employed the SmartPLS and SPSS software for our data analysis. We chose
PLS in SmartPLS as “the most accepted variance-based structural equation
modelling technique because it can accommodate models that combine formative
and reflective constructs” [24, p. 1342]. The PLS path modelling technique with
reflective indicators in SmartPLS was used to assess the validity and reliability of
the data [25], complemented with SPSS calculations. This approach was better
equipped to handle formative measures [26, 27]. Modelling moderating
relationships in PLS required adding moderating variables as direct relationships to
outcome variables and then calculating interaction variables based on the predictor
variables.

4 Results and Analysis

4.1 Respondent Characteristics

The sample of our study was balanced. Specifically, the companies surveyed cover
four industry groups [25] and were almost homogeneously distributed among them.
The majority of the surveyed companies are between 11 and 20 years of age, with
the oldest being 77 years old. The sample is also balanced in terms of the countries
in which the firms operate. Finally, the sales-manager respondents are primarily
sales-department directors.

4.2 Tests for Validity and Reliability of the Measures

Table 2 provides information about the psychometric properties of variables used in
this study. In a confirmatory factor analysis, the loadings of the measures on their
respective constructs ranged from 0.605 to 0.948. We checked the recommended

Table 2 Psychometric table of measurements


Construct Item Loading CR AVE CA
Digital Data Genesis dynamic capability – – 0.936 0.786 0.909
(DDG DC) CIT 0.869 – – –
IIT 0.843 – – –
MDD 0.948 – – –
RIT 0.883 – – –
Choosing IT (CIT) – – 0.908 0.832 0.779
CIT1 0.914 – – –
CIT2 0.910 – – –
Integrating IT (IIT) – – 0.827 0.706 0.728
IIT1 0.879 – – –
IIT2 0.799 – – –
Managing digital data (MDD) – – 0.942 0.843 0.879
MDD1 0.937 – – –
MDD2 0.917 – – –
MDD3 0.900 – – –
Reconfiguring IT (REC) – – 0.946 0.898 0.871
REC1 0.945 – – –
REC2 0.950 – – –
Data accuracy (AC) – – 0.824 0.615 0.604
AC1 0.605 – – –
AC2 0.835 – – –
AC3 0.885 – – –
Data completeness (CO) – – 0.838 0.635 0.689
CO1 0.669 – – –
CO2 0.855 – – –
CO3 0.853 – – –
Data currency (CU) – – 0.837 0.633 0.636
CU1 0.836 – – –
CU2 0.822 – – –
CU3 0.724 – – –
Data accessibility (AE) – – 0.905 0.827 0.747
AE1 0.906 – – –
AE2 0.913 – – –
Note CR Composite reliability; CA Cronbach’s alpha; AVE Average variance extracted

levels for reliability (measured by composite reliability and Cronbach’s alpha) and
average variance extracted (AVE). Nunnally [28] suggests a value of 0.70 as a
benchmark for modest composite reliability. Churchill [29] suggests that a
Cronbach’s alpha value of 0.6 is acceptable. Bagozzi and Yi [30] suggest that AVE
must be higher than 0.50.
In this study, the factor loadings, composite reliability and AVE values were
generated as part of the SmartPLS output. The Cronbach’s alpha scores were
computed using SPSS 18. The composite reliability (CR) of all constructs ranges
from 0.824 to 0.946, Cronbach’s alphas range from 0.604 to 0.909, and the AVE
ranges from 0.615 to 0.898; all these results are acceptable because they exceed the
corresponding thresholds. These results demonstrate convergent validity in the
measurement model.
The square root of average variance extracted for each construct was compared
with the correlations between it and other constructs [31]. Each construct shared
greater variance with its own measurement items than with constructs having dif-
ferent measurement items. Therefore, discriminant validity was also supported.

4.3 Tests of the Research Model

The results of the structural model assessment in SmartPLS are presented in
Table 3. Our results supported Hypothesis 1: the development of Digital Data
Genesis dynamic capability has a significant positive effect on data accessibility
(β = 0.385, t = 4.654, p-value < 0.001). Further, results support Hypothesis 2: the
development of Digital Data Genesis dynamic capability has a significant positive
effect on data quality. The path coefficients of data accuracy, data completeness
and data currency are all positive and significant: they are respectively equal to
0.355, 0.269, and 0.366, each with a p-value below 0.01 (see Table 3).

Table 3 SmartPLS results

Independent  Dependent           Path         t-value  Significance  Hypothesis  R-Square (%)
variable     variable            coefficient
DDG DC       Data accessibility  0.385        4.654    ***           H1          18.10
DDG DC       Data accuracy       0.355        3.297    ***           H2          12.80
DDG DC       Data completeness   0.269        2.431    **            H2          17.20
DDG DC       Data currency       0.366        3.958    ***           H2          14.80

Note *** denotes p-value < 0.001; ** p < 0.01; * p < 0.05

5 Discussion

The study results highlight that the theorization of the DDG dynamic capability as a
fourfold organizational process is supported by the empirical data analysis. More
importantly, the DDG dynamic capability aims at outputting accessible, accurate,
complete and current digital data, and the data analysis confirms that the DDG
dynamic capability releases information resources of higher quality and higher
accessibility. Hence, this better output could be leveraged to match or create market
changes, as expected of dynamic capabilities [6]. Thus, the DDG dynamic
capability could potentially create significant data output as a follow-up.
The DDG dynamic capability makes information more accessible, hence easily
available for use. Since information accessibility is the most important driver of
information source selection, the quality of the digital data coming out of the DDG
process is at stake: low-quality but easily accessible digital data would make the
worst combination [5, 10, 12]. Notwithstanding, the DDG dynamic capability also
increases the accuracy, the completeness and the currency of the digital data. In
synthesis, the DDG dynamic capability delivers higher-quality data.

6 Conclusions

The IS literature provides scant empirical studies investigating the relationship
between the development of dynamic capabilities based on digital data and their
output. By analysing a sample of 125 companies, our findings add empirical
evidence to the claim that the DDG capability is associated with several outputs,
such as data quality and data accessibility. DDG capabilities make digital data
more accessible and of higher quality for the organisation’s personnel. Specifically,
the development of a DDG dynamic capability enables companies to dispose of
more accurate data (with fewer errors), more complete data (since data become
more comprehensive and consistent) and more current data: thanks to the
continuous ability to generate digital data, these data are recent, up to date and not
obsolete. They are also more accessible, since digital data are promptly and easily
available to sales personnel. In this way, companies dispose of updated data about
their customers and can take advantage of timely and qualified information about
them.
Understanding the effect of the DDG dynamic capability on data quality and
data accessibility has important managerial implications. First, managers could
successfully exploit their DDG dynamic capability to develop data-based strategic
initiatives, supported by high data quality and data accessibility. Second, managers
should become more aware of the potential that the usage of digital data has for
their business activities and should invest more in the capability of using digital
data.

Our study also has some limitations that will be considered in future studies. First,
the effects investigated may differ between information-intensive and
non-information-intensive industries because of the different importance of
information to their business. Second, we could not consider the longitudinal aspect
of the development of the DDG dynamic capability on data quality and data
accessibility, due to insufficient data; the lagged effect could be larger than the
immediate one.
Future studies are needed to deepen our understanding of how the DDG
dynamic capability can have a different impact depending on the industry
considered. Furthermore, longitudinal studies should be conducted to understand
how the time span influences the causal relationships investigated.

References

1. McAfee, A., Brynjolfsson, E., Davenport, T.H., Patil, D.J., Barton, D.: Big data. The
management revolution. Harv. Bus. Rev. 90(10), 61–67 (2012)
2. Drnevich, P.L., Kriauciunas, A.P.: Clarifying the conditions and limits of the contributions of
ordinary and dynamic capabilities to relative firm performance. Strateg. Manag. J. 32(3), 254–
279 (2011)
3. Raguseo, E., Vitari, C.: The development of the DDG-capability: an evaluation of its impact
on firm financial performance. Smart organizations need smart artifacts: fostering interactions
between people, technologies, and processes. Springer series, Lecture Notes in Information
Systems and Organisation (LNISO), vol. 7, pp. 97–104 (2014)
4. Nelson, R.R., Todd, P.A., Wixom, B.H.: Antecedents of information and system quality: an
empirical examination within the context of data warehousing. J. Manag. Inf. Syst. 21(4), 199–
235 (2005)
5. Zimmer, J.C., Henry, R.M., Butler, B.S.: Determinants of the use of relational and
non-relational information sources. J. Manag. Inf. Syst. 24(3), 297–331 (2007)
6. Dale Stoel, M., Muhanna, W.A.: IT capabilities and firm performance: a contingency analysis
of the role of industry and IT capability type. Inf. Manag. 46(3), 181–189 (2009)
7. Li, M., Ye, L.R.: Information technology and firm performance: linking with environmental,
strategic and managerial contexts. Inf. Manag. 35(1), 43–51 (1999)
8. Zhou, K.Z., Wu, F.: Technological capability, strategic flexibility, and product innovation.
Strateg. Manag. J. 31(5), 547–561 (2010)
9. Wang, E.T., Hu, H.F., Hu, P.J.H.: Examining the role of information technology in cultivating
firms’ dynamic marketing capabilities. Inf. Manag. 50(6), 336–343 (2013)
10. Galy, E., Sauceda, M.J.: Post-implementation practices of ERP systems and their relationship
to financial performance. Inf. Manag. 51(3), 310–319 (2014)
11. Culnan, M.J.: Environmental scanning: the effects of task complexity and source accessibility
on information gathering behavior. Decis. Sci. 14(2), 194–206 (1983)
12. O’Reilly, C.A.: Variations in decision makers’ use of information sources: the impact of
quality and accessibility of information. Acad. Manag. J. 25(4), 756–771 (1982)
13. Hirsh, S., Dinkelacker, J.: Seeking information in order to produce information: an empirical
study at Hewlett Packard Labs. J. Am. Soc. Inform. Sci. Technol. 55(9), 807–817 (2004)
14. Piccinini, G., Scarantino, A.: Computation vs. information processing: why their difference
matters to cognitive science. Stud. Hist. Philos. Sci. Part A 41(3), 237–246 (2010)
15. Davenport, T.H., Harris, J.G.: Competing on Analytics: The New Science of Winning.
Harvard Business Press, Boston (2007)
16. DeLone, W.H., McLean, E.R.: Information systems success: the quest for the dependent
variable. Inf. Syst. Res. 3(1), 60–95 (1992)
17. Piccoli, G., Watson, R.T.: Profit from customer data by identifying strategic opportunities and
adopting the ‘Born digital’ approach. MIS Q. Executive 7, 113–122 (2008)
18. Li, T., van Heck, E., Vervest, P.: Information capability and value creation strategy: advancing
revenue management through mobile ticketing technologies. Eur. J. Inf. Syst. 18, 38–51
(2009)
19. Williams, M.L.: Identifying the Organizational Routines in NEBIC Theory’s Choosing
Capability. HICCS, Hawaii (2003)
20. Cycyota, C.S., Harrison, D.A.: What (not) to expect when surveying executives a
meta-analysis of top manager response rates and techniques over time. Organ. Res.
Methods 9(2), 133–160 (2006)
21. Bharadwaj, A., Sambamurthy, V., Zmud, R.: IT Capabilities: Theoretical Perspectives and
Empirical Operationalization. ICIS (1999)
22. Marchand, D.A., Kettinger, W.J., Rollins, J.D.: Information Orientation: The Link to Business
Performance. Oxford University Press, New York (2002)
23. Pavlou, P.A., El Sawy, O.A.: From IT leveraging competence to competitive advantage in
turbulent environments: the case of new product development. Inf. Syst. Res. 17(3), 198–227
(2006)
24. Gruber, M., Heinemann, G., Brettel, M., Hungeling, S.: Configurations of resources and
capabilities and their performance implications: an exploratory study on technology ventures.
Strateg. Manag. J. 31(12), 1337–1356 (2010)
25. Ringle, C.M., Wende, S., Will, A.: SmartPLS release: 2.0 (beta). SmartPLS, Hamburg,
Germany (2005)
26. Chin, W.W., Marcolin, B.L., Newsted, P.R.: A partial least squares latent variable modeling
approach for measuring interaction effects: results from a Monte Carlo simulation study and
electronic-mail emotion/adoption study. Inf. Syst. Res. 14(2), 189–217 (2003)
27. Diamantopoulos, A., Riefler, P., Roth, K.P.: Advancing formative measurement models.
J. Bus. Res. 61(12), 1203–1218 (2008)
28. Nunnally, J.C.: Psychometric Theory, 2nd edn. McGraw-Hill, New York (1978)
29. Churchill, G.A.: A paradigm for developing better measures of marketing constructs. J. Mark.
Res. 16(February), 64–73 (1979)
30. Bagozzi, R.P., Yi, Y.: On the use of structural equation models in experimental designs.
J. Mark. Res. 271–284 (1988)
31. Fornell, C., Larcker, D.F.: Structural equation models with unobservable variables and
measurement error: algebra and statistics. J. Mark. Res. 382–388 (1981)
An Ecological Model for Digital Platforms
Maintenance and Evolution

Paolo Rocchi, Paolo Spagnoletti and Subhajit Datta

Abstract The maintenance of software products has been studied extensively in
both software engineering and management information systems. Such studies are
mainly focused on the activities that take place prior to starting the maintenance
phase. Their contribution is either related to the improvement of software quality or
to validating contingency models for reducing maintenance efforts. The continuous
maintenance philosophy suggests shifting the attention to within the maintenance
phase, to better cope with the evolutionary trajectories of digital platforms. In this
paper, we examine the maintenance process of a digital platform from the per-
spective of the software vendor. Based on our empirical observations, we derive an
interesting statistical relationship that has strong theoretical and practical implica-
tions in the study of software defects.

Keywords Software maintenance · Wakeby · Digital platform · Complex systems

This paper has been awarded the “Special Award Sandro D’Atri” at the XI Conference of the
Italian Chapter of AIS held in Genova (IT) on November 21st–22nd 2014.

P. Rocchi (✉) · P. Spagnoletti
CeRSI-LUISS Guido Carli University, Rome, Italy
e-mail: procchi@luiss.it
P. Spagnoletti
e-mail: pspagnoletti@luiss.it
P. Rocchi
IBM, Rome, Italy
S. Datta
Singapore University of Technology and Design, Singapore, Singapore
e-mail: subhajit_datta@sutd.edu.sg


1 Introduction

The dynamics of organizational emergence together with the evolutionary
trajectories of digital infrastructures are challenging the traditional practices for managing
innovation and blurring the boundaries between strategic, structural and techno-
logical choices [1]. This is particularly true when digital platforms are in place for
supporting interactions across multiple sets of actors, among them the software
developers who contribute to platform evolution [2]. This has been the case for
instance of the Internet [3, 4] but also of applications, platforms and information
infrastructures owned by private companies that strategically exploit the gener-
ativity of digital technologies [5–7].
Many companies (e.g. SAP, Google, Facebook, Apple) have implemented "third-party developer" strategies and encouraged their business partners, customers or independent developers to come on board their computing platforms [8]. This is also the case for those actors who develop a software product (i.e. a digital platform) and make it available to the community of users together with its source code, application programming interfaces (APIs), software development kits (SDKs) and technical documentation. Such new forms of online collaboration increase the speed of improvement and evolution of software products and challenge existing methods for software design and maintenance.
The aim of this paper is to investigate digital platform evolution processes in order to identify new methods for guiding the emergence of complex socio-technical systems. Instead of considering software maintenance as a recovery activity whose costs must be reduced by adopting sophisticated methods and techniques, we propose a shift towards a continuous maintenance philosophy. An exploratory case study on the evolution of four versions of a large-scale middleware product shows that patterns of bugs and fixes fit an ecological model.
The paper is organized as follows: in the next section we present a literature review on digital platform evolution. Then we describe the case study data collection and analysis. In the following section we highlight our observations and results, followed by the derivation of a statistical relationship based on them. We conclude with a discussion and a summary of the results.
2 Related Work

To better position our contribution in the existing literature, it is worth illustrating how digital platforms and their maintenance processes have been studied so far. First, we clarify the distinction between evolutionary and static software systems. Second, we introduce digital platforms as a particularly relevant form of evolutionary software system. Third, we summarize how development and maintenance processes and methods have been studied in the software engineering and management information systems literature [9, 10].
Static software systems are computer programs whose acceptability on completion depends only on satisfying, in the mathematical sense, their formal specifications. Evolutionary systems, on the other hand, must undergo continual evolution to remain satisfactory as they operate on or address a problem or activity in the real world [11]. To remain satisfactory, these programs must be continually changed and updated: their acceptability depends on the results delivered to users and other stakeholders, and they must be continually enhanced, adapted and fixed if they are to remain effective within an evolving application environment. The evolution of such systems is thus a complex phenomenon, characterized by multi-level, multi-loop, and multi-agent feedback.
In this paper we focus on digital platforms, a particular type of evolutionary software system. In general, a platform is defined as a building block, providing an essential function to a technological system, which acts as a foundation upon which complementary products, technologies, or services can be developed [2, 12–14]. Digital platforms differ from other software systems in that their design context is not fixed a priori: they have a heterogeneous and growing user base and allow a constant generification of new IT capabilities [3, 15, 16]. In more practical terms, digital platforms allow extensive recombination and reuse of software programs, subroutines, services, features, and content. This generativity is achieved through the deployment of APIs, documentation, debuggers, source code examples, and integrated development environments [8, 17, 18].
In managing these platforms, the traditional values and goals of information systems development practices are challenged, and the notion of continuous change emerges as a new paradigm [19]. This implies that, in the management of emergent organizations, continuous analysis, negotiated requirements, and a large portfolio of continuous maintenance activities must replace lengthy analysis and design, user satisfaction, abstract requirements, complete and unambiguous specifications, and discrete projects. An attempt to implement these principles is represented by agile requirements engineering practices, which have gained increasing attention in the last decade [20–23]. These methods rely heavily on feedback collected from users during the development phase, and their purpose is to improve software quality. However, agile methods are still focused on minimizing maintenance effort during the operational lifecycle of a software system, and hence they do not fully embrace the philosophy of continuous maintenance.
Previous studies on software maintenance processes have looked at the phenomenon from different perspectives [24]. For instance, some authors have analyzed the maintenance processes of an ERP software package from the perspective of the customer organizations [25] and have compared them with existing standards (the IEEE/EIA 12207.0 maintenance-process standard) [26]. Other studies have focused on the dynamics of community maintenance contributions enabled by the Internet and the volunteer workforce [27]. We adopt the perspective of the software vendor to contribute to a better understanding of how to guide the emergence of digital platforms in complex settings.
3 Research Strategy

We conducted an exploratory case study to investigate the evolution of a digital platform from the perspective of the software vendor. The research design is based on a single case with four embedded units of analysis [28]. The single case provides the typical context of a software vendor in charge of the continuous maintenance process of a software product during its operational lifecycle. Large-scale empirical studies of maintenance data present several challenges: defect data are often not diligently recorded and are seldom published for proprietary systems. Moreover, since the software vendor is a leading multinational company, the single case is revelatory and offers a privileged observation point.
The four embedded units of analysis are four different releases of the same software product, a middleware application deployed worldwide among more than 5000 large customer organizations. The middleware product (XYZ) provides services for monitoring the performance of IT resources, including disks, CPUs, and applications. XYZ helps to automatically detect bottlenecks and potential resource problems, and to act on them proactively.
XYZ is particularly relevant to our purposes for two main reasons. First, being infrastructure-level software, it operates at an intermediate layer between multiple device configurations and the multiple applications that depend on the IT infrastructure of each customer organization. This exposes the system to a huge variety of external inputs. Second, XYZ can be considered a software platform in that it provides an environment for the design of new resource models and gives customers the possibility to develop their own monitoring agents.
3.1 Data Sources and Analysis Methods

Empirical data were collected through direct contact with the head of the maintenance team, who kindly provided us with archival data on software bugs and fixes, information on the maintenance process, technical documentation, and commercial information. A dataset of more than 2,200 defect reports over a four-year period is the main source of data on which the following analysis is based.
Our study investigates changes to four releases (or versions) of XYZ: B.1, B.2, B.2.1, and B.2.2. Release B.1 derives from a product developed by a company acquired by our focal software vendor, and was delivered without further changes. Later, the focal software vendor made significant investments: release B.2 derives from an effort to optimize and improve XYZ, and B.2.2 represents a second major enhancement, based partially on customer feedback.
Users of XYZ who find unexpected behavior, such as adverse incidents and bugs, write requests for change (RFCs); thus the acronym RFC will be used interchangeably with "defect" or "error" in this paper. An RFC does not demand functional changes; to vary or add a function, users raise another type of request, which we shall call a "suggestion" (SUG). A single SUG proposes new or modified functionality.
Defects and suggestions are recorded in a dedicated database, where the data are captured and grouped by release. Each release is maintained as an independent entity; thus some failures recur across releases while others are unique to one release.
Age and severity are the attributes of RFCs adopted in our statistical analysis. The severity of an RFC denotes the impact of the corresponding error, which falls into one of the following categories:
• Severity 1: Critical Impact—A software component that is critical for business does not operate, an absolutely necessary interface has failed, or an operator is unable to use XYZ, resulting in a critical impact on operations. This condition requires an immediate solution.
• Severity 2: Significant Impact—A software component is severely restricted in its use, causing significant business impact. This indicates that XYZ is usable but strongly limited.
• Severity 3: Moderate Impact—A non-critical software component is malfunctioning, causing moderate business impact. This indicates that the program is usable, though with less significant features impaired.
• Severity 4: Minimal Impact—A non-critical software component is malfunctioning, causing minimal impact, or a non-technical request is made.
Age provides a concise and precise account of the effort expended to implement a change. 'Age' is usually called 'time to repair' (TTR) in the literature and is surveyed in a variety of technical fields.
The historical data from the four releases of XYZ are used to illustrate how each release has evolved over time. Furthermore, we use time series analysis techniques to identify patterns in these data. Time series models assume that events are correlated over time and that the impact of other factors is progressively captured in historical archives.
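As a minimal sketch of this preprocessing step, the opening dates of RFCs can be binned into a weekly count series along the following lines (the file name and the `opened` column are hypothetical, since the vendor's database schema is not public):

```python
import pandas as pd

# Hypothetical flat export of the defect database: one row per RFC,
# with the date on which each RFC was opened.
rfcs = pd.read_csv("rfc_export.csv", parse_dates=["opened"])

# Bin the opening dates into a weekly time series of defect counts,
# the kind of series analyzed in Sect. 4.5.
weekly = (rfcs.set_index("opened")
              .resample("W")
              .size()
              .rename("opened_rfcs"))
print(weekly.head())
```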
4 Observations and Results

In the following subsections, we highlight our observations and results from studying the XYZ data across the four releases.
4.1 Characteristics of Releases

The managers of XYZ allowed the submission of suggestions for a specific period, between 10/12/2007 and 01/06/2009; during this time users recorded 143 SUGs. The majority of proposals (around 94 %) were submitted during 2008 and, as noted earlier, contributed to the enhanced version B.2.2. Most SUGs (127) were closed within the submission window; in this way the suggestions contributed to improving releases B.2.1 and B.2.2 and to adding new functions to them.
Table 1 presents the start date of the maintenance process for each release, taken to be the date on which the first defect was raised. The final date is taken to be 30th September 2011, when data collection for this paper was closed. The parameter A in Table 1 indicates the temporal range between the first opened RFC and 30th September 2011; the parameter B is the distance between the first and the last opened RFC. Our study of the various releases thus covers different periods of time: the examination of release B.1 exceeds four years, while the study of B.2.2 covers about two and a half years. We closed our survey on 30th September and, by coincidence, the last RFC of B.2.2 was opened on that very day, which is why A and B coincide for this release. In the interests of consistency, we considered the number of RFCs submitted over the first 730 days (two years) after each version was released (Table 2); moreover, we report the number of defects that required more than one year for resolution, the number of severity-1 defects, and the percentage of these defects that were closed after more than 30 days. All the releases have some RFCs with zero age (age being the number of days spent fixing an RFC). This may indicate one of the following situations:
• A false problem was reported;
• The problem was trivial and immediately closed;
• The problem had already been addressed at the time the RFC was raised.
We notice that release B.1 has the highest number of defects submitted in the first couple of years, the highest number of defects with age exceeding one year, and the highest number of severity-1 defects (Table 2).
Table 3 illustrates the increase in the size of the XYZ executable after each upgrade. Generally speaking, the size of a release in megabytes can be taken to mirror the complexity of the release's functionality. Releases B.2 and B.2.2 are much larger than their predecessors.
We also note that B.2, B.2.1 and B.2.2 have the lowest numbers of defects (Table 2). These measures indicate the higher quality of the later releases with respect to B.1 and match the brief history of XYZ outlined earlier: B.2, B.2.1 and B.2.2 were driven by more organized and focused development efforts, whereas B.1 was adopted in a cursory manner. In the present context, one may reasonably conclude that the defect record of B.1 reflects unsatisfactory development resulting in increased maintenance effort.
4.2 Structure and Roles in the Maintenance Team

Defects are managed by a complex structure that includes four teams:
• First Level Team—This group analyzes the issues and, when possible, addresses the problems related to user errors or basic configurations; otherwise it involves the Second Level Team. The responses at this level are fast but do not go deep into problems.
• Second Level Team—This team works directly with customers to resolve RFCs and provides them with a solution. If, and only if, the customer is satisfied with the solution, Level 2 can close the fixing procedure.
• Third Level Team—This level is responsible for resolving severe errors, creating fixes and making them available to users. The team assists customers in diagnosing reported problems that may be product defects and makes changes to released products in response to an RFC. This process governs support for the product releases: starting from an assistance request raised by Level 2, it recognizes a valid problem and proceeds through code changes and testing to the delivery of a fix for the detected error.
• Development Team—This level is responsible for the new features to be included in the next product releases. In some cases it helps Level 3 with urgent customer issues or evaluates possible enhancement requests.
The overall organization of the teams is summed up in Fig. 1. Two areas can be identified in the chain of operations: the 'front end' includes the support teams (Level 1 and Level 2), which have direct contact with the client; the 'back end', with the teams working on problem resolution, has no direct interaction with users.

Fig. 1 Flow chart of the maintenance teams
The author of an RFC is required to describe the malfunction he or she experienced and to summarize the symptoms according to the list in Table 4. However, over 80 % of records mention the generic symptom 'program defect' (#19): users tend to provide the most generic description of the problem. The frequency of symptom #19 becomes lower when defects are serious, so users do make a certain effort to scrutinize severity-1 failures; yet the still substantial share of symptom #19 among these (77.3 %) indicates that this effort is limited. The lack of precision in describing problems thus has less to do with the effort required and is more influenced by users' attitude towards reporting less-than-critical errors.
4.3 Statistical Analysis

We partitioned the ages of closed defects by severity and calculated nine statistical parameters for each of the four resulting distributions (Table 5). The kurtosis values indicate that severity-2 and severity-4 ages have distributions with a lower, wider peak around the mean, whereas severity-1 and severity-3 ages show rather leptokurtic shapes. The age mean—called mean time to repair (MTTR) in the literature—diminishes through groups 2, 3 and 4. The 50th percentile (the median) also decreases from severity 2 to severity 3. Note that the mean and the median have been computed over the entire populations, not over samples; their trends indicate that the effort to handle a change decreases as the severity of the defect lessens. The age thus mirrors the progressively reduced complexity of defects from severity 2 to severity 4; yet severity-1 problems have the lowest age mean, the lowest median, and even the lowest standard deviation. This surprising result can be explained as follows.
An expert usually handles an RFC of severity 2, 3 or 4, but service level agreements stipulate that a severity-1 problem must be resolved within one month (30 days). The management therefore needs to allocate more skilled personnel to close the most severe errors within this deadline. As we learnt, two, three or more experts work on this kind of error, and the age mean is accordingly the lowest in the leftmost column of Table 5. However, 80 % of the ages in group 1 exceed 30 days (Table 2), which means that the teams handling severity-1 problems usually miss their deadlines.
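For illustration, the parameters of Table 5 could be recomputed from a flat export of the closed-defect records along these lines (the `severity` and `age` column names are again hypothetical):

```python
import pandas as pd

rfcs = pd.read_csv("rfc_export.csv")  # hypothetical export: one row per closed RFC

# Descriptive statistics of age (days to fix) per severity group,
# mirroring the nine parameters reported in Table 5.
stats = rfcs.groupby("severity")["age"].agg(
    minimum="min",
    maximum="max",
    std_deviation="std",
    mean="mean",                 # the MTTR of each group
    kurtosis=pd.Series.kurt,
    skewness="skew",
    p25=lambda s: s.quantile(0.25),
    p50="median",
    p75=lambda s: s.quantile(0.75),
)
print(stats)
```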
4.4 Mean Time Between Events

In general, defect fixing should form a sequence of independent processes; correlated repairs would instead reveal systematic flaws in change management. We therefore verified whether the age distributions of RFCs fit the Gamma model, which is typical of Poisson processes. Gamma is a multi-parameter family of continuous probability distributions.
As the change managers established special procedures for handling each RFC according to its severity, we segregated the ages into four homogeneous sets. We used the Kolmogorov–Smirnov test, at a 95 % confidence level, to evaluate how well the data fit the Gamma distribution. Table 6 displays the goodness-of-fit statistic values (D) and the probability values (P). The ages of severities 2 and 3 fit the Gamma distribution closely (see Fig. 6 in the Appendices), whereas the larger distances D in the first and fourth rows show that those processes comply with the Poisson model to a lower degree of conformity. On the far right, Table 6 exhibits the best-fitting values of k, α and β for each group of data. The Gamma (k, α, β) distribution models the time required for an event to occur, given that events occur randomly in a Poisson process with a mean time between events of β.
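A check of this kind can be sketched with SciPy as follows. The data here are synthetic, and the two-parameter fit (location fixed at zero) is an assumption; the exact mapping onto the (k, α, β) parameterization of Table 6 depends on the fitting tool used, so this is illustrative rather than a reproduction of our analysis:

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for the repair times (in days) of one severity group.
rng = np.random.default_rng(42)
ages = rng.gamma(shape=1.26, scale=87.9, size=1150)

# Fit a Gamma distribution with location fixed at zero, then test the fit
# with the Kolmogorov-Smirnov statistic.
shape, loc, scale = stats.gamma.fit(ages, floc=0)
d_stat, p_value = stats.kstest(ages, "gamma", args=(shape, loc, scale))

print(f"shape={shape:.3f}, scale={scale:.1f}, D={d_stat:.3f}, p={p_value:.3f}")
# The Gamma hypothesis is not rejected at the 95 % level when p > 0.05.
```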
4.5 Defects Distribution

It is generally observed that users detect several defects soon after a product is released and that the number of opened RFCs declines over time. We posited that studying the distribution of defects over time could reveal some pattern and regularity. The discussion in this section outlines our quest for a statistical law of defect emergence.
We examined the temporal series of defects discovered for releases B.1, B.2, B.2.1 and B.2.2 in order to find the best description of these series. Applying the Kolmogorov–Smirnov test, we observed that all four series of data fit the Wakeby (WAK) distribution, with the test accepted at the 99 % confidence level. Table 7 shows the fitness parameters of the tests: D (statistic), P (probability value) and R (rank). On the right-hand side, Table 7 exhibits the best-fitting values of the Wakeby parameters. Since R equals 1 in every case, the Wakeby model is the best fit among the 40 distributions tested, even though the temporal series exhibit very different profiles. Figures 2, 3, 4 and 5 plot the probability density functions for releases B.1 to B.2.2; each PDF covers the dates on which defects were opened during range B (see Table 1). The dates were binned in order to run the Kolmogorov–Smirnov test, and the far-right column of Table 7 reports the width, in days, of the bars plotted in Figs. 2, 3, 4 and 5.
5 Discussion

The analysis of the defect time series conducted on the four versions of the middleware product offers insights with implications for both research and practice. The first result is a confirmation of the contingent relationship between software development methods and software maintenance effort: version B.1 was implemented by a different development team with different methods, and this determined a higher number and a higher severity of defects. This evidence confirms the perception that the level of engagement in the software development process determines the error-proneness of the software produced. Further investigation in this direction can lead to a deeper understanding of the contingent factors and their effects. As a practical implication of this finding, software companies can better identify effective configurations of software development methods with respect to the architectural complexity and degree of openness of the digital artifact to be developed.

Fig. 2 PDF of 838 RFCs opened over a span of 1471 days (Release B.1)

Fig. 3 PDF of 322 RFCs opened over a span of 1074 days (Release B.2)
Fig. 4 PDF of 495 RFCs opened over a span of 975 days (Release B.2.1)

Fig. 5 PDF of 593 RFCs opened over a span of 939 days (Release B.2.2)
As a second result, we observed that eighty percent of severity-1 RFCs require over 30 days to fix. This supports the view that severe errors cannot be resolved in less than a certain minimum time, and reflects the fact that communication overheads often lengthen the time required to complete software tasks. Since the time necessary to repair severe defects cannot be compressed, preventive strategies often work better. Proactive maintenance is frequently less expensive, as it directs actions at rectifying a failure's potential root cause rather than waiting for errors to manifest and then addressing them. Such a proactive approach implements the continuous maintenance philosophy advocated when digital platforms are seen as embedded in emerging organizational contexts [19, 29].
As a third result, this study revealed that the software defect time series best fit the Wakeby distribution. We found this distribution to match both partial and entire time series data from all the releases, with high confidence levels. Such regularity deserves particular attention, since it opens new perspectives for further empirical and theoretical studies. At a practical level, the Wakeby distribution can support proactive maintenance activities by forecasting software defects. From a theoretical perspective, it is useful to outline some of the basic properties of the Wakeby distribution.
The Wakeby model is one of the more recent statistical distributions: it was defined by Harold Thomas and introduced by Houghton in 1978 [30]. The WAK function is widely adopted in hydrology, in particular in the area of flood frequency analysis. Thomas defined the Wakeby distribution to account for the 'separation effect', for which a distribution with thick right-hand and left-hand tails is needed; this makes the middle part of the distribution function steeper than in traditional skewed curves. In addition, WAK separates the calculation of the tails through β and δ, the shape parameters of the left-end tail and of the right-end tail respectively. Recall that ξ and α are location parameters and γ is a non-localized shape parameter.
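For reference, the Wakeby distribution has no closed-form density or distribution function in general; it is defined directly through its quantile function (inverse CDF). In the parameterization commonly used in the hydrology literature, with F the non-exceedance probability, the quantile function reads

$$x(F) \;=\; \xi \;+\; \frac{\alpha}{\beta}\left[\,1-(1-F)^{\beta}\,\right] \;-\; \frac{\gamma}{\delta}\left[\,1-(1-F)^{-\delta}\,\right], \qquad 0 \le F < 1 .$$

Parameter conventions vary across software packages, so the symbols here follow one common form rather than the exact convention of the fitting tool behind Table 7.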
It may be highlighted that WAK has five parameters, more than most common families of distributions. This allows for a wider variety of shapes, and the distribution is well suited to the simulation of intricate physical phenomena. Furthermore, the Wakeby distribution exhibits more stability under small perturbations than the Beta distribution and other more common distributions. The Wakeby distribution is thus highly general: it can describe complex events, it is robust against outliers, and it has a closed functional form for determining quantiles.
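Because the quantile function is available in closed form, Wakeby variates can be generated directly by inverse-transform sampling. The sketch below assumes the parameterization given above and non-zero β and δ; the parameter values are purely illustrative, not those fitted in Table 7:

```python
import numpy as np

def wakeby_quantile(F, xi, alpha, beta, gamma, delta):
    """Wakeby quantile function x(F); F is a non-exceedance probability in [0, 1).

    Assumes beta != 0 and delta != 0; the degenerate cases require the
    logarithmic limiting forms instead.
    """
    F = np.asarray(F, dtype=float)
    return (xi
            + (alpha / beta) * (1.0 - (1.0 - F) ** beta)
            - (gamma / delta) * (1.0 - (1.0 - F) ** (-delta)))

def wakeby_sample(n, xi, alpha, beta, gamma, delta, seed=None):
    """Inverse-transform sampling: push uniform variates through the quantile."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(0.0, 1.0, size=n)
    return wakeby_quantile(u, xi, alpha, beta, gamma, delta)

# Illustrative parameter values only (not taken from Table 7):
x = wakeby_sample(10_000, xi=0.0, alpha=5.0, beta=2.0, gamma=1.0, delta=0.3, seed=1)
print(x.mean(), np.percentile(x, [25, 50, 75]))
```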
To the best of our knowledge, this is the first application of the Wakeby distribution in empirical software engineering. A deeper investigation of the meaning of its parameters in the two fields can provide further insights into the dynamics of digital platform evolution, and can lead to identifying possible parallels between the complex socio-technical phenomenon of digital platform evolution and the behaviour of physical, biological or social complex systems.
6 Conclusion

This research contributes to the design of new managerial practices for coping with the evolution of digital platforms. These practices, grounded in the continuous maintenance paradigm, can be informed by new explanatory and predictive theories derived from the analysis of empirical data.
Further empirical studies along these lines are necessary to strengthen the external validity of our results. For instance, the same statistical analysis can be repeated on defect data taken from public sources (e.g. open source projects) or from other proprietary software packages.
Appendices

See Tables 1, 2, 3, 4, 5, 6 and 7, and Fig. 6.
Table 1 Time ranges of the survey

Release   Starting date   A (days)   B (days)
B.1       02/01/2007      1732       1471
B.2       10/12/2007      1390       1074
B.2.1     15/07/2008      1172       975
B.2.2     05/03/2009      939        939
Table 2 Opened and closed RFCs

Release   RFCs opened in        Total opened   Total closed   Closed RFCs      Closed RFCs with        Closed RFCs
          the first two years   RFCs           RFCs           with Age ≥ 365   Age ≥ 30 (severity 1)   with Age = 0
B.1       798                   838            836 (99 %)     42 (5 %)         60/68                   8
B.2       309                   322            321 (99 %)     7 (2 %)          9/13                    5
B.2.1     455                   495            489 (98 %)     8 (1 %)          14/19                   8
B.2.2     399                   593            475 (80 %)     2 (0.4 %)        10/15                   11
Total                           2248                                           93/115 (80 %)
Table 3 Sizes of releases

Release   Release year   Size         Absolute increase   Percentage increase
B.1       2006           637.168 Mb   N.A.                N.A.
B.2       2007           2.377 Gb     1.739 Gb            +272 %
B.2.1     2008           3.214 Gb     0.837 Gb            +48 %
B.2.2     2009           4.502 Gb     1.288 Gb            +153 %
Table 4 Symptoms of defects

#    Symptom           Description                                                                Sev 1   Sev 2   Sev 3   Sev 4   Total
1    Pgm suspended     The program XYZ suddenly hangs or freezes                                  1       –       –       –       1
2    Lost data         Data are lost while XYZ is running                                         –       2       –       –       2
3    Reliability       XYZ is not robust enough and breaks easily                                 –       3       –       –       3
4    Test failed       The system test conducted to validate a patch failed                       –       1       2       –       3
5    Not to spec       No precise symptom of the failure can be specified                         –       2       1       1       4
6    Performance       The performance of one or more functions of XYZ is inadequate              –       1       3       1       5
7    Non-standard      The problem arises randomly                                                1       4       2       –       7
8    Obsolete code     A module of XYZ applies an obsolete algorithm                              2       5       –       –       7
9    Core dump         XYZ suffers a sudden failure or outage without any error message           1       5       1       –       7
10   Design wrong      The failure of XYZ is due to a design error                                –       1       6       –       7
11   Plans incorrect   Discrepancy between the functions planned for XYZ and those required
                       by the user                                                                –       9       1       –       10
12   Intg. problem     Conflicts between XYZ and a program running in the system                  –       7       4       –       11
13   Install failed    The problem arises during the XYZ installation phase                       –       3       11      2       16
14   Usability         Operators find XYZ difficult to use                                        3       9       13      3       28 (1 %)
15   Docs incorrect    The documentation of XYZ is erroneous                                      4       23      7       4       38 (1 %)
16   Function needed   It seems necessary to perfect XYZ or to add a new operation                5       36      17      1       59 (2 %)
17   Build failed      An error occurs during the compilation of XYZ and/or the linker phase      –       49      14      2       65 (3 %)
18   Incorrect I/O     Malfunctions occur during an I/O operation, e.g. as XYZ displays a panel   9       78      46      1       134 (6 %)
19   Program defect    A program error occurs                                                     89      912     762     78      1841 (81 %)
Table 5 Closed defects per severity

Age (days)          Severity 1   Severity 2   Severity 3   Severity 4
Minimum             0            0            0            0
Maximum             427          561          750          346
Std deviation       84.85        98.48        111.55       88.48
Mean                89.83        110.39       106.66       104.45
Kurtosis            3.71         1.74         4.91         −0.05
Skewness            1.89         13.945       1.90         0.89
25th percentile     36           40           24           33
50th percentile     65           79           75           83
75th percentile     120          156          144          166
Number of defects   115          1150         890          93   (grand total: 2248)
Table 6 The Gamma distribution parameters

Severity   D       P       k         α        β
1          0.064   0.693   0.74806   2.2175   28.791
2          0.030   0.244   1.00000   1.2564   87.863
3          0.042   0.073   1.00000   0.9142   116.67
4          0.071   0.705   1.00000   1.1204   95.272
Table 7 The Wakeby distribution parameters of the entire data sets

Release   R   D         P         α           β         γ        δ          ξ         Segment (days)
B.1       1   0.01986   0.88908   261.35      0.82955   121.3    0.10885    39082.0   73.5
B.2       1   0.02816   0.95414   803.23      4.4198    228.77   −0.21498   39460.0   71.6
B.2.1     1   0.0205    0.98272   742.2       5.5649    333.95   −0.44585   39749.0   75.0
B.2.2     1   0.04052   0.27695   6.7581E+8   16826.0   676.17   −1.0297    0         85.36
Fig. 6 PDF of the ages (Severity 1, 2, 3 and 4)
References
1. Resca, A., Za, S., Spagnoletti, P.: Digital platforms as sources for organizational and strategic
transformation: a case study of the Midblue project. J. Theor. Appl. e-Commerce Res. 8, 71–
84 (2013)
2. Spagnoletti, P., Resca, A., Lee, G.: A design theory for digital platforms supporting online
communities: a multiple case study. J. Inf. Technol. 1–17 (2015)
3. Hanseth, O., Lyytinen, K.: Design theory for dynamic complexity in information
infrastructures: the case of building internet. J. Inf. Technol. 25, 1–19 (2010)
4. Marsden, C.T.: Net Neutrality: Towards a Co-regulatory Solution. Bloomsbury Academic,
London (2013)
5. Zittrain, J.: The generative internet. Harv. Law Rev. 119, 1975–2040 (2006)
6. Rossignoli, C., Zardini, A., Benetollo, P.: The process of digitalisation in radiology as a lever
for organisational change: the case of the Academic Integrated Hospital of Verona. DSS
2.0-Supporting Decision Making With New Technologies, p. 261 (2014)
7. Vom Brocke, J., Braccini, A.M., Sonnenberg, C., Spagnoletti, P.: Living IT infrastructures—
an ontology-based approach to aligning IT infrastructure capacity and business needs. Int.
J. Account. Inf. Syst. 15, 246–274 (2014)
8. Boudreau, K.J.: Let a thousand flowers bloom? An early look at large numbers of software app
developers and patterns of innovation. Organ. Sci. 23, 1409–1427 (2011)
9. Vom Brocke, J., Simons, A., Sonnenberg, C., Agostini, P.L., Zardini, A.: Value assessment of
enterprise content management systems: a process-oriented approach. In: D’Atri, A., Saccà, D.
(eds.) Information Systems: People, Organizations, Institutions, and Technologies, pp. 131–
138. Physica-Verlag, Heidelberg (2010)
10. Magni, M., Provera, B., Proserpio, L.: Individual attitude toward improvisation in information
systems development. Behav. Inf. Technol. 29, 245–255 (2010)
11. Lehman, M.M., Ramil, J.F.: Rules and tools for software evolution planning and management.
Ann. Softw. Eng. 11, 15–44 (2001)
12. Gawer, A.: Platforms, Markets and Innovation. Edward Elgar Publishing, Cheltenham (2009)
13. Sorrentino, M., Virili, F.: Web services and value generation in the public sector. Electron.
Gov. 489–495 (2004)
14. Spagnoletti, P., Resca, A.: A design theory for IT supporting online communities. In:
Proceedings of the 45th Hawaii International Conference on System Sciences, pp. 4082–4091
(2012)
15. Williams, R., Pollock, N.: Software and Organisations—The Biography of the
Enterprise-Wide System or How SAP Conquered the World. Routledge, London (2008)
16. Vitari, C., Piccoli, G., Mola, L., Rossignoli, C.: Antecedents of IT dynamic capabilities in the
context of the digital data genesis. In: ECIS 2012: The 20th European Conference on
Information Systems (2012)
17. Spagnoletti, P., Federici, T.: Exploring the interplay between FLOSS adoption and
organizational innovation. Commun. Assoc. Inf. Syst. 29, 279–298 (2011)
18. Yoo, Y., Boland, R.J., Lyytinen, K., Majchrzak, A.: Organizing for innovation in the digitized
world. Organ. Sci. 23, 1398–1408 (2012)
19. Truex, D., Baskerville, R., Klein, H.: Growing systems in emergent organizations. Commun.
ACM 42, 117–123 (1999)
20. Ramesh, B., Cao, L., Baskerville, R.: Agile requirements engineering practices and challenges:
an empirical study. Inf. Syst. J. 20, 449–480 (2007)
21. Lee, G., Xia, W.: Toward agile: an integrated analysis of quantitative and qualitative field data
on software development agility. MIS Q. 34, 87–114 (2010)
22. Pino, F.J., Ruiz, F., Garcia, F., Piattini, M.: A software maintenance methodology for small
organizations: Agile MANTEMA. J. Softw. Maint. Evol. Res. Pract. 24, 851–876 (2012)
23. Subramanyam, R., Ramasubbu, N., Krishnan, M.: In search of efficient flexibility: effects of
software component granularity on development effort, defects, and customization effort. Inf.
Syst. Res. 23, 787–803 (2012)
24. Hirt, S.G., Swanson, E.B.: Emergent maintenance of ERP: new roles and relationships.
J. Softw. Maint. Evol. Res. Pract. 13, 373–387 (2001)
25. Caporarello, L., Viachka, A.: Individual readiness for change in the context of enterprise
resource planning system implementation. In: Proceedings of the 6th Conference of the Italian
Chapter for the Association for Information Systems, pp. 89–96 (2010)
26. Ng, C., Gable, G.: Maintaining ERP packaged software: a revelatory case study. J. Inf.
Technol. 25, 65–90 (2009)
27. Moon, J.Y., Sproull, L.S.: The role of feedback in managing the internet-based volunteer work
force. Inf. Syst. Res. 19, 494–515 (2008)
28. Yin, R.K.: Case Study Research: Design and Methods. Sage Publications, Thousand Oaks
(2009)
29. Pennarola, F., Caporarello, L.: Enhanced class replay: will this turn into better learning? In:
Wankel, C., Blessinger, P. (eds.) Increasing Student Engagement and Retention Using
Classroom Technologies: Classroom Response Systems and Mediated Discourse
Technologies, pp. 143–162. Emerald Group Publishing Limited, Bradford (2013)
30. Houghton, J.C.: Birth of a parent: the Wakeby distribution for modeling flood flows. Water
Resour. Res. 14, 1105–1109 (1978)