Organizational Innovation and Change - Managing Information and Technology
Cecilia Rossignoli, Mauro Gatti, Rocco Agrifoglio (Editors)

Organizational Innovation and Change
Managing Information and Technology
Lecture Notes in Information Systems and Organisation, Volume 13
Series editors
Richard Baskerville, Decatur, USA
Marco De Marco, Roma, Italy
Nancy Pouloudi, Athens, Greece
Paolo Spagnoletti, Roma, Italy
Dov Te’eni, Tel Aviv, Israel
Jan vom Brocke, Vaduz, Liechtenstein
Robert Winter, St. Gallen, Switzerland
More information about this series at http://www.springer.com/series/11237
Editors

Cecilia Rossignoli
Department of Business Administration, University of Verona, Verona, Italy

Mauro Gatti
Department of Management, University of Rome “La Sapienza”, Rome, Italy

Rocco Agrifoglio
Department of Management, Accounting and Economics, University of Naples “Parthenope”, Naples, Italy
This book explores a range of critical issues and emerging topics relevant to the
linkages between information technology and organizational systems. It encourages
debate and opens up new avenues of inquiry in the field of Information Systems,
organization and management studies, by investigating themes of growing research
interest from multiple disciplinary perspectives such as organizational innovation and
impact, information technology, innovation transfer, and knowledge management.
The title of this book, ‘Managing Information and Technology for Organizational Innovation and Change’, already conveys the understanding that information and technology are two crucial factors for developing innovation and for managing change within organizational contexts. Information and technology have long been recognized in the managerial literature as a major source of competitive advantage and increased business performance [1, 2]. In recent decades, organizations have increasingly invested in Information and Communication Technology (ICT) to improve their efficiency and effectiveness, thus creating opportunities for their businesses. Indeed, ICTs have often been seen as a way to develop organizational innovation and to lead organizational change [3–6]. However, empirical experience quickly showed that ICT adoption is
a necessary but not sufficient condition for improving individual, group and organizational performance, thus opening the academic debate on the relevance of managing information and technology within an organizational setting.

Corresponding editor: R. Agrifoglio, e-mail: agrifoglio@uniparthenope.it
Compared with other disciplines, the IS literature has often been reluctant to generalize the relationships between information technology and organizational change.
Building upon the research of Pfeffer [7], Markus and Robey [5] and Orlikowski [8], it is well known that organizational change can be caused by information technology (the so-called technological imperative), by the motives and actions of information technology designers aimed at satisfying managers’ information-processing needs (the organizational imperative), and by the interaction between information technology and its human and organizational users (the emergent perspective). Technology is thus both an external force influencing organizational structure and the outcome of managers’ strategic choices and social actions. Moreover, as Orlikowski [8] stated, the link between technology and organizations is affected by human actions and by the socio-historical context in which technology is developed and used.
Recognizing the existence of such different paradigms, this volume stresses the relationships between ICT, organizational innovation and change, and seeks to deepen understanding of their ties. It also explores the role of information and knowledge within organizational settings, emphasizing the contribution of ICT to knowledge management activities.
The volume is divided into two sections, each focused on a specific theme: ‘ICT, organizational innovation and change’ and ‘ICT and knowledge management’. The content of each section is based on a selection of the best papers (original, double-blind peer-reviewed contributions) presented at the annual conference of the Italian chapter of AIS, held in Genoa, Italy, in November 2014.
This section explores the relationships between ICT, organizational innovation, and change. Its aim is to investigate the factors leading individuals and organizations towards ICT adoption and usage, as well as the effects of such technologies on working practices, on interaction and communication between people, and on organizational structure.
ICTs are part of corporate transformations in today’s competitive environments, often enabling new organizational forms and business models in both the public and private sectors. Organizations expect to use new ICT to run new processes, innovate products and services, reduce operating costs, and improve business management, with the aim of transforming their internal structures into better-performing organizations. The adoption and usage of ICT is usually accompanied by the redesign of business processes and changes in the organizational structure.
Empirical evidence and academic literature have widely shown that the effective
implementation of new ICT is one of the most challenging tasks faced by managers,
since it requires people to understand, absorb and adapt to the new requirements [9, 10]. Managers often consider the implementation and adoption of ICTs as a way to promote and realize organizational and managerial changes [11–14]. However, organizational change does not arise from ICT adoption and usage alone; it also depends upon a combination of technical and social influences which cannot always be controlled [15, 16]. Indeed, the success or failure of ICT implementation and adoption is mediated by a number of factors, many of which require an in-depth understanding of the organizational context and human behaviour [10, 17–21].
This part of the volume comprises ten contributions exploring the interplay between ICT, organizational innovation and organizational change through different methodologies, theories and approaches. These studies stress the role of ICT, discussing the factors that limit or encourage technology adoption and usage, and the effects of such technology on organizations arising from its interaction with human choices and institutional properties.
Spagnoli, Bellini, and Ghi’s paper aims to develop a methodology for evaluating the economic, social, legal and environmental impacts of cloud computing initiatives in the Italian PA and, in particular, in the Ministry of Economic Development.
Castelnovo, Sorrentino, and De Marco explore an e-government initiative in Italy, the municipal One-Stop Business Shops (SUAPs), developed and launched by the Italian legislator in 1998 to simplify government relations with business and industry.
Spinelli analyzes the literature on IT adoption in SMEs and combines per-
spectives from various research streams in order to identify its determinants—
barriers and incentives. The paper explores well-established research areas and aims
at highlighting links which are underdeveloped or ignored, and provides directions
for future research.
Marchegiani and Rossi’s paper also explores the interplay between technology
and organizational change, but focusing on the effects of recent technological
innovations on the valorization of cultural heritage. This research aims to identify the sense-making that each actor attaches to the technological innovations, and its impact on cultural heritage valorization.
Zardini, Rossignoli and Campedelli, instead, explore the interplay between ICT and organization within a particular sector of the Italian PA, the healthcare sector. Using the framework of Zaharia and colleagues, the study investigates the impact of Electronic Medical Record (EMR) implementation in an Italian university hospital.
Ennas, Marras and Di Guardo investigate trends in the microprocessor market in order to understand whether competition between rival technologies can be reopened after a dominant paradigm emerges. The results show the existence of a non-conventional S-curve trend.
Depaoli, Resca, De Marco and Rossignoli aim to assess Claudio Ciborra’s legacy in Information Systems and Organizational Studies. Comparing Ciborra’s seminal work, ‘The Labyrinths of Information’, with papers published in four top IS journals, the research shows that Ciborra’s thinking contributed to the swing toward a more praxis-oriented attitude in the IS discipline.
Based on the social innovation literature, and digital social innovation in particular, Passani, Spagnoli, Bellini, Prampolini and Firus’s paper analyzes the social, economic, political and environmental impacts of Collective Awareness Platforms for Sustainability (CAPS) using IA4SI, an ad hoc methodology developed for assessing digital social innovation projects.
Pozzi, Pigni, Vitari, Buonanno, and Raguseo conduct a literature review of business model studies in the IS discipline. Using an electronic search, the paper provides an overview of business model studies in the IS field, highlighting the main research streams and limitations.
Finally, using a case-study method, the paper by Makhlouf and Allal-Cherif explores the consequences of the simultaneous implementation of different process approaches at Telkom. The research analyzes the contributions of implementing these approaches and the resulting problems concerning governance, agility and strategic flexibility.
This section explores the relationship between ICT and knowledge management.
The aim is to investigate how individuals, groups and organizations manage
information and knowledge and which technologies enable them to run this process
more efficiently.
The literature has widely recognized knowledge as a strategic asset for organizational growth and sustained competitive advantage [9, 22–26]. Nowadays, organizations view knowledge as a crucial resource, a key for survival and success, mainly due to intense competition and increasingly dynamic environments. More than ever before, business complexity and the growth in information volume, velocity, and variety have significantly increased the difficulty individuals face in managing knowledge activities within organizational settings [9, 27]. People need advanced, effective methods and tools to take advantage of the ways knowledge is acquired and exploited within organizations [28, 29]. To address knowledge management issues, software houses and vendors have designed various platforms enabling organizations to develop, share and access huge quantities of resources from internal and external sources [30]. Organizations are also increasingly looking for new ways and tools to acquire knowledge from outside [31, 32]; communities of practice and cloud, social and mobile platforms are some examples [33–35].
This part of the volume comprises ten contributions exploring the interplay between information, technology, and knowledge management. Using different methodologies, theories and approaches, these studies stress the different concepts and meanings of information and knowledge, discussing the role of various
Raguseo, Vitari and Pozzi, instead, explore the relationship between ICT and knowledge management, focusing on a particular platform for generating and capturing data natively in digital form, integrating these data into the appropriate business processes, and effectively managing the data once produced. In particular, this research investigates whether developing the Digital Data Genesis dynamic capability in firms leads to valuable outputs in terms of data quality and data accessibility.
Finally, Rocchi, Spagnoletti and Datta investigate digital platforms with par-
ticular reference to their maintenance process from the perspective of the software
vendor. The paper aims to explore the digital platform evolution processes in order
to identify new methods for guiding the emergence of complex socio-technical
systems.
References
1. Porter, M.E.: Technology and competitive advantage. J. Bus. Strategy 5(3), 60–78 (1985)
2. Melville, N., Kraemer, K., Gurbaxani, V.: Information technology and organizational
performance: an integrative model of IT business value. MIS Q. 28(2), 283–322 (2004)
3. Orlikowski, W.J.: CASE tools as organizational change: investigating incremental and radical
changes in systems development. MIS Q. 17(3), 309–340 (1993)
4. Orlikowski, W.J.: Improvising organizational transformation over time: a situated change
perspective. Inf. Syst. Res. 7(1) (1996)
5. Markus, M.L., Robey, D.: Information technology and organizational change: causal structure
in theory and research. Manag. Sci. 34(5), 583–598 (1988)
6. Ricciardi, F., Rossignoli, C., Zardini, A.: Factors influencing the strategic value of IT: a
literature review. In: Jun, Y. (ed.) Humanities, social sciences and global business
management. Singapore Management and Sport Science Institute, Singapore (2012)
7. Pfeffer, J.: Organizations and organization theory. Pitman, Marshfield (1982)
8. Orlikowski, W.J.: The duality of technology: rethinking the concept of technology in
organizations. Organ. Sci. 3(3), 398–427 (1992)
9. Gatti, M.: Cultura d’impresa, innovazione e conoscenza. In: Brondoni, S.M. (ed.)
Market-driven management, concorrenza e mercati globali. Giappichelli, Torino (2007)
10. Magni, M., Pennarola, F.: Intra-organizational relationships and technology acceptance. Int.
J. Inf. Manag. 28(6), 517–523 (2008)
11. Rossignoli, C.: Coordinamento e cambiamento. Tecnologie e processi interorganizzativi,
FrancoAngeli (2004)
12. Agrifoglio, R., Metallo, C.: ERP acceptance: the role of affective commitment. In: D’Atri, A.,
De Marco, M., Braccini, A.M., Cabiddu, F. (Eds.) Management of the interconnected world.
Springer, Berlin (2010)
13. Metallo, C.: L’evoluzione dei sistemi informativi: un’analisi nei contesti information-intensive.
ARACNE editrice, Roma (2011)
14. Mola, L., Pennarola, F., Za, S.: From information to smart society: environment, politics and
economics. Lecture Notes in Information Systems and Organisation (LNISO), vol. 5. Springer,
Berlin (2015)
15. Robey, D., Sahay, S.: Transforming work through information technology: a comparative case
study of geographic information systems in county government. Inf. Syst. Res. 7(1), 93–110
(1996)
16. Giustiniano, L., Bolici, F.: Organizational trust in a networked world: analysis of the interplay
between social factors and information and communication technology. J. Inf. Commun.
Ethics Soc. 10(3), 187–202 (2012)
17. Davis, F.D., Bagozzi, R.P., Warshaw, P.R.: User acceptance of computer technology: a
comparison of two theoretical models. Manag. Sci. 35(8) (1989)
18. Venkatesh, V., Morris, M.G., Davis, G.B., Davis, F.D.: User acceptance of information
technology: toward a unified view. MIS Q. 27(3), 425–478 (2003)
19. Braccini, A.M.: Does ICT influence organizational behaviour? An investigation of digital
natives leadership potential. In: Spagnoletti, P. (Ed.) Organizational change and information
systems. Lecture Notes in Information Systems and Organisation, vol. 2, pp 11–19 (2013)
20. Agrifoglio, R., Metallo, C., Black, S., Ferrara, M.: Extrinsic versus intrinsic motivation in
continued Twitter usage. J. Comput. Inf. Syst. 53(1), 33–41 (2012)
21. Agrifoglio, R., Metallo, C., Lepore, L.: Success factors for using case management system in
Italian courts. Inf. Syst. Manag. (In Press)
22. Nonaka, I.: A dynamic theory of organizational knowledge creation. Organ. Sci. 5(1), 14–37
(1994)
23. Miller, D., Shamsie, J.: The resource-based view of the firm in two environments: The
Hollywood film studios from 1936 to 1965. Acad. Manag. J. 39(3), 519–543 (1996)
24. Teece, D.J.: Capturing value from knowledge assets: the new economy, markets for
know-how, and intangible assets. Calif. Manag. Rev. 40(3), 55–79 (1998)
25. Alavi, M., Leidner, D.E.: Knowledge management and knowledge management systems:
conceptual foundations and research issues. MIS Q. 25(1), 107–136 (2001)
26. Schultze, U., Leidner, D.E.: Studying knowledge management in information systems
research: discourses and theoretical assumptions. MIS Q. 26(3), 213–242 (2002)
27. Malhotra, Y.: Integrating knowledge management technologies in organizational business
processes: getting real time enterprises to deliver real business performance. J. Knowl. Manag.
9(1), 7–28 (2005)
28. Spagnoletti, P., Resca, A.: The duality of information security management: fighting against
predictable and unpredictable threats. J. Inf. Syst. Secur. 4(3) (2008)
29. Rossignoli, C., Mola, L., Cordella, A.: Reconfiguring interaction through the e-marketplace: a
transaction cost theory based approach. In: Dwivedi, Y., Lal, B., Williams, M., Schneberger,
S., Wade, M. (eds.) Handbook of research on contemporary theoretical models in information
systems, pp. 311–324. Information Science Reference, NY (2009)
30. Zardini, A., Mola, L., Vom Brocke, J., Rossignoli, C.: The role of ECM and its contribution in
decision-making processes. J. Decis. Syst. 19(4) (2010)
31. Lindkvist, L.: Knowledge communities and knowledge collectivities: a typology of knowledge
work in groups. J. Manag. Stud. 42(6), 1189–1210 (2005)
32. Handley, K., Sturdy, A., Fincham, R., Clark, T.: Within and beyond communities of practice:
making sense of learning through participation, identity and practice. J. Manag. Stud. 43(3),
641–653 (2006)
33. Alvino, F., Agrifoglio, R., Metallo, C., Lepore, L.: Learning and knowledge sharing in virtual
communities of practice: a case study. In: D’Atri, A., Ferrara, M., George, J.F., Spagnoletti,
P. (Eds.) Information technology and innovation trends in organizations. Springer, Berlin
(2011)
34. Francesconi, A., Bonazzi, R., Dossena, C.: Solar system: a new way to model online
communities for open innovation. In: Spagnoletti, P. (Ed.), Organizational change and
information systems. Lecture Notes in Information Systems and Organisation, vol. 2,
pp. 205–214 (2013)
35. Schiavone, F., Metallo, C., Agrifoglio, R.: Extending the DART model for social media. Int.
J. Technol. Manag. 66(4), 271–287 (2014)
Part I
ICT, Organizational Innovation and Change
A Methodology for the Impact Assessment of a g-Cloud Strategy for the Italian Ministry of the Economic Development
Abstract The paper aims to provide a methodology for the socio-economic, technological and environmental impact assessment of a Cloud Computing strategy for the Italian Ministry of Economic Development and, more specifically, for its Department for Communications. In order to develop a detailed and tailored model for implementing the g-Cloud strategy, we analyse the current services and functions performed by the Department for Communications, showing the current ways of managing information flows within and outside the administration. Starting from the available background analysis of the state of the art of g-Cloud adoption in Europe and the USA, we provide assumptions and hypotheses for the definition of the g-Cloud strategy. We then compare these with the requirements provided by the General Directorates of the Department for Communications in order to validate the hypotheses previously defined. By reviewing the impact assessment approaches available in the literature, we define the most effective methodology for assessing the potential impacts of g-Cloud strategies. The methodology considers four areas of impact: economic, social, legal and environmental. For each area of impact we identify specific indicators for assessing the efficiency and effectiveness of Cloud Computing initiatives in the Italian PA; these indicators have been validated by a set of Cloud Computing experts.
The European Economic and Social Committee, on January 20, 2011, decided to draw up an opinion on the subject “Cloud Computing in Europe” [1], in accordance with art. 29, paragraph 2 of the Rules of Procedure. Based on the Europe 2020 strategy [2], and in particular on the Digital Agenda, the European Economic and Social Committee (EESC) primarily aimed to gather and share the experiences developed by stakeholders and the market in the Cloud Computing field. The opinion also aimed to formulate a series of recommendations to encourage Europe to position itself at the forefront of this promising field, with the help of leading companies. It highlighted the potential economic benefits and the weaknesses of Cloud Computing technologies, the latter mainly due to a lack of maturity. With reference to the economic model of Cloud infrastructures, the Committee identified the following elements as the most relevant for the full development of the economic model: a larger number of potential users, the sharing and optimization of resources, user mobility, the easy, flexible and transparent integration of technical components, the distribution of costs throughout the complete life cycle of the technology, the focus on the core business, and the growth opportunities offered by the creation of new fields of activity. At the European level, by contrast, the weaknesses of Cloud technologies are mostly related to the lack of a core governance structure, the multiplicity of regulations, the lack of reference points to help users evaluate potential risks, the fragility and saturation of the internet and servers, the risks related to outsourcing and relocating data and processes to countries with a different legal system, and the complexity of the available contracts. Nevertheless, the European Union understands the importance of Cloud Computing strategies for operating in a promising and strategic market. With specific reference to the adoption of Cloud Computing in the Public Administration, the Committee states that these technologies are fully legitimized in the general austerity context, as they do not require huge initial capital investments. Furthermore, public investments could generate a leverage effect by encouraging private national and European telecommunications operators to invest in Cloud Computing technologies.
Vivek Kundra, CIO of the US Government, is the creator of the Federal Cloud turning point [3], a first step in the technological modernization process intended to generate greater efficiency and transparency in the US government. Kundra heads strategic IT investment planning with a federal budget of over $80 billion a year; the US government thus stands as the largest buyer of technology in the world. On 8 February 2011 the US Government issued a “Federal Cloud Computing Strategy” with the aim of providing guidance to federal agencies on complying with the Cloud First strategy. The choice to turn to Cloud Computing technologies was strongly supported by Obama, in order to reduce government operating costs and make government safer, more open and more flexible. About $20 billion of the $80 billion in IT spending, broken down by individual agency, is expected to migrate to the Cloud, mainly through private Cloud deployments. The decision framework for the migration of the US Government to Cloud technologies is based on three processes: selection, to analyse and identify the IT services to move and the timing; provisioning, to aggregate demand, ensure interoperability and integration with the IT portfolio, provide secure contracts, repurpose legacy assets and redeploy freed resources; and management, to shift the IT mindset from assets to services, build new skill sets, monitor provider compliance with SLAs, and re-evaluate vendor and service models.
Within these processes, Kundra first identified the IT operations that had not produced relevant results, in order to redirect $25 million to more profitable activities. The Cloud transformation has affected not only the technologies, but also the cultural and organizational processes of the US government. The processes started by the US government fit coherently into the broader dematerialization strategy and encourage the creation of new service delivery models. Within this context, the Data.gov site will be developed to gather and make available the information of the US government. Currently, the US government budget for the migration to Cloud Computing technologies is $19 billion. The US Government Cloud Computing strategy is aimed at changing how the institution thinks.
According to the Global Cloud Computing Scorecard [4] developed by the Business Software Alliance, which drafted a global ranking of countries prepared to deploy and use Cloud technologies, Italy is third in Europe and sixth in the world. The first positions of the Scorecard are held by Japan, the USA, France, Germany and Australia. The research was based on several indicators, mainly related to the quality of infrastructures and the effectiveness of Italian legislation on Cloud Computing cybercrime and privacy security. A negative element for the full adoption of Cloud Computing technologies by the Italian government is its slow bureaucracy: legislation on the digital signature, for instance, while in line with international standards, often encounters problems in its application. Unfortunately, in terms of adoption of Cloud Computing technologies in the Public Administration, there are no positive data. Indeed, Italy lags behind in adopting infrastructures, platforms and applications residing on the network rather than on corporate servers. In addition to the physiological delays related to decision-making, there is also a lack of central governance. Compared with the growing attention that the US federal government is devoting to the optimization of technological resources, and the adoption of Cloud Computing technologies in Northern Europe, in Italy we
are far behind. The Italian Cloud and ICT as a Service Observatory of the Polytechnic Institute of Milan [5] has analysed the evolution of Cloud Computing in Italy over the past three years through ad hoc empirical research involving 35 Public Administrations; in-house companies validated the survey results. According
to the Observatory, the Cloud infrastructure could be very useful for the Italian Government in order to reduce the costs and inefficiencies of the current systems, to move to a new IT paradigm, and to lower the critical mass of investments and skills required, also allowing smaller Governments to access and benefit from widespread digitisation. However, the analysis of the current technological situation of the Italian Government shows a fragmented infrastructure that is inefficiently managed. Looking at the current Data Center scenario, managing the IT infrastructure is an important source of cost and complexity, as the central Government has 1033 Data Centers, plus 3000 Data Centers of local Governments. The hardware of these Data Centers is managed unevenly and only a fraction of it is used, with virtualisation techniques exploited at only 25 % of their potential.
Consequently, IT spending, although not high in absolute terms, is inefficient and hides management costs of approximately 1 billion euro a year in human resources, while energy expenditure is estimated at 270–300 million euro. According to the Italian Observatory, by following a rationalisation scenario and considering these three main aspects, in five years the Italian Government could achieve savings of 3.7 billion euro. Moreover, if local Governments start to use virtualisation techniques more widely, overcoming the “one server, one application” paradigm, benefits could grow to 5.6 billion euro. The process of rationalising resources through the Cloud infrastructure will require a set of actions, the most important of which is the rationalisation of the infrastructure (Data Centers) to guarantee easily quantifiable medium-term returns, removing scepticism and pushing Governmental actors to action. Nevertheless, during 2012 several positive initiatives for the adoption of g-Cloud infrastructures were launched in Italy. One best practice relates to the health sector, where the debate is most active: several Local Health Authorities (ASL) have tested online payment solutions and adopted Cloud technologies (e.g. the ULSS of Asolo).
In order to correctly analyse the services and functionalities of the Department for Communications, it is necessary to identify the organization charts of the Department, which is constituted by three General Directorates, the Institute of
The General Directorate for planning and management of the radio spectrum allocates frequency bands to the different radio-electrical companies, manages the allocation of frequencies to the stations of different services, and protects duly authorized services through the monitoring and control of the radio spectrum. The Directorate manages the radio spectrum through a coordination and technical assistance process for the resolution of specific problems, with the collaboration of the Regional Inspectorates and the National Center for the control of radio frequency emissions, a body set up within the International Telecommunication Union in the field of Communications.
The General Directorate for the regulation of the postal sector establishes the conditions, prices and tariffs of services, defines the quality level of the postal service, and verifies the compliance of Poste Italiane spa, responsible for the provision of the service, applying penalties for breaches. This Directorate also sets the “Program Contract” with Poste Italiane spa in order to regulate the relationship between the parties, ensures compliance with the service provision obligations, and participates in the work of international and European organisations.
According to the commonly agreed approach [6], the methodology we propose for impact measurement focuses on the inputs, outputs, outcomes and impacts approach, where:
• Inputs are the investments made in, or the resources required to produce a
product or develop/undertake an activity.
• Outputs are the products or services provided (e.g. number of services created,
papers published, events held, etc.).
• Outcomes are the immediate changes resulting from an activity—these can be
intentional or unintentional, positive or negative (e.g. employment, increased
usability and personalisation).
• Impacts are the net difference made by an activity after the outputs interact with society and the economy (e.g. higher and easier access to cloud services in new member countries leading to the growth of local human resources) (Tables 1, 2 and 3).
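The four-level chain above can be sketched as a simple data structure; every indicator name below is hypothetical and serves only to illustrate the distinction between the levels:

```python
# A minimal sketch of the inputs -> outputs -> outcomes -> impacts logic model.
# All indicator names are hypothetical, for illustration only.
logic_model = {
    "inputs":   ["investment in cloud infrastructure (EUR)", "staff time (person-months)"],
    "outputs":  ["number of services migrated to the g-Cloud", "training events held"],
    "outcomes": ["increased usability and personalisation", "employment effects"],
    "impacts":  ["easier access to cloud services", "growth of local human resources"],
}

def describe(model):
    """Return one line per level, from resources invested to net societal effects."""
    return [f"{level}: {', '.join(items)}" for level, items in model.items()]

for line in describe(logic_model):
    print(line)
```

Keeping the four levels separate in this way makes it explicit which indicators measure what a strategy consumes, what it delivers, and what net difference it ultimately makes.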
The methodology presented in this chapter is based on a quali-quantitative approach to impact assessment and builds on the principles of Cost-Benefit analysis [7, 8] and of Multi-Criteria analysis [9]. These two methods are seen as complementary to one another, as they help frame both impacts that can be represented in a monetary form and impacts that are better described in
A Methodology for the Impact Assessment of a g-Cloud Strategy … 17
Italian PA. In the following tables we provide a list of indicators that could be used to assess output efficiency (Tables 4 and 5).
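To illustrate how Multi-Criteria analysis complements the monetary side of the assessment, a weighted-sum score over non-monetisable impacts can be sketched as follows; the criteria, weights and 0–10 expert scores are all hypothetical:

```python
# A minimal weighted-sum multi-criteria sketch for impacts that resist
# monetisation. Criteria, weights and 0-10 expert scores are hypothetical.
criteria = {
    # criterion: (weight, expert score on a 0-10 scale)
    "service usability":       (0.4, 7.0),
    "data-security assurance": (0.3, 6.0),
    "staff skill development": (0.3, 8.0),
}

def weighted_score(criteria):
    """Aggregate expert scores into a single 0-10 figure; weights must sum to 1."""
    total_weight = sum(w for w, _ in criteria.values())
    assert abs(total_weight - 1.0) < 1e-9, "weights must sum to 1"
    return sum(w * s for w, s in criteria.values())

print(round(weighted_score(criteria), 2))  # 0.4*7 + 0.3*6 + 0.3*8 = 7.0
```

Such a score can then sit alongside the monetary indicators of the cost-benefit side, with the weights elicited from the same expert panel that validates the indicators.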
Once the indicators for measuring the impacts of the identified assessment categories are defined, the third assessment step consists in measuring the related costs and benefits. Consistently with the principles of cost-benefit analysis, the benefits generated by a project/strategy may be evaluated by identifying society’s willingness to pay for obtaining that positive impact. The final assessment of a project/strategy’s efficiency will be made using the following indicators:
• Economic net present value (ENPV*) perceived: the difference between the discounted total economic benefits and costs. The benefits will be evaluated as (1) the total willingness to pay of the users (i.e. the average willingness to pay of the users multiplied by the total number of users), (2) the average time savings (in hours) per user multiplied by the average hourly salary of
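The ENPV logic described in this bullet can be sketched as follows, assuming a hypothetical discount rate, time horizon and user figures, and limiting the benefits to the two components named in the text (users’ willingness to pay and monetised time savings):

```python
def enpv(benefits_per_year, costs_per_year, discount_rate):
    """Economic net present value: discounted benefits minus discounted costs."""
    return sum(
        (b - c) / (1 + discount_rate) ** t
        for t, (b, c) in enumerate(zip(benefits_per_year, costs_per_year), start=1)
    )

# Hypothetical yearly benefit, built from the two components in the text:
users = 10_000
avg_willingness_to_pay = 12.0   # EUR per user per year (hypothetical)
avg_time_saved_hours = 1.5      # hours saved per user per year (hypothetical)
avg_hourly_salary = 20.0        # EUR per hour (hypothetical)

yearly_benefit = users * (avg_willingness_to_pay
                          + avg_time_saved_hours * avg_hourly_salary)
benefits = [yearly_benefit] * 3          # three-year horizon
costs = [200_000, 50_000, 50_000]        # hypothetical investment + running costs

value = enpv(benefits, costs, discount_rate=0.05)
print(round(value, 2))
```

A positive ENPV indicates that, under the chosen discount rate, society’s valuation of the strategy exceeds its discounted costs.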
During a first round of interviews with a set of experts, the previously mentioned indicators were validated and considerably reduced, in order to provide a second group of experts with only the indicators that are effectively relevant for the analysis
of a g-Cloud strategy for the Italian PA. Interviews were conducted with fourteen leading experts in the Cloud Computing field for the Italian PA to explore the possible Cloud adoption process and outcomes for the Italian Public Administration. Open-ended interviews are one of the approaches used by researchers, and an increasing number of researchers are adopting multi-methodology approaches to achieve broader and often better results. Interviewing is currently undergoing not only a methodological change but a much deeper one, related to self and other [10]. We structured each interview around six open-ended questions. The experts were selected according to their experience and knowledge of national and international Cloud Computing service adoption in both the private and public sectors, so that they could provide a reliable and accurate analysis. The interviews were conducted in different ways: on Skype, face-to-face, by phone and by e-mail. In this section we focus on the analysis of the results of the six open-ended questions, presented together. The experts were invited to express an opinion in terms of assessment of the benefits and the legal, managerial and operational impacts of Cloud strategies for the Italian PA. With regard to
4 Conclusions
References
1. European Commission: Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions: Unleashing the Potential of Cloud Computing in Europe. http://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52012DC0529 (2012)
2. European Commission: Europe 2020: A European strategy for smart, sustainable, and
inclusive growth. http://ec.europa.eu/europe2020/europe-2020-in-a-nutshell/index_en.htm
(2010)
3. Kundra, V.: Federal cloud computing strategy. The White House, Washington, DC. https://www.dhs.gov/sites/default/files/publications/digital-strategy/federal-cloud-computing-strategy.pdf (2011)
4. Business Software Alliance: Global cloud computing scorecard, country report Italy. http://
portal.bsa.org/cloudscorecard2012/assets/pdfs/country_reports/Country_Report_Italy.pdf
(2012)
5. Corso, M., Mainetti, S., Piva, A.: La via del cloud per la rivoluzione digitale nella PA. www.
agendadigitale.eu (2012). Accessed 20 Nov 2012
6. UNDP Evaluation Office: Guidelines for outcome evaluators. http://web.undp.org/evaluation/
documents/HandBook/OC-guidelines/Guidelines-for-OutcomeEvaluators-2002.pdf (2002)
7. Boardman, A.E.: Cost-Benefit Analysis: Concepts and Practice, 3rd edn. Pearson Prentice Hall, Upper Saddle River (2006)
8. Brent, R.J.: Applied cost-benefit analysis, 2nd edn. Edward Elgar, Cheltenham (2007)
9. Köksalan, M., Wallenius, J., Zionts, S.: Multiple criteria decision making: from early history
to the 21st century. World Scientific, Singapore (2011)
10. Fontana, A., Frey, J.: Interviewing: The art of science. In: Denzin, N.K., Lincoln, Y.S. (Eds.)
Handbook of Qualitative Research, pp. 361–376. Sage Publications, Thousand Oaks (1994)
11. DigitPA: Raccomandazioni e proposte sull’utilizzo del cloud computing nella pubblica amministrazione. http://archivio.digitpa.gov.it/sites/default/files/forumPA2012/RaccomandazioniCloud.pdf (2012)
Italy’s One-Stop Shop: A Case
of the Emperor’s New Clothes?
1 Introduction
Governments are introducing new e-government services every day and bench-
marking is an important mechanism for keeping track of developments [1]. Italy’s
international ranking on the high-income countries’ e-government ladder has never
left the lower rungs in past years, but this trend was reversed in 2010–2011 when
W. Castelnovo
University of Insubria, Varese, Italy
e-mail: walter.castelnovo@uninsubria.it
M. Sorrentino
University of Milan, Milano, Italy
e-mail: maddalena.sorrentino@unimi.it
M. De Marco
International Telematic University UNINETTUNO, Rome, Italy
e-mail: marco.demarco@uninettunouniversity.net
Italy was promoted by both the UN [2] and the EU, with the latter ranking it
significantly higher in its online services scorecard [3]. That advance was partly
thanks to the virtualization—to comply with Presidential Decree 160 of 2010—of
the One-Stop Business Shops (Sportelli Unici per le Attività Produttive or ‘SUAP’).
In fact, in tandem with the launch of other web-based services, the SUAP was pivotal to the government’s policies for administrative simplification and for cutting the excessive red tape imposed on businesses, especially the small, medium and micro enterprises (SMMEs). Both the OECD and the EU recognized that Italy had managed to reduce the administrative burdens on businesses and improve the quality of regulation, which they considered essential to the country’s competitive growth. Moreover, the virtualization of the SUAPs signalled the end of a process of change, begun in 1998, that called for streamlining the PA’s relations with businesses. Despite all this, the competitiveness indicators used to measure how simple it is to set up and operate a business in Italy, and the satisfaction of Italian SMMEs with the PA’s delivery of services, suffered a further decline.
To understand the reasons for this apparent paradox, this qualitative research
uses the Italian government’s attempt to introduce the One-Stop Business Services
and Information Shop as its case study. The aim of the paper is to answer the
following research question:
• Why is it so difficult to deliver on the One-Stop Business Shop’s promise, i.e., the promise that citizens can get all the services they need under one physical or virtual roof [4]?
Bringing together three research strands, i.e., e-Government, Information
Systems (IS), and Public Management (PM) studies, the paper attempts to shed
light on both the mechanisms that regulate the functioning of the One-Stop
Business Shop in Italy and the factors that influence its development. To address the
research question, the paper shows how the entire SUAP-centred simplification
process has suffered from legislative overkill, while the actual implementation
processes and the impact of the new laws on the behaviour of the actors and the
decision makers at the different levels of Italy’s PA have been ignored. The failure
to take account of the organizational aspects has, in turn, prevented a robust
evaluation of e-Government initiatives [5].
The remainder of the paper is organized as follows. After a brief review of the
literature on one-stop government and a description of the approach taken in this
article, Sect. 4 will analyse the implementation status of the One-Stop Business
Shop (or, to use the Italian acronym, SUAP), retracing how the programme was
developed in legislative ‘fits and starts’ from 1998 to 2010. Sections 5 and 6 use
secondary data sources to investigate the seeming contradiction between Italy’s
promotion in the international rankings and the fact that the SUAP laws have done
nothing to either raise the country’s economic competitiveness or reduce SMME
administrative burdens. In addition, the paper pinpoints and discusses several
problems that continue to prevent Italy’s One-Stop Shops from becoming fully
operational that even the latest legislative measures have failed to remedy. The
paper closes with Sect. 7, in which the finger of blame for many of the critical
aspects that prevent the SUAP programme from generating the expected benefits
can be pointed at the “innovation through legislation” approach, the same approach
that has stymied many of Italy’s PA reform programmes in the past 20 years [6].
2 Related Work
Regardless of its many guises and methods of implementation, the idea of one-stop
government is mainly grounded on the bundling and/or integration of public ser-
vices that can be accessed from a single point of contact; the re-design of the
services architecture and the service delivery from a citizen-centred viewpoint; and
the availability of multiple delivery channels, including the online channel.
Over the years, the scramble to define ‘one-stop government’ has been led by the
supranational organizations (i.e., the UN, OECD, World Bank, EU) and the large
consulting firms. Meantime, one-stop government has captured the attention of the
e-Government academic community, which sees it as pivotal to each e-Government
system [7–12].
More recently, the topic of one-stop government was taken up by IS scholars,
above all, to delve into specific conceptual aspects, especially:
• public agency interoperability/integration to support the execution of
inter-organizational workflows, as required by the single-point-of-contact idea
itself [1, 13–18];
• the study of inter-organizational transformation/innovation processes and
reengineering process models from the perspective of inter-organizational
cooperation between different public agencies [19–25];
• the study of business and service delivery models, particularly in terms of the
single point of contact’s delivery of online services [26–29].
One-stop government and the integration of citizen services have been amply
debated in the PM literature since the 1990s [30–32], especially from the
client/user-based perspective [33–36]. More recently, significant interest was
revived by the public administration reform discourse of the post-New Public
Management (NPM) era [37–43], which, in particular, sees the one-stop government
model as an example of politically driven centralization to rectify problems of
service delivery coordination by vertically reintegrating devolved and outsourced
service delivery functions into new centrally controlled service agencies [4].
The three research fields of e-Government, IS and PM have investigated the
one-stop government issue by zooming in on the various aspects that pertain to their
discipline, even though the concept of one-stop government is multidimensional,
traversing, as it does, several domains: from governance and inter-organizational
cooperation to the reengineering of business processes and ICT-based organizational
transformation. This multifaceted issue therefore calls for an interdisciplinary
approach to the study of one-stop government models, which, as far as the authors
can ascertain, the literature has not yet developed. The paper aims to narrow that gap.
3 Research Strategy
The empirical study of organizational change and transformation requires that the
analysis of the content and process of change should not be abstracted from the
context that gives that change form, meaning and dynamic [44]. As a result, to
interpret the true state of play it is necessary to take a dual approach that marries
attention to agency with the recognition that organizations are contextually
embedded phenomena with ‘deep structures’ that are frequently reproduced [44].
To shed some light on what is happening in Italy’s One-Stop Business Shop
domain and why the endeavour has produced disappointing results to date, the
article assumes that the process to implement the PA’s complex reform programme,
which involves various constituencies, has been strongly conditioned by the tension
between the typically ideal model (i.e., the online One-Stop Shop [11]) and the
constraints imposed by the structural and cultural features of the national admin-
istrative system. Hence, institutions are assigned a focal analytic position as an
explanatory variable of the observed outcomes [44, 45].
A complete evaluation effort would have meant conducting an in-depth and
rigorous analysis in terms of scope and methods: “evaluations of comprehensive
reforms are likely to require both quantitative and qualitative evidence” [3, 46].
Therefore, in line with the explorative design adopted, the selected evidence used
here focuses on specific features of the reform package, its temporal evolution and
the perceptions of just one category of stakeholders, i.e., the Italian SMMEs. The
evidence includes some authoritative secondary sources of information that sys-
tematically photograph the country’s SMME system and its business relations with
the bureaucratic machine. A historical data set is used to make a diachronic
interpretation of the phenomena in question.
The next sections document how Italy’s One-Stop Business Shop programme
has veered off the “ideal path” charted by the model proposed by Hogrebe et al.
[11].
The aim of the initial model used to implement the One-stop Business Shop (in
Italian, Sportello Unico per le Attività Produttive—SUAP) was to simplify the
Italian PA’s business authorization process [47, 48]. Law 447/1998 was the first
attempt to introduce the SUAP and called for each municipality to set up a one-stop
business services and information shop, either independently or through
inter-municipal cooperation. To streamline the business authorization procedures
and to give the entrepreneurs a single point of contact for expediting the require-
ments for the start-up, change of activity or closure of a business, the SUAP was
tasked with coordinating all the public agencies involved in the box-ticking process
(e.g., local healthcare authorities, fire brigade, provincial and regional governments,
regional environment authorities and other local agencies). However, the initiative
immediately came up against hurdles that prevented it from achieving its goal to
simplify and reduce the bureaucratic burden on businesses. This triggered a spate of
legislative interventions to raze those barriers.
The objective of Law 340/2000, which deregulated and abrogated specific laws
on matters that now came under the jurisdiction of the SUAP, was to compress the
business authorization timeframe. Law 229/2003 introduced the standard practice of
“tacit consent” or what is called the ‘Statement of Business Start-up’ (in Italian ‘DIA’
or ‘Denuncia d’Inizio Attività’), which Law 122/2010 then replaced with the
Certified Notification of Business Start-up (in Italian SCIA or Segnalazione
Certificata di Inizio Attività). These two interventions eliminated the lengthy wait for
authorizations, permits or licenses by introducing a system that enabled the business
owner to commence activity right away on submission of the DIA or SCIA.
The regulatory framework was further pruned in 2007, when Law 40 introduced
the Single Statement (‘Comunicazione Unica’ or ‘CU’) to enable a new company to
be set up in just one day. Moreover, Law 40/2007 made electronic transmission
mandatory for both company listing in the Register of Companies and for the
exchange of information and documents between the relevant public agencies.
Although this had an indirect impact on the SUAP, it played a major role in
embedding the principle that the PA and business should interact and communicate
using exclusively electronic means. That principle was fully incorporated into the
SUAP framework by Law 133/2008 to further refine the SUAP model by man-
dating both the online delivery of the full range of business services/information
and the electronic transmission of business applications. Law 133/2008 also
provided for the transposition of European Directive 2006/123/EC (the “Services
Directive”), which led to the enactment of Law 160 in 2010 and the launch of the
website www.impresainungiorno.gov as the national Single Point of Contact
(SPC) to give business users online access to information and to enable them to
complete their administrative procedures online.
Law 160/2010 was the last in the SUAP series and was enacted in 2010 to
impose SPC accreditation. This law set out the basic technological requirements
that the SUAPs had to comply with to qualify as full-fledged operators and gave
them a deadline of 1 January 2011. Law 160/2010 also forestalled any further
delays in the government’s SUAP mission by mandating that municipalities unable
to satisfy these requirements must delegate the running of the One-Stop Business
Shop to the local Chamber of Commerce, thus overriding the previous requirement
for the parties to enter a formal voluntary agreement.
Spurred by Law 160/2010, approximately 94.5 % of Italian municipalities had a
SUAP up and running in one of the three prescribed forms by June 2013, i.e., as a
directly managed municipal One-Stop Business Shop; as an inter-municipal
cooperation effort; or fronted by the local Chamber of Commerce. Decisive impetus
came from two of the law’s provisions: the obligation for the municipal SUAPs to
obtain national SPC accreditation; and the automatic transfer of the management of
the SUAP to the local Chamber of Commerce should the municipality fail to
comply with the 1 January 2011 deadline.
The current SUAP landscape thus offers two vistas [49, p. 164]: on the one side there are the Chamber of Commerce SUAPs, which share the same service levels (uniform for the whole of Italy) and operational levels (how many and which electronic applications are managed, by local area, economic activity, type of application, etc.); on the other, the SPC-accredited municipalities, which differ significantly on both counts, given that each player adopts different technical and organizational solutions.
In fact, despite the clearly defined basic requirements, the SPC-accredited
municipalities have equipped their front- and back-end functions with the ICT
solutions deemed most appropriate for their particular organizational structure.
Clearly, this has created nationwide divergences in organizational geometry and the
use of non-standardized forms to comply with the same requirement.
The legislative trail left by the SUAP since 1998 was necessary both to introduce further regulatory and procedural simplification and to set the SUAP on a more technology-driven course, the idea being to ultimately transform it into a virtual
service centre that delivers information and services to business users via the new
digital technologies, the internet and the new media.
Italian Law 160/2010 was the catalyst needed to turn the SUAP into a fully connected One-Stop Business Shop that deals with business applications, statements, reports and communications exclusively through ICT.
The online One-Stop Business Shop can be considered an advanced
e-Government service to all intents and purposes and, hence, a basic pillar of Italy’s
digitization policies that aim to implement the directives issued by the supranational
EU. In fact, by the end of 2010 the European Commission’s DG Information
Society’s annual e-Government benchmark [50] had promoted Italy in its European
ranking of online business services. In particular, the full online availability of the
Italian online business services surveyed by the report spurred the country to pole
position with 100 % availability versus 88 % in 2009. Moreover, Italy’s online
business services sophistication indicator, measured against the parameters of the European Commission’s 5-stage maturity model ((i) information, (ii) one-way interaction, (iii) two-way interaction, (iv) transaction, and (v) targetisation/automation), rose from 86 % in 2009 to 99 % in 2010. So it would seem that Law 160/2010 effectively produced a positive result, at least as far as Italy’s drive to establish e-Government is concerned.
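As a rough illustration of how such a sophistication indicator can be computed (a simplification of the benchmark’s actual scoring method, with hypothetical services and stages), each surveyed service is placed on the five-stage ladder and the average attained stage is expressed as a share of the maximum:

```python
# Simplified sketch of a five-stage sophistication indicator: each service is
# scored by the highest stage it reaches (1 = information ... 5 = targetisation/
# automation), and the indicator is the mean stage as a share of the maximum.
# Service stages below are hypothetical.
STAGES = ["information", "one-way interaction", "two-way interaction",
          "transaction", "targetisation/automation"]

def sophistication(service_stages, max_stage=5):
    """Mean attained stage across services, as a percentage of max_stage."""
    return 100.0 * sum(service_stages) / (len(service_stages) * max_stage)

# Hypothetical basket of surveyed business services, each at its attained stage:
basket = [5, 5, 4, 5, 5]
print(f"{sophistication(basket):.0f} %")  # mean 4.8 of 5 -> 96 %
```

Under this reading, Italy’s move from 86 % to 99 % corresponds to nearly every surveyed business service reaching the top of the ladder.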
However, those bright results were marred by the further decline of Italy’s
competitiveness indicators, i.e., those used to measure how simple it is to set up and
operate a business in Italy and the level of PA service delivery satisfaction of the
country’s Small, Medium and Micro-sized Enterprises (SMMEs), which make up
more than 90 % of Italy’s business landscape.
6 Discussion
The One-Stop e-Government reference framework shown in Fig. 1 [11] can be used
to map the ideal ‘path’ to the status of full-fledged virtual service centre.
By making it mandatory for the municipalities to set up a SUAP, Law 447/1998
created the conditions for the transition from Administrative Organization
(AO) (Quadrant 1) to Service Center (SC) (Quadrant 2), while Law 133/2008 and
Law 160/2010 enabled its transition to Virtual Service Center (VSC) (Quadrant 4).
Nevertheless, as shown in Sect. 5, above, the efficacy of the SUAP programme remains negligible if not zero, which raises the question of how and why Italy’s effective route to a VSC has veered off the “ideal path” described by Hogrebe, Kruse and Nüttgens [11].
It is only possible to make the transition from AO to SC by taking a user-centred
approach to the bundling of services and the simplification of procedures. In a
highly fragmented administrative context such as the Italian PA [53] it is necessary
to closely integrate/coordinate the bundling of services at the intra-organizational
level, i.e., all the offices involved in the delivery of a service, and at the
inter-organizational level, i.e., all the local agencies involved in the business
The administrative simplification issue has again come to the fore of Italy’s cultural
and political debate, mostly as an effect of the ongoing global financial crisis.
Companies are subject to extreme regulation and unrelenting controls that weigh
heavily on their costs and, thus, their income statements, stopping them from
investing in strategic and growth initiatives. The economic effects of better-regulation policies are similar to those of policies that reduce the fiscal drag, but without the same headaches for the sustainability of public finance. That said, the desired result
is equally dependent on the quality of the simplification policies.
The article has mapped the journey of Italy’s municipal One-Stop Business
Services and Information Shops. The findings have built on the relevant literature to
demonstrate how the hurdles continue to thwart the delivery of services from under
one physical or virtual roof, despite the fact that well over a decade has passed since
the first law was enacted.
The One-Stop Shop can only succeed if it is built on the solid foundation of the
PA’s capacity to cultivate a culture of internal and inter-institutional cooperation
with all the external public agencies and offices involved and, thus, to guarantee the
user simpler administrative procedures, timely decisions and the ability to manage
the ‘checks and box-ticking’ side. This could easily have been done with a bit of
forward-thinking on the various coordination actions to put in place, including a
review and reengineering of the tasks carried out by each administrative branch and
government level, a review of the information systems, a redesign of the proce-
dures, and a rethinking of the methods used to connect and interact with the private
sector.
Paradoxically, but hardly surprisingly considering the approach to change management that predominates in Italy [47, 54], the heftiest chunk of the funding
needed to drive change has been poured into producing legislation, with only a
small part invested in the other areas, such as the governance of the simplification
effort led by the SUAP, i.e., the system of coordination and control of the
inter-organizational processes. In other words, despite the highly fragmented,
grid-locked system, the focus was not on the deep causal roots of the problem but on the easier-to-tackle superficial aspects [55]. In fact, changing a sector’s
regulatory framework is the easiest part, whereas it is common knowledge that
digital government projects and initiatives are complex endeavours [56].
The authors are not denying the importance of the legislator’s role in change
management and, in fact, believe that legislation is the bedrock of change. No, what
they are saying is that legislation is only one side of the coin, and that it takes more
than just issuing laws to ensure the actual implementation of the desired change.
Which brings us to the question: What is the difference between regulatory change
and organizational change? Well, the first can be planned and is fairly immediate,
while the second can only be partially planned and, most of all, is often a long and
winding road [54, 57]. Basically, the crux of the government-One-Stop Shop issue
is its implementation, i.e., the strategies to pursue and the levers to press in order to
prime the system to make a significant change in its relational approach to SMMEs.
The evidence examined highlights a sometimes tortuous unravelling of decisions
and objectives, which partly changed along the way to accommodate the reactions
of the various stakeholders. It also reveals the constraints and opportunities that
emerged during implementation [58].
From the theoretical standpoint, the study confirms, first, the usefulness of the
framework developed by Hogrebe, Kruse and Nüttgens [11] for interpreting and
comparing the Italian scenario with the four ideal scenarios found in the extensive
international literature. Second, it helps to increase the body of common knowledge
on public organisations and their dealings with their environments by shedding light on the processes associated with the delivery of One-Stop
Shops.
Support for the reflections developed here should be considered only tentative,
given the exploratory nature of this research. In essence, the route taken confirms
that the assessment of public reforms is worth exploring by the academic com-
munity of organization studies.
The paper is not without limitations. First, the evidence comes entirely from Italy, which means that caution should be exercised before the arguments presented here are generalized to other countries or contexts. A second
limitation is the article’s macro perspective, which does not document the virtuous
situations of the many municipalities that have fully complied with the law and set
up a virtual SUAP.
References
1. Bekkers, V.: The governance of back-office integration. Public Manag. Rev. 9, 377–400
(2007)
2. United Nations: E-Government Survey 2010. Department of Economic and Social Affairs,
New York (2010)
3. OECD: Italy: Reviving Growth and Productivity. OECD, Paris (2012)
4. Howard, C.: Rethinking Post-NPM Governance: The Bureaucratic struggle to implement
one-stop-shopping for government services in Alberta. Public Organ. Rev. 1–18 (2014)
5. Irani, Z., Love, P.E.D., Elliman, T., Jones, S., Themistocleous, M.: Evaluating e-government:
learning from the experiences of two UK local authorities. Inf. Syst. J. 15, 61–82 (2005)
6. Suppa, A., Zardini, A.: The Implementation of a performance management system in the
Italian army. In: Zhou, M. (ed.) Education and Management, Communications in Computer
and Information Science, vol. 210, pp. 139–146. Springer, New York (2011)
7. Wimmer, M.A.: A European perspective towards online one-stop government: the eGOV
project. Electron. Commer. Res. Appl. 1, 92–103 (2002)
8. Glassey, O.: Developing a one-stop government data model. Gov. Inf. Q. 21, 156–169 (2004)
9. Bannister, F.: E-government and administrative power: the one-stop-shop meets the turf war.
Electron. Gov., Int. J. 2, 160–176 (2005)
10. Gouscos, D., Kalikakis, M., Legal, M., Papadopoulou, S.: A general model of performance
and quality for one-stop e-Government service offerings. Gov. Inf. Q. 24, 860–885 (2007)
11. Hogrebe, F., Kruse, W., Nüttgens, M.: One-stop e-Government for small and medium-sized
enterprises (SME): A strategic approach and case study to implement the EU services
directive. Bled 2008 Conference. Bled, Slovenia (2008)
12. Dameri, R.P.: Defining an evaluation framework for digital city implementation. In: The
International Conference on Information Society (i-Society). London (2012)
13. Charih, M., Robert, J.: Government on-line in the federal government of Canada: The
organizational issues. Int. Rev. Admin. Sci. 70, 373–384 (2004)
14. West, D.M.: e-Government and the transformation of service delivery and citizen attitudes.
Publ. Adm. Rev. 64, 15–27 (2004)
15. Guijarro, L.: Interoperability frameworks and enterprise architectures in e-government
initiatives in Europe and the United States. Gov. Inf. Q. 24, 89–101 (2007)
16. Colarullo, F., Di Mascio, R., Virili, F.: Meccanismi di coordinamento nei SUAP (Sportelli
Unici per le Attività Produttive): il caso Enterprise. VII Workshop dei Docenti e dei
Ricercatori di Organizzazione Aziendale, Salerno (2006)
17. Vaast, E., Binz-Scharf, M.C.: Bringing change in government organizations: evolution
towards post-bureaucracy with web-based IT projects. In: International Conference on
Information Systems (ICIS). Paris (2008)
18. Spagnoletti, P., Za, S.: A design theory for e-Service Environments: The interoperability
challenge. In: Snene, M. (ed.) IESS 2012. Springer, New York (2012)
19. Ongaro, E.: Process management in the public sector: The experience of one-stop shops in
Italy. Int. J. Publ. Sect. Manag. 17, 81–107 (2004)
20. Kraemer, K., King, J.L.: Information technology and administrative reform: will e-government
be different? Int. J. Electron. Gov. Res. 2, 1–20 (2006)
21. Mele, V.: Explaining programmes for change: Electronic government policy in Italy
(1993-2003). Publ. Manag. Rev. 10, 21–49 (2008)
22. Leeuw, F.L., Leeuw, B.: Cyber society and digital policies: Challenges to evaluation?
Evaluation 18, 111–127 (2012)
23. Hansson, F., Norn, M.T., Vad, T.B.: Modernize the public sector through innovation? A
challenge for the role of applied social science and evaluation. Evaluation 20, 244–260 (2014)
24. Ricciardi, F., Rossignoli, C., De Marco, M.: Participatory networks for place safety and
livability: organisational success factors. Int. J. Networking Virtual Organ 13, 42–65 (2013)
25. Spagnoletti, P., Federici, T.: Exploring the Interplay between FLOSS adoption and
organizational innovation. Commun. Assoc. Inf. Syst. 29, 279–298 (2011)
26. Janssen, M., Kuk, G., Wagenaar, R.W.: A survey of Web-based business models for
e-government in the Netherlands. Gov. Inf. Q. 25, 202–220 (2008)
27. Kohlborn, T., Weiss, S., Poeppelbuss, J., Korthaus, A., Fielt, E.: Online service delivery
models—an international comparison in the public sector. In: Proceedings of the 21st
Australasian Conference on Information Systems (ACIS). Brisbane, Australia (2010)
28. Peters, C., Kohlborn, T., Korthaus, A., Fielt, E., Ramsden, A.: Service delivery in one-stop
government portals–observations based on a market research study in Queensland. In:
Proceedings of the 22nd Australasian Conference on Information Systems (ACIS). Brisbane,
Australia (2011)
29. Braccini, A.M., Spagnoletti, P.: Defining cooperative business models for inter-organizational
cooperation. Int. J. Electron. Commer. Stud. 3, 229–249 (2012)
30. Agranoff, R.: Human services integration: Past and present challenges in public
administration. Publ. Adm. Rev. 51, 533–542 (1991)
31. Milward, H.B., Provan, K.G.: Governing the hollow state. J. Publ. Adm. Res. Theor. 10, 359–
379 (2000)
32. Ho, A.T.K.: Reinventing local governments and the e-Government initiative. Publ. Adm. Rev.
62, 434–444 (2002)
33. Bellamy, C.: Transforming social security benefits administration for the twenty-first century:
Towards one-stop services and the client group principle? Publ. Adm. 74, 159–179 (1996)
34. Peters, B.G.: Managing horizontal government: The politics of co-ordination. Publ. Adm. 76,
295–311 (1998)
35. Wilkins, P.: Accountability and Joined-up government. Aust. J. Publ. Adm. 61, 114–119
(2002)
36. Pollitt, C.: Joined-up government: A survey. Polit. Stud. Rev. 1, 34–49 (2003)
37. Pollitt, C., Bouckaert, G.: Public management reform. In: A Comparative Analysis: New
Public Management, Governance, and the Neo-Weberian State, 3 ed. Oxford University Press,
Oxford (2011)
38. Dunleavy, P., Margetts, H., Bastow, S., Tinkler, J.: New public management is dead. Long
live digital-era governance. J. Publ. Adm. Res. Theor. 16, 467–494 (2006)
39. Christensen, T., Lægreid, P.: Complexity and hybrid public administration -theoretical and
empirical challenges. Publ. Organ. Rev. 11, 1–17 (2010)
40. Christensen, T., Lægreid, P.: The whole-of-government approach to public sector reform.
Publ. Adm. Rev. 67, 1059–1066 (2007)
41. Howard, C., Langford, J.: The service state: Rhetoric, reality and promise, vol. 25. University
of Ottawa Press, Ottawa (2010)
42. Bouckaert, G., Peters, B.G., Verhoest, K.: The coordination of public sector organizations:
Shifting patterns of public management. Palgrave Macmillan, Basingstoke (2010)
43. Christensen, T.: Post-NPM and changing public governance. Meiji J. Polit. Sci. Econ. 1, 1–11
(2012)
44. McNulty, T., Ferlie, E.: Process transformation: Limitations to radical organizational change
within public service organizations. Organ. Stud. 25, 1389–1412 (2004)
45. Kuhlmann, S., Wollmann, H.: Introduction to Comparative Public Administration:
Administrative Systems and Reforms in Europe. Edward Elgar, Cheltenham (2014)
46. Yin, R.K., Davis, D.: Adding new dimensions to case study evaluations: The case of
evaluating comprehensive reforms. New Dir. Eval. 113, 75–94 (2006)
47. Zardini, A., Rossignoli, C., Mola, L., De Marco, M.: Developing municipal e-Government in
Italy: The city alfa case. In: Fifth International Conference on Exploring Services Science
(IESS 2014). Geneva (2014)
48. Caporarello, L., Viachka, A.: Individual readiness for change in the context of enterprise
resource planning system implementation. In: D’Atri, A., De Marco, M., Braccini, A.M.,
Cabiddu, F. (eds.) Management of the Interconnected World, pp. 89–96. Springer, New York
(2010)
Italy’s One-Stop Shop: A Case of the Emperor’s New Clothes? 39
49. Mattarella, B.G., Natalini, A. (eds.): La regolazione intelligente. Un bilancio critico delle
liberalizzazioni italiane. Passigli Editore, Bagno a Ripoli (2013)
50. Capgemini IDC, Europe, R., Sogeti, DTi: Digitizing public services in Europe: Putting
ambition into action: 9th benchmark measurement. Technical Report by European
Commission (2010)
51. Castelnovo, W.: A country level evaluation of the impact of the e-government: the case of
Italy. In: Gil-Garcia, J.R. (ed.) E-Government success factors and measures: concepts,
theories, esperiences, and practical recommendations. IGI Global, Hershey (2013)
52. PromoPA: Imprese e burocrazia. Come le piccole e micro imprese giudicano la Pubblica
amministrazione. Franco Angeli, Milano (2013)
53. Mola, L., Carugati, A.: Escaping ‘localisms’ in IT sourcing: tracing changes in institutional
logics in an Italian firm. Eur. J. Inf. Syst. 21, 388–403 (2010)
54. Sorrentino, M., De Marco, M.: Implementing e-government in hard times: When the past is
wildly at variance with the future. Inf. Polity 18, 331–342 (2013)
55. Battistelli, F.: Managerializzazione e retorica. In: Battistelli, F. (ed.): La cultura delle
amministrazioni pubbliche fra retorica e innovazione, pp. 23–45. Franco Angeli, Milano
(2002)
56. Luna-Reyes, L.F., Melloulib, S., Bertot, J.C.: Key factors and processes for digital government
success. Inf. Polity 18, 101–105 (2013)
57. Pennarola, F., Caporarello, L.: Enhanced class replay: Will this turn into better learning? In:
Wankel, C., Blessinger, P. (eds.) Increasing Student Engagement and Retention Using
Classroom Technologies: Classroom Response Systems and Mediated Discourse
Technologies, pp. 143–162. Emerald Group Publishing, Bingley (2013)
58. Sorrentino, M., Passerini, K.: Evaluating e–government initiatives: the role of formative
assessment during implementation. Electron. Gov. 9, 28–141 (2012)
The Determinants of IT Adoption
by SMEs: An Agenda for Research
Riccardo Spinelli
Department of Economics and Business Studies, University of Genoa, Genoa, Italy
e-mail: riccardo.spinelli@economia.unige.it

1 Introduction
consistent with each other. In this paper we aim to bring these factors together in a
holistic framework, both to map areas which are well established and to highlight
areas which are underdeveloped or ignored in the literature and which could
provide directions for future research.
In the next section, drivers and inhibitors are reported and organized into a
multi-dimensional framework. A discussion section then follows, proposing reflections
on possible research developments. Finally, some conclusions recap the main
results of the study.
The drivers and inhibitors (hence ‘factors’) discussed in this paper emerge from a
broad analysis of a wide set of papers addressing IT adoption in SMEs. With
respect to the body of literature considered, some limitations must be noted. First,
we paid attention only to factors whose relevance is corroborated by empirical
analysis. Second, IT is defined in wider or narrower ways by different authors: as in
[1], we adopt an inclusive approach, defining IT as including Internet-based
solutions (e-business, e-commerce, etc.), functional (CAD, CAM, etc.) and
integrated (EDI, ERP, CRM, etc.) applications, together with hardware, software
and communication devices. Third, we approached drivers and inhibitors from a
neutral point of view, calling them “factors” and avoiding an a priori classification;
as reported by [4], this helps address inconsistent results regarding a given factor
[5], which vary with the specific setting of each study in terms of data collection
methodology, country, type of firm, interviewee, IT development level, etc. Finally,
terminological discrepancies among authors were resolved by grouping together
factors that share a common meaning despite different names.
Several alternative solutions have been proposed in the literature to organize the set
of adoption factors [1–4, 6–8]. We opted for an adapted TOE framework [9],
which, in our view, keeps the focus on the firm as the unit of analysis and includes
both internal and external determinants. Other widely accepted models (such as the
TAM/TPB [10, 11] or the UTAUT [12]) pay far greater attention to the
individual/user level of analysis, with a consequently narrower focus. Nevertheless,
we do not fully discard the user-based approach: we partially encompass it within
the organizational environment of the TOE framework by paying specific attention
to the characteristics of the decision makers (the SME’s owner-manager or top
management). Indeed, the role they play directly in IT decisions is greater in SMEs
than in larger firms [2]; in major companies, the IT function is usually more
structured and formalized within the organization, and the impact of individuals
(even at the top hierarchical level) is far more mediated by organizational
structures, formalized routines and procedures [13].
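For illustration only, the adapted TOE framework can be pictured as a simple grouping of candidate factors by context. The factor names below are examples suggested by the review (with decision-maker characteristics folded into the organizational context, as discussed above); the technological examples in particular are generic TOE-style assumptions, not an authoritative taxonomy.

```python
# Illustrative grouping of adoption factors by TOE context.
# Factor names are examples, not an exhaustive or authoritative list.
toe_framework = {
    "technological": [
        "perceived benefits of IT",            # assumed example, not from the review excerpt
        "compatibility with existing systems", # assumed example
    ],
    "organizational": [
        "firm size",
        "strategic orientation (growth, innovation)",
        "available funds",
        "staff IT skills and involvement",
        "decision-maker characteristics (owner-manager, top management)",
    ],
    "environmental": [
        "industry of activity",
        "public and private support programs",
    ],
}

for context, factors in toe_framework.items():
    print(f"{context}: {len(factors)} candidate factors")
```

The point of the structure is simply that the firm remains the unit of analysis, while user-level aspects survive as one organizational factor among others.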
papers find a positive correlation between size and IT adoption (see among others
[17, 59, 64–66]); nevertheless, other studies find no statistically significant
relationship [18, 38].
As for the industry of activity, Porter and Millar [67] found significant variability
in the importance and role of IT, which is confirmed in our analysis even if
the literature does not always return consistent results. According to [68], British
SMEs in high-tech and knowledge-intensive industries show higher IT adoption
rates than other manufacturing or service firms, which in turn do not differ
significantly from each other, as also found in [59]. Other contributions, such as [42],
contradict this point and find higher adoption among service firms than manufacturing
ones. Overall, the correlation between IT adoption and industry remains
under-investigated and poorly verified.
As regards the firm’s strategic orientation, an aggressive growth- [13] and
innovation-oriented [65] strategy is another strong driver of IT adoption: in a
hostile and complex environment, the most active SMEs react by entering new markets,
creating new product/market combinations and pursuing technological leadership
through IT [69]. The above-mentioned study by Raymond et al. [46], for instance,
finds more intense adoption of e-business in firms that are innovative in terms of
market, product and technology. Wymer and Regan [5] find that propensity to
innovate is one of the three most important drivers of e-commerce adoption, both
for existing adopters and for would-be adopters. An innovation-focused strategy, in
general, is often associated with past experiences of adopting new technologies
(not necessarily IT), and several studies confirm that these experiences significantly
support IT adoption too [15, 38, 70]. Much less clear is the relationship between
competitive strategy and IT adoption: Bayo-Moriones and Lera-López [66] report
contrasting results from several studies, which seem to find evidence of strategic
interest in IT under both cost-leadership and differentiation approaches.
With respect to the firm’s endowment of resources and competences [71], both [4]
and [19] note that management’s perception of the firm’s own resources and
capabilities is a stronger driver of IT adoption than their absolute value: quite
often (see among others [19]) IT projects are abandoned due to a “perceived” and
prejudicial incompatibility with, for instance, the IT skills of the staff, without any
real test proving it. Among firm resources, available funds play an important role in
driving or inhibiting IT investment [5, 51]: SMEs usually struggle with capital
shortage, which puts pressure on the investment selection process because wrong or
suboptimal investments may threaten the firm’s overall financial stability [72].
IT investments, in particular, usually have medium- to long-term payback periods [7],
which tend to discourage the top management of SMEs. Human resources are the
other resource that most influences the IT adoption process. The literature, in fact,
attaches great importance to the firm’s staff, that is, those individuals who are asked
to use the new systems, in shaping IT investment policies. Igbaria et al. [73], for
instance, relate better IT adoption results to higher staff involvement, which in turn
makes employees feel part of the innovation process and increases their motivation;
moreover, human resources can be a major source of suggestions about system
improvements or the choice of the applications
3 Discussion
The literature review just presented offers several suggestions about those aspects
of IT adoption by SMEs which seem to have already been widely investigated and
those which deserve more study.
First of all, past research seems to have been strongly influenced by an IT-based
approach, both in methodology and in object of analysis. Many studies, in fact, have
adopted models grounded in the IT literature (TRA, TAM/TPB) and applied them
to the relationship between information technology and users in SMEs; this
explains the large number of studies which, for instance, apply regression or
structural equation modelling to find significant correlations between a set of
technology- or user-related variables and the actual adoption of IT. These analyses
are certainly valuable and cast light on the adoption process at the individual level
but, in our opinion, may fail to fully explain IT adoption by SMEs when the
unit of analysis is the firm as a whole and not the single user. As a consequence, this
research stream may be less promising if the analyst aims to trace IT adoption back
to a wide set of implementation determinants encompassing technological,
organizational and environmental factors.
We are also quite critical of those studies which try to correlate strategic
orientation and IT adoption. In this case, we identify a conceptual issue which, in
our opinion, undermines the approach: it assumes that IT adoption is a dependent
variable influenced by independent variables that are measurable items connoting a
specific strategy. From our point of view, IT adoption is part of, and not a
consequence of, any strategic choice and, as a result, a cause-effect correlation
analysis may not be appropriate.
On the contrary, an interesting area which in our view deserves attention lies
in the effect of industry-related factors. The analysis of the correlation between
industry and adoption returns vague results, but this may be due to a flawed
approach to the issue: in our opinion, the industry variable should not be treated
as a direct input to the regression, but as a moderator variable. In other words, it is
not so relevant to find differences in adoption rates across industries; it could be
more interesting to study how the sectoral environment (possibly) changes the sign
and extent of the influence of the other technological, organizational and
environmental factors. We expect significant results from such a study, which could
also contribute to the creation of more tailored support programs for IT adoption in
SMEs by public and private agencies. Many SMEs, in fact, remain dissatisfied with
government business advice services, which they see as lacking in value and failing
to understand their specific needs [88, 89].
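The moderator-variable design suggested above amounts to a regression with an interaction term: industry enters not as a direct predictor of adoption, but as a variable that scales the effect of another factor. The following is a minimal sketch on simulated data; all variable names, coefficients and the dataset itself are invented for illustration and do not come from any cited study.

```python
# Hypothetical sketch: industry as a *moderator* of an adoption driver,
# rather than a direct input. Data and coefficients are simulated.
import numpy as np

rng = np.random.default_rng(42)
n = 200
driver = rng.normal(0, 1, n)               # e.g. an organizational factor score
service_industry = rng.integers(0, 2, n)   # 1 = service firm, 0 = manufacturing

# Simulated "true" process: the driver's effect is stronger in service firms.
adoption = (0.3 * driver
            + 0.5 * service_industry * driver
            + rng.normal(0, 0.5, n))

# OLS with an interaction term: intercept, driver, industry, driver x industry.
X = np.column_stack([np.ones(n),
                     driver,
                     service_industry,
                     service_industry * driver])
beta, *_ = np.linalg.lstsq(X, adoption, rcond=None)

# A sizeable interaction coefficient (beta[3]) signals moderation: the industry
# changes the *strength* of the driver's effect, not the adoption level directly.
print(f"driver effect: {beta[1]:.2f}, interaction: {beta[3]:.2f}")
```

Under this design, a near-zero industry coefficient together with a large interaction term would be exactly the pattern that direct-input regressions fail to detect.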
Finally, and strictly connected with the above considerations, a field of study we
consider potentially fruitful concerns the results of support interventions aimed at
stimulating IT adoption by SMEs. In fact, a pervasive skepticism towards public
support seems to emerge, due to the misalignment between firms’ needs and the
actions implemented [72], which are often accused of being too generic and
insufficiently tailored to specific requests [90]. A proper investigation of the actual
effects of those programs is consequently needed; the critical issue, in our opinion,
is to measure their effectiveness not only in terms of the “quantity” of IT adopted,
but also in terms of the actual effects on the performance and operational routines
of the firms that have benefited from the support.
4 Conclusions
In this paper we have tried to organize, in an original framework, the vast literature
addressing the determinants of IT adoption by SMEs. The main objective was to
identify well-established research areas and equally to highlight areas which are
underdeveloped or ignored in the literature and which could provide directions for
future research.
Our results return a highly composite set of factors influencing IT adoption,
which can be traced back to three main areas: the technological, organizational and
institutional environment. As expected, many of these factors have already been
widely explored in the literature and offer limited prospects for further research.
On the contrary, other areas, especially those related to industry-based factors and
support programs, seem more promising, particularly if addressed in conceptually
and methodologically novel ways.
This final consideration represents, in our view, a stimulus for scholars interested in
the determinants of IT adoption by SMEs, as wide fields of study are still waiting to
be properly explored and could potentially lead to results of importance for both
researchers and practitioners.
References
1. Ghobakhloo, M., Sabouri, M.S., Hong, T.S., Zulkifli, N.: Information technology adoption in
small and medium-sized enterprises; an appraisal of two decades literature. Interdiscip. J. Res.
Bus. 1(7), 53–80 (2011)
2. Fillis, I., Johannson, U., Wagner, B.: Factors impacting on e-business adoption and
development in the smaller firm. Int. J. Entrep. Behav. Res. 10(3), 178–191 (2004)
3. Chitura, T., Mupemhi, S., Dube, T., Bolongkikit, J.: Barriers to electronic commerce adoption
in small and medium enterprises: a critical literature review. J. Internet Bank. Commer. 13(2),
1–14 (2008)
4. Alzougool, B., Kurnia, S.: Towards a better understanding of SMEs perception of electronic
commerce technology adoption. Interdiscip. J. Contemp. Res. Bus. 2(3), 9–37 (2010)
5. Wymer, S., Regan, E.: Factors influencing e-commerce adoption and use by small and
medium businesses. Electr. Mark. 15(4), 438–453 (2005)
6. Barba-Sánchez, V., Martínez-Ruiz, M., Jiménez-Zarco, A.-I.: Drivers, benefits and challenges
of ICT adoption by small and medium sized enterprises (SMEs): a literature review. Probl.
Perspect. Manag. 5(1), 103–114 (2007)
7. Nguyen, T.H.: Information technology adoption in SMEs: an integrated framework. Int.
J. Entrep. Behav. Res. 15(2), 162–186 (2009)
8. Awa, H.O., Nwibere, B.M., Inyang, B.J.: The uptake of electronic commerce by SMEs: a meta
theoretical framework expanding the determining constructs of TAM and TOE frameworks.
J. Global Bus. Technol. 6(1), 1–27 (2010)
9. Tornatzky, L.G., Fleischer, M.: The processes of technological innovation. Lexington Books,
Lexington (1990)
10. Davis, F.D.: Perceived usefulness, perceived ease of use, and user acceptance of information
technology. MIS Q. 13(3), 319–340 (1989)
11. Ajzen, I.: The theory of planned behavior. Organ. Behav. Hum. Decis. Process. 50(2), 179–
211 (1991)
12. Venkatesh, V., Morris, M.G., Davis, G.B., Davis, F.D.: User acceptance of information
technology: toward a unified view. MIS Q. 27(3), 425–478 (2003)
13. Bruque, S., Moyano, J.: Organisational determinants of information technology adoption and
implementation in SMEs: the case of family and cooperative firms. Technovation 27(5), 241–
253 (2007)
14. Rogers, E.M.: Diffusion of innovations. The Free Press, New York (1983)
15. Mehrtens, J., Cragg, P.B., Mills, A.M.: A model of internet adoption by SMEs. Inf. Manag.
39, 165–176 (2001)
16. Al-Qirim, N.A.: E-commerce adoption in small businesses: cases from New Zealand. J. Inf.
Technol. Case Appl. Res. 9(2), 28–57 (2007)
17. Kannabiran, G.: Enablers and inhibitors of advanced information technologies adoption by
SMEs. An empirical study of auto ancillaries in India. J. Enterp. Inf. Manag. 25(2), 186–209
(2012)
18. Love, P.E.D., Irani, Z., Standing, C., Lin, C., Burn, J.M.: The enigma of evaluation: benefits,
costs and risks of IT in Australian small-medium-sized enterprises. Inf. Manag. 42(7), 947–
964 (2005)
19. Kuan, K.K.Y., Chau, P.Y.K.: A perception-based model for EDI adoption in small businesses
using a technology-organization-environment framework. Inf. Manag. 38, 507–521 (2001)
20. Fink, D.: Guidelines for the successful adoption of information technology in small and
medium enterprises. Int. J. Inf. Manag. 18(4), 243–253 (1998)
21. Chong, S., Pervan, G.: Factors influencing the extent of deployment of electronic commerce
for small-and medium-sized enterprises. J. Electr. Commerce Organ. 5(1), 1–29 (2007)
22. Tan, K.S., Chong, S.C., Lin, B., Eze, U.C.: Internet-based ICT adoption: evidence from
Malaysian SMEs. Ind. Manag. Data Syst. 109(2), 224–244 (2009)
23. Ifinedo, P.: Internet/E-business technologies acceptance in Canada’s SMEs: an exploratory
investigation. Internet Res. 21(3), 255–281 (2011)
24. Gibbs, J.L., Kraemer, K.L.: A cross-country investigation of the determinants of scope of
e-commerce use: an institutional approach. Electr. Mark. 12(2), 124–137 (2004)
25. Chiarvesio, M., Di Maria, E., Micelli, S.: From local networks of SMEs to virtual districts?
Evidence from recent trends in Italy. Res. Policy 33(10), 1509–1528 (2004)
26. Kartiwi, M., MacGregor, R.C.: Electronic commerce adoption barriers in small to
medium-sized enterprises (SMEs) in developed and developing countries: a cross-country
comparison. J. Electr. Commerce Organ. 5(3), 35–51 (2007)
27. Riemenschneider, C.K., Harrison, D.A., Mykytyn, P.P.: Understanding IT adoption decisions
in small business: integrating current theories. Inf. Manag. 40(4), 269–285 (2003)
28. Jeon, B.N., Han, K.S., Lee, M.J.: Determining factors for the adoption of ebusiness: the case
of SMEs in Korea. Appl. Econ. 38(16), 1905–1916 (2006)
29. Chatzoglou, P.D., Vraimaki, E., Diamantidis, A., Sarigiannidis, L.: Computer acceptance in
Greek SMEs. J. Small Bus. Enterp. Dev. 17(1), 78–101 (2010)
30. Chiliya, N., Chikandiwa, C.K., Afolabi, B.: Factors affecting small micro medium enterprises’
(SMMEs) adoption of e-commerce in the Eastern Cape Province of South Africa. Int. J. Bus.
Manag. 6(10), 28–36 (2011)
31. Kendall, J.D., Tung, L.L., Chua, K.H., Ng, C.H.D., Tan, S.M.: Receptivity of Singapore’s
SME to electronic commerce adoption. J. Strateg. Inf. Syst. 10(3), 223–242 (2001)
32. Matthews, P.: ICT assimilation and SME expansion. J. Int. Dev. 19, 817–827 (2007)
33. Thulani, D., Tofara, C., Langton, R.: Electronic commerce benefits and adoption barriers in
small and medium enterprises in Gweru, Zimbabwe. J. Internet Bank. Commer. 15(1), 1–17
(2010)
34. Olatokun, W., Kebonye, M.: E-commerce technology adoption by SMEs in Botswana. Int.
J. Emerg. Technol. Soc. 8(1), 42–56 (2010)
35. Cohen, S., Kallirroi, G.: E-commerce investments from an SME perspective: costs, benefits
and processes. Electr. J. Inf. Syst. Eval. 9(2), 45–56 (2006)
36. Love, P.E.D., Irani, Z.: An exploratory study of information technology evaluation and
benefits management practices of SMEs in the construction industry. Inf. Manag. 42(1), 227–
242 (2004)
37. Van Akkeren, J., Cavaye, A.: Factors affecting entry-level internet technology adoption by
small firms in Australia. J. Syst. Inf. Technol. 3(2), 33–47 (2000)
38. Dholakia, R.R., Kshetri, N.: Factors impacting the adoption of the internet among SMEs.
Small Bus. Econ. 23, 311–322 (2004)
39. Levy, M., Powell, P.: Strategies for growth in SMEs. The role of information and information
systems. Elsevier, Oxford (2005)
40. Hunter, K., Kemp, S.: The personality of e-commerce investors. J. Econ. Psychol. 25(4), 529–
537 (2004)
41. Chuang, T.-T., Nakatani, K., Zhou, D.: An exploratory study of the extent of information
technology adoption in SMEs: an application of upper Echelon theory. J. Enterp. Inf. Manag.
22(1/2), 183–196 (2009)
42. Hua, S.C., Rajesh, M.J., Theng, L.B.: Determinants of e-commerce adoption among small and
medium-sized enterprises in Malaysia. In: Thomas, B., Simmons, G. (eds.) E-commerce
adoption and small business in the global marketplace: tools for optimization, pp. 67–85.
Business Science Reference, Hershey (2010)
43. Foley, P., Ram, M.: The use of online technology by ethnic minority businesses: a
comparative study of the west midlands and UK. De Montfort University, Leicester (2002)
44. Beckinsale, M., Ram, M., Thedorakopoulos, N.: ICT adoption and ebusiness development:
understanding ICT adoption amongst ethnic minority business. Int. Small Bus. J. 29(3), 193–
219 (2011)
45. Middleton, K.L., Byus, K.: Information and communications technology adoption and use in
small and medium businesses. The influence of Hispanic ethnicity. Manag. Res. Rev. 34(1),
98–110 (2011)
46. Raymond, L., Bergeron, F., Blili, S.: The assimilation of e-business in manufacturing SMEs:
determinants and effects on growth and internationalization. Electr. Mark. 15(2), 106–118
(2005)
47. Wilson, H., Daniel, E., Davies, I.A.: The diffusion of e-commerce in UK SMEs. J. Mark.
Manag. 24(5–6), 489–516 (2008)
48. Scupola, A.: SMEs’ e-commerce adoption: perspectives from Denmark and Australia.
J. Enterp. Inf. Manag. 22(1/2), 152–166 (2009)
49. Chao, C.-A., Chandra, A.: Impact of owner’s knowledge of information technology (IT) on
strategic alignment and IT adoption in US small firms. J. Small Bus. Enterp. Dev. 19(1), 114–
131 (2012)
50. Caldeira, M.M., Ward, J.M.: Using resource-based theory to interpret the successful adoption
and use of information systems and technology in manufacturing small and medium-sized
enterprises. Eur. J. Inf. Syst. 12(2), 127–141 (2003)
51. Elahi, S., Hassanzadeh, A.: A framework for evaluating electronic commerce adoption in
Iranian companies. Int. J. Inf. Manag. 29, 27–36 (2009)
52. Cragg, P., Zinatelli, N.: The evolution of information systems in small firms. Inf. Manag. 29
(1), 1–8 (1995)
53. Levy, M., Powell, P., Worral, L.: Strategic intent and e-business in SMEs: enablers and
inhibitors. Inf. Resour. Manag. J. 18(4), 1–20 (2005)
54. Davis, F.D.: User acceptance of information technology: system characteristics, user
perceptions and behavioral impacts. Int. J. Man Mach. Stud. 38(3), 475–487 (1993)
55. Premkumar, G.: A meta-analysis of research on information technology implementation in
small business. J. Organ. Comput. Electr. Commerce 13(2), 91–121 (2003)
56. To, M.L., Ngai, E.W.T.: The role of managerial attitudes in the adoption of technological
innovations: an application to B2C e-commerce. Int. J. Enterp. Inf. Syst. 3(2), 23–33 (2007)
57. Ghobakhloo, M., Arias-Aranda, D., Benitez-Amado, J.: Adoption of e-commerce applications
in SMEs. Ind. Manag. Data Syst. 111(8), 1238–1269 (2011)
58. Burke, K.: The impact of internet and ICT use among SME agribusiness growers and
producers. J. Small Bus. Entrep. 23(3), 173–194 (2010)
59. Bordonaba-Juste, V., Lucia-Palacios, L., Polo-Redondo, Y.: The influence of organizational
factors on e-business use: analysis of firm size. Market. Intell. Plan. 30(2), 212–229 (2012)
60. Higón, D.A.: The impact of ICT on innovation activities: evidence for UK SMEs. Int. Small
Bus. J. 30(6), 684–699 (2012)
61. Teo, T., Pian, Y.A.: Contingency perspective on internet adoption and competitive advantage.
Eur. J. Inf. Syst. 12(2), 78–92 (2003)
62. Hwang, H.S., Ku, C.Y., Yen, D.C., Cheng, C.C.: Critical factors influencing the adoption of
data warehouse technology: a study of the banking industry in Taiwan. Decis. Support Syst. 37
(1), 1–21 (2004)
63. Goode, S., Stevens, K.: An analysis of the business characteristics of adopters and
non-adopters of world wide web technology. Inf. Technol. Manag. 1(1), 129–154 (2000)
64. Buonanno, G., Faverio, P., Pigni, F., Ravarini, A., Sciuto, D., Tagliavini, M.: Factors affecting
ERP system adoption. A comparative analysis between SMEs and large companies.
J. Enterp. Inf. Manag. 18(4), 384–426 (2005)
65. Levenburg, N.M., Schwarz, T.V., Motwani, J.: Understanding adoption of internet
technologies among SMEs. J. Small Bus. Strateg. 16(1), 51–69 (2005)
66. Bayo-Moriones, A., Lera-López, F.: A firm-level analysis of determinants of ICT adoption in
Spain. Technovation 27(6), 352–366 (2007)
67. Porter, M.E., Millar, V.E.: How information gives you competitive advantage. Harvard Bus.
Rev. 63(4), 149–160 (1985)
68. Drew, S.: Strategic uses of e-commerce by SMEs in the East of England. Eur. Manag. J. 21(1),
79–88 (2003)
69. Özsomer, A., Calantone, R.J., Di Benedetto, A.: What makes firms more innovative? A look at
organizational and environmental factors. J. Bus. Ind. Market. 12(6), 400–416 (1997)
70. Oh, K.Y., Cruickshank, D., Anderson, A.R.: The adoption of e-trade innovations by Korean
small and medium sized firms. Technovation 29(2), 110–121 (2009)
71. Cragg, P., Caldeira, M., Ward, J.: Organizational information systems competences in small
and medium-sized enterprises. Inf. Manag. 48(8), 353–363 (2011)
72. Sarosa, S., Zowghi, D.: Strategy for adopting information technology for SMEs: experience in
adopting email within an Indonesian furniture company. Electr. J. Inf. Syst. Eval. 6(2), 165–
176 (2003)
73. Igbaria, M., Zinatelli, N., Cragg, P., Cavaye, A.: Personal computing acceptance factors in
small firms: a structural equation model. MIS Q. 21(3), 279–305 (1997)
74. Pearson, J.M., Grandon, E.E.: An empirical study of factors that influence e-commerce
adoption/non-adoption in small and medium sized businesses. J. Internet Commerce 4(4), 1–
21 (2006)
75. Shih, H.: Contagion effects of electronic commerce diffusion: perspective from network
analysis of industrial structure. Technol. Forecast. Soc. Chang. 75(1), 78–90 (2008)
76. Oliveira, T., Martins, M.F.: Understanding e-business adoption across industries in European
countries. Ind. Manag. Data Syst. 110(9), 1337–1354 (2010)
Technology Applied to the Cultural
Heritage Sector has not (yet) Exceeded
Our Humanity
1 Introduction
2 Theoretical Background
From an Information Systems perspective, many scholars have dealt with the
impact of Information Technologies on museums, producing an extensive literature
[4–6]. Nevertheless, the extant body of knowledge appears focused on technical
issues, such as the design and usage of the information technologies in museums
and their functionalities. Indeed, a complete analysis should not neglect the social,
organizational, and behavioral aspects that affect the cultural workers as well as the
audience. Both theories and practices show that the success of ICT implementation
relies upon a synergy between the technical factors and other factors that require an
in-depth understanding of the organizational context and human behavior [7, 8].
With a specific focus on Museums, the technology can provide greater efficiency
in the coordination of processes and facilitate the development of new activities that
can generate economic returns [9]. New technologies, therefore, promote innova-
tive managerial practices, organization structures and activities, and most impor-
tantly allow the development of new forms of communication and interaction with
users. Nevertheless, organizational and IS literature is still immature and some
research questions remain unanswered, related to museums and new technology
from a managerial and organizational point of view. One path of research appears particularly under-explored: ICT-enabled change management, and in particular the redefinition of roles and capabilities within the system of cultural organizations. Human actors adopt and use technologies in
multiple ways, and cultural actors may shape the implications of technologies as
they integrate them into everyday practice. Prolific streams of research have
flourished in the organizational and IS literature, dealing with the dual nature of
technology (e.g. [10]). Embracing a structuration approach, several authors have
emphasized that the usage and adoption of technology are linked to the context in
which such technologies are immersed, as well as to the social processes. In par-
ticular, IT is central in the structuration process [11], as IT is seen both as the result
of human actions in a specific social context and as a bundle of rules and resources
embedded in the human actions. Based on the same epistemology, the concept of
sociomateriality [12] has been developed to address the interconnections between
social and technical components and the so-called relational ontology [13, 14]. The
materiality identifies the structural characteristics of technology that do not change over space and time. Users react to materiality when they adopt the technology, which thereby shifts from artifact to social practice.
Although several streams of research have focused on the adoption and usage of
technology in any given cultural organization [15–17], to the best of our knowledge
little has been written on these issues from a sociomateriality perspective.
Moreover, as the development of inter-organizational networks can deploy inno-
vative ways to valorize the cultural heritage and to pursue sustainable managerial
models in the cultural fields [3, 18], we believe that technology adoption and usage
should be investigated from a network perspective. This implies including in the
analysis not only a focal cultural organization, but all the actors that have ties with
56 L. Marchegiani and G. Rossi
it. Following this approach, we focus on a group of actors that operate as bridges
between the cultural artifacts and the audience [19]. The research focus is main-
tained on the guides’ opinions and perceptions. Our research questions are the
following: how does each guide perceive changes in her role in the overall cultural
heritage system, with respect to a given technological innovation? And how does
she sense her contribution to the visitors’ cultural experience?
3 Methodology
68 % of the tour guides who responded to the questionnaire carry out their activities in Rome, while 20 % work in other regions, mostly Tuscany, Sicily and Campania. The guides pointed out that the most requested periods are May–June (36 %) and September–October (29 %), although many found it difficult to pick only two options, as the period actually highlighted by respondents is more extensive, running from March until October. The two targets that most often request a guided tour are young people (30 %) and foreign tourists (28 %).
After identifying the main characteristics of the sample and analyzing the trend of demand for cultural tourism and the technological capabilities of the tour guides, the subsequent blocks of the analysis allow us to answer the first question of our research. Specifically, the third block of questions aimed to investigate the relationship that tour guides have with technology and their expectations regarding its possible use during visits. In particular, the question “What is your relationship with technology?” asked for guides’ opinions on a Likert scale from 1 to 5, in order to classify the respondents’ general familiarity with technology. The average response was around 3: respondents did not have a bad relationship with technology, but neither are they early adopters of technological innovations, probably because of the great age variance of the sample reached by this research. The next questions aimed at ana-
lyzing the specific expectations that different guides have on the use of technology
within a cultural visit. In particular, it was asked whether the new technologies
make the visit more interactive, fascinating, exciting, educational, boring, super-
fluous, unreal or distracting. The radar chart below detects this aspect (Fig. 1). An
analysis of the responses showed that only a minority of respondents consider technology applied to cultural visits boring, superfluous, unrealistic or distracting (these adjectives received low averages); on the other hand, the positive adjectives, such as educational, fascinating and exciting, do not exceed the midpoint of the scale.
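The averaging behind the radar chart can be illustrated with a short computation; the adjective names come from the questionnaire described above, but the response values here are invented for illustration, not the survey data.

```python
from statistics import mean

# Hypothetical Likert responses (1-5), one value per respondent, for two of
# the adjectives in the questionnaire; the real survey data are not reproduced here.
responses = {
    "educational": [3, 4, 2, 3, 3, 4, 2],
    "boring": [1, 2, 1, 2, 1, 1, 3],
}

# Per-adjective averages, i.e. the kind of values plotted on the radar chart (Fig. 1).
averages = {adj: round(mean(vals), 2) for adj, vals in responses.items()}
print(averages)
```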
This is probably because the phenomenon is still poorly understood and little developed in museums, and most guides have some concerns about the use of technology applied to cultural heritage. Thus, although the phenomenon may be attractive or considered interesting, the vast majority of respondents remained cautious in their judgments. The last question of this block
analyzed their opinion about the use of a technological device during a guided tour.
Even in this case, as in the previous one, the average ratings do not extend towards extremely positive values; at the same time, the relatively low values indicate that, in the respondents’ view, the device does not overpower the art. Among the options suggested, the highest average was obtained by the statement that the technological device is a useful support to art.
The next block of questions investigates the knowledge and experience that tour guides have in the field of technology applied to cultural heritage, focusing on augmented reality (Fig. 2). According to the guides in the sample, curiosity is the most cited (32 %) among the motivations that drive a tourist to use technological support during a visit, followed by novelty (21 %) and the availability of an on-site media device (22 %).
Nowadays, the use of technology applied to cultural heritage is not seen as an incentive that could substantially increase the demand for cultural tourism. The following block of questions focuses more specifically on the expectations that tourist guides have about a possible relationship between technology, art and knowledge.
The results (Fig. 3) show that although there is still a lot of skepticism about the use of technology, respondents do not believe that it can actually have negative effects on cultural visits. With respect to the effect of technology on the professional role of the tour guide, different views emerged from the analysis (Fig. 4). The majority of respondents (about 78 %) state that technology has not confined the role of the tour guide to that of mere support (average about 1.95).
At the same time, the results do not indicate strongly positive judgments about the rise of a new tour guide profession thanks to the use of augmented reality. This is probably due to the difficulty of seeing in the technology itself an opportunity to grow professionally, and to the fact that in Italy technology applied to cultural heritage is still little developed, which may affect, positively or negatively, the opinions of the respondents.
After studying the opinions and expectations of tour guides about the use of technology within cultural organizations, this research then analysed their degree of satisfaction with it. To do this, we ran a regression model with the guides’ overall satisfaction as the dependent variable. The independent variables are: the guide’s ability to capture attention, the ease of use of the technological device employed, the cultural preparation of the visitors, the guide’s cultural, professional and personal preparation, the museum’s prestige, the guide’s skill in using the technology, the technological competence of the visitors, and the effective media sponsorship of the visit by the museum.
The regression analysis (Fig. 5) highlighted that the elements actually impacting on satisfaction mainly concern the capacity and skills of the tour guide, and only one aspect of the technology, namely its ease of use. This confirms what emerged from the interviews: almost every guide pointed out that the guide’s cultural and professional preparation is the basic element of a good and efficient visit, since users perceive the use of technology as “viable” only at marginal levels.
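The regression step can be sketched as follows. This is a minimal illustration on synthetic data with hypothetical variable names (ease_of_use, guide_skills, and so on), not the authors’ dataset or their stepwise procedure: it simulates satisfaction driven mainly by the guide’s skills and the technology’s ease of use, the pattern reported above, and recovers the coefficients by ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 120  # hypothetical number of questionnaire respondents

# Synthetic Likert-style predictors (1-5); the names are illustrative
# stand-ins for the independent variables listed in the text.
predictors = {
    "ease_of_use": rng.integers(1, 6, n).astype(float),
    "guide_skills": rng.integers(1, 6, n).astype(float),
    "museum_prestige": rng.integers(1, 6, n).astype(float),
    "visitor_tech_competence": rng.integers(1, 6, n).astype(float),
}
X = np.column_stack([np.ones(n)] + list(predictors.values()))

# Simulated overall satisfaction: mainly a function of guide skills and
# ease of use, mirroring the finding reported in the chapter.
y = (1.0 + 0.5 * predictors["guide_skills"]
         + 0.3 * predictors["ease_of_use"]
         + rng.normal(0.0, 0.5, n))

# Ordinary least squares fit of satisfaction on all predictors.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
for name, coef in zip(["intercept"] + list(predictors), beta):
    print(f"{name:>24}: {coef:+.3f}")
```

On data like these, only the coefficients of guide_skills and ease_of_use come out clearly different from zero, which is the shape of the result the chapter describes.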
Fig. 5 Stepwise regression (excerpt): the constant decreases from 2.0027 (step 1) through 1.4108, 1.1254 and 1.0494 to 0.9000 (step 5); the museum’s prestige enters with coefficient 0.090 (t-value 2.20, p-value 0.029)
5 Conclusions
It is possible to draw several conclusions from our results. Many figures seem to be
in line with what has already been expressed by some guides or by industry experts
in the exploratory phase of our analysis, whilst others, which we might call counter-intuitive, can be used to infer innovative insights and contribute to this field of research. Our sample was heterogeneous, covering a wide age range (21–77 years) and diverse working experience, from one year in the field to more than 45. This makes the analysis more representative of the Italian tourism market. The first difficulty in the use and appreciation of technology applied to art probably lies in the generation gap, with experienced guides reluctant to give up the traditional methods they have used for a long time. At the same time, a large part of the sample showed adequate familiarity with technology, probably because it included a majority of young guides.
References
1. Kalay, Y., Kvan, T., Affleck, J. (eds.): New heritage: New media and cultural heritage.
Routledge (2007)
2. Corradini, E., Campanella, L.: The multimedia technologies and the new realities for
knowledge networking and valorisation of scientific cultural heritage. The role of the Italian
University Museums network. In: Marchegiani, L. (ed.): Proceedings of the International
Conference on Sustainable Cultural Heritage Management. Societies, Institutions, and
Networks, pp. 283–297. ROMA: Aracne (2013)
3. Salvemini, S., Soda, G.: Artwork and Network. Reti Organizzative e Alleanze per lo Sviluppo
dell’industria Culturale, Egea (2001)
4. Keene, S.: Becoming digital. Museum Management and Curatorship, vol. 15, no. 3, pp. 299–
313, Taylor & Francis, Singapore (1996)
5. Ippoliti E., Meschini A.: Media digitali per il godimento dei beni culturali, in Disegnarecon,
vol. 4, no. 8 (2011)
6. Morrissey, K., Worts, D.: A place for the muses? Negotiating the role of technology in
museums. In: Thomas, S., Mintz, A. (eds.) The Virtual and the Real: Media in the Museum
(1998)
7. Markus, M.L., Robey, D.: Information Technology and Organizational Change: Causal
Structure in Theory and Research, Management Science (1988)
8. Marty, P. F.: The changing nature of information work in museums. J. Am. Soc. Inform. Sci.
Technol. 58(1) (2007)
9. Marchegiani, L. (ed.): Proceedings of the International Conference on Sustainable Cultural
Heritage Management. Societies, Institutions, and Networks. ROMA: Aracne (2013)
10. Orlikowski, W.J.: Using technology and constituting structures: a practice lens for studying
technology in organizations. Organ. Sci. 11(4) (2000)
11. Orlikowski, W.J., Robey, D.: Information technology and the structuring of organizations. Inf.
syst. Res. 2(2), 143–169 (1991)
12. Leonardi, P.: Theoretical foundations for the study of sociomateriality. Inf. Organ. 23(2), 59–
76 (2013)
13. Leonardi, P.M., Barley, S.R.: What’s under construction here? social action, materiality and
power in constructivist studies of technology and organizing. Acad. Manag. Ann. (2010)
14. Orlikowski, W.J., Scott, S.V.: Sociomateriality: challenging the separation of technology,
work and organization. Acad. Manag. Ann. (2008)
15. Sepe, M., Di Trapani, G.: Cultural tourism and creative re-generation: two case studies. Int. J. Cult. Tourism Hosp. Res. 4(3), 214–227 (2010)
16. Sher, P.J., Lee, V.C.: Information Technology as a facilitator for enhancing dynamic
capabilities through knowledge management. Inf. Manag. 41(8), 933–945 (2004)
17. Sparacino, F.: The Museum Wearable: real-time sensor-driven understanding of visitors’
interests for personalized visually-augmented museum experiences. In: Proceedings of:
Museums and the Web (2002)
18. Dubini P., De Carlo M.: Integrating Heritage Management and Tourism at Italian Cultural
Destinations, Int. J. Arts Manag. 12(2), (2010)
19. Bagdadli S., Dubini P., Sillano M.T., Landini R., Mazza C., Tortoriello M.: Nuove
professionalità: progettisti per lo sviluppo di sistemi culturali integrati, Rapporto di Ricerca,
CRORA – Università Bocconi (2000)
20. Venkatesh, V., Davis, F.: A theoretical extension of the technology acceptance model: four longitudinal field studies. Manag. Sci. 46(2) (2000)
21. Child, J., McGrath, R.G.: Organizations unfettered: organizational form in an information-intensive economy. Acad. Manag. J. 44(6) (2001)
22. Fahy A.: Musei d’arte e tecnologie dell’informazione e della comunicazione. In: Bodo S. (ed.)
Il museo relazionale. Riflessioni ed esperienze europee, Torino, Fondazione Giovanni Agnelli
(2003)
The Impact of the Implementation
of the Electronic Medical Record
in an Italian University Hospital
Abstract In recent years the use of information and communication technology (ICT) has become a leading driver of managerial reform in the public sector [1] and in particular in the healthcare system [2]. The Electronic Medical Record (EMR) is one of the most studied ICT systems in the healthcare management literature. Using the Zakaria et al. model [3], in this study we investigate the implementation of a core element of the EMR in a university hospital; its deployment is expected to spur internal efficiency and pave the way for extending the same principles to other departments and/or hospitals. The study then analyses the organizational impact of EMRs on the healthcare provider’s structure.
1 Introduction
In recent years the use of information and communication technology has become a leading driver of managerial reform in the public sector [1] and in particular in the healthcare system [2]. In particular, over the last three years, the Electronic Medical Record (EMR) has been one of the most studied ICT systems in the healthcare management literature. However, there is no unique definition of EMR, because it depends on the healthcare system and thus differs considerably from country to country. In
particular, several researchers [4–7] highlight the negative impact of the EMR in the American healthcare system.

A. Zardini (✉)
Business Administration Department, University of Verona, Via Dell’Artigliere 19, 37129 Verona, Italy
e-mail: alessandro.zardini@univr.it

A. Zardini · C. Rossignoli · B. Campedelli
Department of Business Administration, University of Verona, Verona, Italy

Sinsky et al. [8, p. 728] emphasized these concerns when they wrote that: “after a decade of growth in the use of EHRs (Electronic Health Records) that has been both promising and painful, we believe it is time to step back and develop principles for their design, implementation, and regulation that support higher value primary care”. Unfortunately, the authors identified only general principles, of limited use because US hospitals are competitors and do not want to share patient information. Hence, in the USA it is not easy to develop a shared EMR.
In Italy the situation is completely different because there is a public healthcare system: hospitals are public and are not in competition, but there are other issues. Nowadays every region defines its own EMR principles, so (in theory) there are 21 different EMR systems. Moreover, only a few hospitals have implemented or are implementing the Electronic Medical Record.
In order to understand the main principles, in this paper we use the Zakaria et al. [3] model, re-elaborated by Buntin et al. [2], and we identify and categorize the positive impacts and the critical factors generated by the implementation of the Electronic Medical Record in a general medicine department of an Italian university hospital.
Hence, the paper aims to answer the following research questions: What are the positive impacts and the critical factors of introducing an EMR in a general medicine department? What factors influence the implementation process?
In the first part we present the literature review; we then illustrate the research methodology and approach. The paper subsequently analyses the introduction of the EMR in an Italian university hospital and evaluates its impact on the hospital’s organisation. The paper closes with the authors’ conclusions.
Over the past few years, Information Technology (IT) has become a leading driver
of managerial reform in the public sector [1] and in particular in the healthcare
system [2]. Technology is reshaping organizations by blending their Information
Systems with rapidly advancing information and communication technology [9,
10], and it is becoming a catalyst for economic growth [2].
Hence, private-sector companies deploy ICT solutions to optimise organisational
performance, precisely because of their potential to reduce transaction and agency
costs (principal–agent issues), but also to rationalise their business processes [11,
12]. The introduction of ICT to the public sector is expected to produce similar
results [10]. These are highlighted by Smith et al. [13, p. 491], who write that “the
impact of Electronic Medical Records sophistication on financial performance
indicate that EMR sophistication is associated with improved revenue cycle man-
agement, and increased ‘Days Cash on Hand’ (DCOH)”.
On the other hand, some academics [1, 4] identified that for the majority of
practices, the return on investment of the EMR was negative, particularly for
smaller practices. Dey et al. [6, p. 90] reinforce this thesis, saying that:
“Simply incentivising health care service providers to move up the stages of EMR
capability may not lead to the realization of the potential benefits of the higher
stages of EMR capability. The practical implication of this finding is that health
care service providers need to assess whether their choice of a stage of EMR
capability is commensurate with their idiosyncratic technological, organizational,
and environmental contexts characteristics before committing to a stage of EMR
capability”. Hyman [7] emphasizes these concerns in a paper titled: “The Day the
EHR Died”.
Unlike the previous authors, Bardhan and Thouin [14, p. 442] argue that
‘spending on health IT does matter … and it is important to measure quality
outcomes at the process level, and not only at an aggregate institutional level’. The
authors conclude by saying that the adoption of EMR within US hospitals generates
benefits for both patients and clinics.
As underscored by Hannan [15], the medical record should be the main
‘repository’ of the patient’s medical information, as it not only supports clinical
decisions, but is also a useful tool for other healthcare-related services (adminis-
trative, insurance, quality, epidemiology and so forth). As a result of the close
relationship between medical decisional processes, data accumulation, healthcare
costs and the quality of the health service [16], the quality of clinical treatment, the
efficiency of the health service and the health of citizens call for a medical record
that is an effective decisional-support tool [15, 17]. The EMR is such a tool [18]
because it enables immediate access to encoded and standardised patient infor-
mation and ‘more active decision support’ [19, p. 3] through the alerting, inter-
pretation, assisting, critiquing, diagnosing and management functions [15, 18].
All these benefits are summarized by Shaw [16, p. 200], who re-elaborated the Schoen et al. [20] model and defines the EMR core features as: “the electronic
ordering of tests, electronic access to patients’ test results, electronic prescribing of
medication, electronic alerts for drug interaction, and the electronic entry of clinical
notes. Beyond these core capabilities, physicians may extend features by per-
forming searches on their patient population, creating templates to speed their entry
of notes, set reminders for medical tests, and ensure that non-electronic data are
scanned and linked electronically to the patient record”.
Another important point is that the literature offers no unique definition of Electronic Medical Record: the definition depends on the national healthcare system model. A further issue is that EMR and EHR are sometimes considered interchangeable terms [21] comprising all the previous conceptualizations [22]; in fact, “other similar interpretations exist, albeit with a sometimes slightly restricted focus” [23, p. 1]. In this paper, however, we cannot interchange the two terms, because in the Italian healthcare system they are different.
In this way, we can define EMR as ‘computerized medical information systems
that collect, store and display patient information [24]. They are a means to create
legible and organized recordings and to access clinical information about individual
patients’ [21, p. 129]. They provide an effective, active decisional-support system, whether the decisions regard healthcare or management [15, 18, 19, 25]. A hospital
organisation can expect EMRs to generate key benefits, including enhanced quality
3 Case Study
The Alfa university hospital is one of the largest healthcare providers and is
composed of two facilities. The two facilities combined treat an average of 60,000
inpatients per year, 10,000 of whom come from other Italian regions. Daily
admittances total 1,300 for ordinary stays and approximately 400 for day hospitals.
The goal is to automate and computerise the most important organisational processes, the number and complexity of which are far higher than in most other healthcare providers [5].
The EMR is one of the projects currently being developed and implemented by
Alfa. One of the main components of the Electronic Health Record (EHR) is the
EMR, the repository for all the internal information generated by the hospital’s
individual organisational units. Thanks to the Gekos system, hospital physicians can view a wide range of data, such as laboratory test values, X-ray and CT images, old documents, and other patient data. However, they are not able to insert, modify or delete data.
the interim results of the data-collection phase. The authors used Atlas.ti Computer
Assisted Qualitative Data Analysis Software (CAQDAS) to analyse the data
because it enables organisation and summarisation by concept (for example,
improved collaboration, system adequacy and error reduction). Data collection
commenced in November 2013 and continued for approximately four months. The
analysis and integration of the existing data began in April 2014.
As mentioned earlier, in this paper we analyse the impact of the EMR using the model presented by Zakaria et al. [3], re-elaborated and improved by Buntin et al. [2]. In Table 1 we summarize the main factors (nine codes) found during the data analysis and categorize them into the three categories, or challenge types, proposed by the previous authors [2, 3]. Some of these codes are reported in the literature, and they influence the impact of the introduction of a new Electronic Medical Record system.
In particular, the organizational challenge category contains five codes, two of which had a positive impact on the organization (reduction of errors and knowledge sharing), whereas the others had a negative impact on it.
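The categorization described above can be sketched as a simple data structure. The category names come from the Zakaria et al. model, while the code names and valences shown here are only those recoverable from this excerpt (the full nine-code scheme of Table 1 is not reproduced), so the sketch is partly hypothetical.

```python
# Partial, hypothetical reconstruction of the challenge-type coding scheme:
# codes grouped into the three categories (organizational, technological,
# people), each tagged with the valence of its impact.
challenges = {
    "organizational": {
        "reduction of errors": "positive",
        "knowledge sharing": "positive",
        # three further organizational codes had a negative impact;
        # their names are not given in this excerpt
    },
    "technological": {},  # codes not named in this excerpt
    "people": {
        "leadership adequacy": "negative",
        "enhanced collaboration": "positive",
    },
}

def impacts(valence):
    """Return (category, code) pairs whose impact matches the given valence."""
    return [(cat, code)
            for cat, codes in challenges.items()
            for code, v in codes.items()
            if v == valence]

print(impacts("positive"))
```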
An important aspect identified by the analysis is the perception of the respondents (10 out of 11) of a significant reduction in errors compared with the past. The interviewees recounted how the former paper-based procedure was more prone to errors (imprecise requests, imprecise/unreadable medical reports, potential misunderstandings and the illegibility of handwritten notes). Today, the higher level of
As well highlighted by Zakaria et al. [3] and Buntin et al. [2], the use of inappropriate technologies can decrease the quality and the reach of both information and communication, and can cause the failure of projects that introduce an EMR in a hospital [38, 39].
Finally, in the people challenge category, we found the last two codes. Eight of the eleven informants made specific mention of the leadership adequacy aspect, underscoring the lack of a clear and established organisational leadership in the implementation process adopted by this hospital. According to informant no. 4 (physician): “there was no leadership, everything was left to the initiative of a few people. Nobody asked us, what are our needs, and how we can customize the EMR in order to be useful, and so on. Moreover, we do not have a trained project manager, someone who has goals to pursue”.
However, the new system has also generated a benefit: the enhanced collabo-
ration between the various organisational actors involved in the process. The
computerisation and standardisation of the procedures have improved the level of
interaction and collaboration, which translates into an activity of comparison and
discussion that can optimise the organisational and work practices of the various
units. Informant no. 3 (physician) explained that: “I think that thanks to the EMR,
I can better collaborate with my colleagues and I can share more data with them
(other specialists). Moreover, the team works are better, because we can better
define what are our tasks, thereby improving the coordination process. Now we
have to implement an EHR, in order to share data/information with the other
hospitals”.
6 Conclusions
In this paper we analysed the impacts and critical factors of implementing a new Electronic Medical Record in the general medicine department of an Italian university hospital, which represents a particularly complex healthcare structure. In order to highlight positive and negative factors, we used the model of Zakaria et al. [3], re-elaborated by Buntin et al. [2]. Following this model, we subdivided the main codes into three categories (organizational, technological, and people).
The following codes represent the positive impacts that we noted:
• a reduction in the number of flaws and errors (imprecise requests, imprecise/unreadable medical reports, potential misunderstandings and the illegibility of handwritten notes);
• faster access to clearer and more specific information, enabling physicians to diagnose patients more promptly;
• knowledge sharing, which helps physicians, nurses and medical specialists to better analyse patient information and to find the most appropriate treatment.
References
1. Moon, M.J.: The evolution of e-government among municipalities: rhetoric or reality? Public
Adm. Rev. 62(4), 424–433 (2002)
2. Buntin, M.B., Burke, M.F., Hoaglin, M.C., Blumenthal, D.: The benefits of health information
technology: a review of the recent literature shows predominantly positive results. Health Aff.
30(3), 464–471 (2011)
3. Zakaria, N., Affendi, M., Yusof, S., Zakaria, N.: Managing ICT in healthcare organization:
culture, challenges, and issues of technology adoption and implementation. In: Zakaria N.,
Affendi, S., Zakaria N. (eds.) Managing ICT in Healthcare Organization: Culture, Challenges,
and Issues of Technology Adoption and Implementation. pp. 153–168, IGI Global (2010)
4. Adler-Milstein, J., Green, C.E., Bates, D.W.: A survey analysis suggests that electronic health
records will yield revenue gains for some practices and losses for many. Health Aff. 32(3),
562–570 (2013)
5. Moore, K.D., Eyestone, K., Coddington, D.C.: Costs and benefits of EHRs: a broader view.
J. Healthc. Financ. Manage. Assoc. 67(4), 126–128 (2013)
6. Dey, A., Sinha, K.K., Thirumalai, S.: IT capability for health care delivery: is more better?
J. Serv. Res. 16(3), 326–340 (2013)
7. Hyman, P.: The day the EHR died. Ann. Intern. Med. 160(8), 576–577 (2014)
8. Sinsky, C.A., Beasley, J.W., Simmons, G.E., Baron, R.J.: Electronic health records: design,
implementation, and policy for higher-value primary care. Ann. Intern. Med. 160(10), 727–
728 (2014)
9. Frenzel, C., Frenzel, J.: Management of information technology (4th edn), Cengage Learning,
Boston, USA (2004)
10. Bekkers, V.: Reinventing government in the information age: international practice in
IT-enabled public sector reform. Public Manag. Rev. 5(1), 133–139 (2003)
11. Braccini, A.M., Federici, T.: IT value in public administrations: a model proposal for
E-Procurement. In: D’Atri A., Saccà D. (eds.) Information Systems: People, Organizations,
Institutions and Technologies, pp. 121–129. Springer, Berlin (2009)
12. Depaoli, P., Za, S.: Towards the redesign of e-Business maturity models for SMEs. In:
Baskerville, R., De Marco, M., Spagnoletti, P. (eds.) Designing Organizational Systems,
pp. 285–300. Springer, Berlin (2013)
13. Smith, A.L., Bradley, R.V., Bichescu, B.C., Tremblay, M.C.: IT governance characteristics,
electronic medical records sophistication, and financial performance in U.S. hospitals: an
empirical investigation. Decis. Sci. 44(3), 483–516 (2013)
14. Bardhan, I.R., Thouin, M.F.: Health information technology and its impact on the quality and
cost of healthcare delivery. Decis. Support Syst. 55(2), 438–449 (2013)
15. Hannan, T.J.: Electronic medical records. Health informatics: an overview, Churchill
Livingstone, Australia (1996)
16. Shaw, N.: The role of the professional association: a grounded theory study of electronic
medical records usage in Ontario, Canada. Int. J. Inf. Manage. 34(2), 200–209 (2014)
17. Lakshminarayan, K., Rostambeigi, N., Fuller, C.C., Peacock, J.M., Tsai, A.W.: Impact of an
electronic medical record-based clinical decision support tool for Dysphagia screening on care
quality. Stroke 43(12), 3399–3401 (2012)
18. McDonald, C.J.: The barriers to electronic medical record systems and how to overcome them.
J. Am. Med. Inf. Assoc. 4(3), 213–221 (1997)
19. Berner, E.S., Detmer, D.E., Simborg, D.: Will the wave finally break? A brief view of the
adoption of electronic medical records in the United States. J. Am. Med. Inf. Assoc. 12(1), 3–7
(2005)
20. Schoen, C., Osborn, R., Doty, M.M., Squires, D., Peugh, J., Applebaum, S.: A survey of
primary care physicians in eleven countries, 2009: perspectives on care, costs, and
experiences. Health Aff. 28(6), 1171–1183 (2009)
21. Ajami, S., Bagheri-Tadi, T.: Barriers for adopting electronic health records (EHRs) by
physicians. Acta Informatica Med. 21(2), 129–134 (2013)
22. Häyrinen, K., Saranto, K., Nykänen, P.: Definition, structure, content, use and impacts of
electronic health records: a review of the research literature. Int. J. Med. Inf. 77(5), 291–304
(2008)
23. Boonstra, A., Broekhuis, M.: Barriers to the acceptance of electronic medical records by
physicians from systematic review to taxonomy and interventions. BMC Health Serv. Res. 10
(231) (2010)
24. Wang, S.J., Middleton, B., Prosser, L.A., Bardon, C.G., Spurr, C.D., Carchidi, P.J., Kittler, A.
F., Goldszer, R.C., Fairchild, D.G., Sussman, A.J., Kuperman, G.J., Bates, D.W.: A cost–
benefit analysis of electronic medical records in primary care. Am. J. Med. 114(5), 397–403
(2003)
25. D’Urso, P., De Giovanni, L., Spagnoletti, P.: A fuzzy taxonomy for e-Health projects. Int.
J. Mach. Learn. Cybern. 4(6), 487–504 (2013)
26. Hunt, D.L., Haynes, R., Hanna, S.E., Smith, K.: Effects of computer-based clinical decision
support systems on physician performance and patient outcomes: a systematic review. J. Am.
Med. Assoc. 280(15), 1339–1346 (1998)
The Impact of the Implementation of the Electronic … 73
27. Basaglia, S., Caporarello, L., Magni, M., Pennarola, F.: Individual adoption of convergent
mobile technologies in Italy. In: D’Atri, A., De Marco, M., Casalino, N. (eds.)
Interdisciplinary aspects of Information systems studies: the Italian Association for
Information systems, pp. 63–69. Physica-Verlag, Heidelberg (2008)
28. Caporarello, L., Viachka, A.: Individual readiness for change in the context of enterprise
resource planning system implementation. In: Proceedings of the 6th Conference of the Italian
Chapter for the Association for Information Systems, pp. 89–96 (2010)
29. Cavaye, A.L.M.: Case study research: a multi-faceted research approach for IS. Inform. Syst.
J. 6(3), 227–242 (1996)
30. Creswell, J.W.: Qualitative Inquiry & Research Design: Choosing Among Five Approaches.
Sage Publications, Thousand Oaks (2007)
31. Yin, R.K.: Case Study Research: Design and Methods, 3rd edn. Sage Publications, Los
Angeles (2009)
32. Darke, P., Shanks, G., Broadbent, M.: Successfully completing case study research:
combining rigour, relevance and pragmatism. Inf. Syst. J. 8(4), 273–289 (1998)
33. Sorrentino, M.: Interpreting e-government: implementation as the moment of truth. In:
Wimmer, M.A., Scholl, J., Grönlund, A. (eds.) Electronic Government, pp. 281–292. Springer,
Berlin (2007)
34. Benbasat, I.: An analysis of research methodologies. In: Warren, F. (ed.) The Information
Systems Research Challenge, pp. 47–85. Harward Business School Press, Boston (1984)
35. Arksey, P., Knight, T.: Interviewing for Social Scientists. Sage Publications, London (1999)
36. Kucukyazici, B., Keshavjee, K., Bosomworth, J., Copen, J., and Lai, J.: Best practices for
implementing electronic health records and information systems. In: Kushniruk, A.W.,
Borycki, E.M. (eds.) Human and social aspects of health information systems, IGI Global,
Hershey, PA (USA), pp. 120–138 (2008)
37. Heeks, R.: Health information systems: failure, success and improvisation. Int. J. Med. Inf. 75
(2), 125–137 (2006)
38. Castillo, V., Martinez-Garcia, A., Pulido, J.: A knowledge-based taxonomy of critical factors
for adopting electronic health record systems by physicians: a systematic literature review.
BMC Med. Inf. Decis. Making 10(1), 60
39. Pennarola, F., and Caporarello, L.: Enhanced Class Replay: Will this turn into better learning?,
In: Wankel, C., Blessinger, P. (eds.) Increasing Student Engagement and Retention Using
Classroom Technologies: Classroom Response Systems and Mediated Discourse
Technologies, pp. 143–162. Emerald Group Publishing Limited, Bingley (2013)
40. Scott, J.T., Rundall, T.G., Vogt, T.M., Hsu, J.: Kaiser Permanente’s experience of
implementing an electronic medical record: a qualitative study. Brit. Med. J. 331, 1313–
1316 (2005)
Technological Cycle and S-Curve:
A Nonconventional Trend
in the Microprocessor Market
Abstract In the literature there is agreement that battles between rival technologies sooner or later end with the dominance of one over the others or, under certain conditions, with their coexistence. The aim of this paper is to understand whether competition between rival technologies can be reopened after one technology has come to dominate the market. We argue that the prevalence of a technology need not be a static situation, but may rather be a dynamic one. To investigate this, we analyze the microprocessor market, where we find a nonconventional S-curve trend.
1 Introduction
In 1942 Schumpeter coined the term creative destruction to denote a "process of industrial mutation that incessantly revolutionizes the economic structure from within, incessantly destroying the old one, incessantly creating a new one" [1]. A rich literature has grown up following this revolutionary intuition, and some scholars have focused on the determinants that allow one technology to emerge over the others, defining the technological cycle, which consists of three phases: technological discontinuity, the era of ferment, and the establishment of a dominant design [2]. The advent of a technological discontinuity, in a product or a process, can disrupt an existing technological regime, eventually leading to a new one. The period between the discontinuity and the establishment of the new regime is a period of technological ferment, with high uncertainty as both new and existing firms seek to identify which technologies, markets and capabilities will be most valuable in the new regime. This is the period of most rapid improvement in product performance, as technologists discover and advance the capabilities of the new regime, and also the period in which even incumbents are unlikely to achieve economies of scale, owing to rapidly changing designs and technologies [3]. Several versions of a breakthrough technology appear, because the technology is not yet well understood and each pioneering firm has an incentive to differentiate its variant from its rivals'. The era of ferment may persist for up to 20 years before a technology prevails, and several standards may compete for years, even decades, without one technology being locked in as the dominant design [2, 4]. Thus, two or more technologies may coexist under certain conditions: for instance, some stay in their niche, while others go on to penetrate mainstream segments and compete with
incumbent technologies [5]. There are no documented examples of technologies that were initially beaten and subsequently subverted the dominant paradigm. Hence, this paper sets out to explore whether technology adoption always follows the trend we know from the literature, or whether in some markets it modifies its trajectory. Thus, the aim of this paper is to understand whether the battle for dominance between two rival technologies can be reopened with a new era of ferment. In other words, we argue that if a technology has prevailed over the others, this need not be a static situation but may rather be a dynamic one. Answering this question is a great challenge, because if the answer is yes, we will have to rethink whether the technological cycle always follows the same trend. To this end, we analyze the microprocessor market, where our assumption appears to be confirmed. While the factors of dominance have been explored in a large body of literature, little has been said on this question. We therefore think that investigating this point could open new ways to better understand the determinants of innovation and its implications. The paper is structured as follows: the second section presents a literature review on the technology life cycle. The third section is devoted to the study of the microprocessor market. The fourth explores evidence from the smartphone and tablet markets, followed by the discussion section, which identifies the managerial implications and main limitations. Finally, the conclusions indicate possible developments for further research.
2 Literature Review

Firms need to be able to position technologies within their life cycle, and to understand the specific implications of this for managerial decisions [6]. Even if a clear conceptualization of the life cycle of a technology is difficult, Anderson and Tushman's technology evolution model (1990) is a central perspective and represents the foundation of the "macro view" of the technology life cycle. The macro view considers individual technology cycles, each of which begins with a period of technological discontinuity, characterized by advances in a process or a product that immediately lead to a second phase, the era of ferment. This era sees competition among different variations of the original technology, and it is divided into two phases, substitution and design competition [7]: once the superiority of the new technologies has been demonstrated, they rapidly substitute the older ones and the design competition begins. Then, when a technology is widely
adopted and associated with changes in the nature of competition within the
corresponding industry, the design competition ends with the emergence of the
dominant design. It usually involves a synthesis of available technologies, resolu-
tion of competing technological standards, and perceptions of closure by user
groups [8]. This period could be followed by an era of incremental evolution of the
selected technology, characterized by evolutionary, continuous and incremental
changes, until a further technological discontinuity, when a new cycle begins. This
cyclical process of technological change is what Schumpeter (1934) named "creative destruction" [9]. Although there is general agreement that Anderson and Tushman's model concerns innovations in both products and processes, the emphasis shifts between them during the cycle. Indeed, during the era of ferment the focus is on the product technology, with the emergence of a dominant standard, while in the era of incremental change greater emphasis is placed on the development of processes that will improve the product technology [6]. The dominant design need not be the best available; it needs only to gain widespread acceptance. An inferior one can win, and scholars have accordingly appealed to a variety of factors to explain why a particular design, rather than others, emerges as dominant. In reviewing the dominant design literature, five groups of causal mechanisms have been classified [10]: technological predominance among the different functional characteristics of a technology; the economies of scale that can be realized with standardized products; network externalities and their effects (path-dependent processes); firm strategies; and a combination of historical, sociological, political and organizational dynamics. Among these, economies of scale and network externalities are the two conditions that create dynamic increasing returns, and even a design with a small lead will inexorably win a dominant position if higher returns can be achieved with it. In particular, network externalities arise when the utility that a user derives from consumption of a good increases with the number of other agents consuming the good who are in the same "network". The possible sources of network externalities are direct physical effects, indirect effects (e.g. the hardware-software paradigm) and post-purchase services [11].
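To make the mechanism concrete, the utility structure behind network externalities can be sketched as a toy model. This is an illustrative sketch only; the function name, the linear form and all coefficients are our own assumptions, not taken from the cited literature:

```python
def user_utility(installed_base, complements, standalone_value=1.0,
                 direct_coeff=0.01, indirect_coeff=0.05):
    """Toy model of network externalities: the utility a single user
    derives from a good rises with the installed base (direct effect)
    and with the supply of complementary goods, e.g. software titles
    (indirect effect), on top of the good's standalone value."""
    direct = direct_coeff * installed_base     # direct physical effect
    indirect = indirect_coeff * complements    # hardware-software paradigm
    return standalone_value + direct + indirect

# A design leading on installed base and complements offers higher
# utility to every user, attracting further adopters in turn.
leader = user_utility(10_000, 500)
laggard = user_utility(9_000, 400)
```

The comparison (leader > laggard) is exactly the path-dependent, increasing-returns dynamic described above: the lead itself generates the higher returns that entrench it.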
Studying the process by which a technology achieves dominance when battling
against other technological designs, two broad groups of factors influencing the
outcome have been classified [13]: firm-level factors and environmental factors.
There are a number of examples of one technology emerging over another; among these, the most meaningful and most cited are VHS versus Betamax [14] and QWERTY versus other keyboard layouts [15]. In the first case, better format usability, the additional recording time available and the widespread diffusion of movie-rental shops adopting the format tipped preferences towards VHS, despite the better picture quality that characterized the Betamax format. In the second case, the first product available with the new technology dominated most of the market; this is a good example of lock-in and path dependence caused by dynamics that go beyond the behavior of individuals, and it shows that, when a new technology is introduced and spreads so widely and quickly, it is almost impossible to go back to the old one. The market diffusion of a technology is plotted by the S-curve [16], whose common interpretation considers the cumulative adoption of the technology over time, envisioning a number of phases such as embryonic, growth, maturity and
78 G. Ennas et al.
ageing. There are also alternative interpretations but, however it is plotted, the S-curve reaches saturation at maturity, when a new disruptive technology may emerge to replace the old one. This period of technological discontinuity is characterized by competing technologies, each with its own S-curve, which may be connected to or disconnected from each other depending on their relative rates of performance improvement. The resulting situation is a technological progression characterized by multiple S-curves or technology cycles occurring over time [6]. Some scholars have pointed out that the era of ferment may extend indefinitely and not resolve into the dominance of one standard over the others; rather, the rival technologies may coexist under certain conditions [17, 18]. The coexistence of technologies changes the linear and systematic course of the technology life cycle, and it arises when different competing technologies occur simultaneously in the same market without excluding each other. According to the literature, technological complexity, regulatory regimes and factors connected with intermediate and final market demand [18] influence the interaction among competing technologies, preventing the emergence of a clear winner or the exit of losers [17]. When such dynamics exist, the features that sustain coexistence include product niches and consumer communities, gateway technologies, multi-channel end systems, appropriability regimes and persistency. In particular, a niche is defined as
containing one consumer group or “class”: since each class has a distinct preference
set (e.g. a particular point in quality/price space), the number of potential market
niches is determined by the number of consumer classes that are initialized by the
modeler. It has been observed that the survival of the new technology requires the
establishment of a protected space in which further development can be achieved
[19]. This can take the form of distinct niche or sub-niche in the market, which may
be complementary to the established technology, or else take the form of public
sector support, where users are often also contributors to the R&D process. The
protection afforded by its niche has enabled the technology to be further developed
and improved [20]. A practical case is given by different types of flash memory card
[21]. Coexistence is thus highly probable wherever there is similarity between technologies. While the manifestation and duration of coexistence obviously differ depending on the type of technology and on the intervening factors, each of these factors can, individually or simultaneously, affect the duration of the competition between technologies and determine their joint presence within the same market. In such situations, creative destruction does not seem to be the rule: it is possible to assume a kind of "creative persistence" and a coexistence of different technological solutions [18]. Another situation that moves away from the linearity
of the technology cycle is the re-emergence case, which occurs when a technology
fails at one time period, exits the market, but later returns. Following Raffaelli [22],
factors concerned with the re-emergence of a technology are: institutional shaping,
competing alternatives, rate of learning, market characteristics, firm strategic
positioning, key firm networks and firm age and size. Although new or discon-
tinuous technologies tend to displace older ones, technologies can re-emerge,
co-exist with, and even come to dominate, newer technologies. This process seems to be driven by the creation and re-creation of product, organizational, and community identities [22].
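The conventional S-curve described above is commonly captured by a logistic function of cumulative adoption over time. The following sketch is purely illustrative; the parameter values are assumptions, not estimates for any real market:

```python
import math

def logistic_adoption(t, ceiling=1.0, rate=0.8, midpoint=10.0):
    """Cumulative adoption at time t under a conventional S-curve:
    embryonic (slow start), growth (steep rise around the midpoint),
    then maturity/ageing as adoption saturates at the ceiling."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

# Embryonic phase: adoption is still marginal.
early = logistic_adoption(2)
# Growth phase: half the eventual market is reached at the midpoint.
mid = logistic_adoption(10)
# Maturity: the curve saturates, leaving room for a discontinuity.
late = logistic_adoption(20)
```

In the multi-S-curve view discussed above, each competing technology would carry its own curve of this shape, with a technological discontinuity typically arriving as the incumbent's curve approaches its ceiling.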
3 The Microprocessor Market

The microprocessor (CPU) is an essential part of any device running an operating system (personal computer (PC), tablet, smartphone, server and so on). This industry presents several advantages for studying technological cycles, in particular [23]: (i) it supports many designs; (ii) there are high switching costs between rival, incompatible designs, due to hardware/software incompatibilities; (iii) there are strong network externalities; (iv) there has been high growth in both the number of customers and the number of competitors; and (v) the introduction of the International Business Machines (IBM) PC effectively changed the nature of competition in the personal computer market by introducing a clear standard architecture. Looking at the evolution of the market structure, it is striking how competition evolved, since many early competitors were eliminated along the way [29]. We can say that between the operating system (OS) and the CPU there is a reciprocal interdependence, that is, the evolution of one influences the evolution of the other [24]. In fact, since the beginning of PC diffusion, the combination of CPU architecture and OS has played a central role. A practical example can be found in the mid-1970s: the Zilog Z80 processor and the CP/M OS became the dominant CPU and OS combination of the period circa 1976-1983, and despite the great commercial success of the Apple II and its OS, Apple was forced to produce a compatibility card that allowed CP/M to be installed on its computers as well. Simplifying, we can say that there are fundamentally two architectural designs in microprocessors: RISC (reduced instruction set computer) and CISC (complex instruction set computer). The debate between them is long-standing, and it was an important concern in the 1980s and 1990s, when chip area and processor design were the primary constraints [25]. In past decades, the Intel and Advanced Micro Devices Inc. (AMD) x86 (a CISC CPU) has dominated the desktop and server markets, while ARM (a RISC CPU) occupied the low-power embedded computing niche [25]. The two camps follow different strategies: ARM only designs processors and sells licenses to producers (MediaTek, NVIDIA, Qualcomm and so on), while Intel and AMD design and produce their own products. Today, x86 is arguably the only mainstream architecture that retains a CISC design, though newer Intel processors are in some ways hybrid and have been called "CRISC". RISC CPUs were considered superior on many technical points [26].
The emergence of a superior but incompatible technology often exacerbates this dilemma for incumbents, because adopting it can improve the performance of their products, but the incompatibility sharply reduces customer benefits owing to network effects. Intel faced this sort of dilemma in the early 1990s, when the RISC architecture challenged the CISC technology [27]. The main reason why the RISC architecture did not win was the alliance between IBM, Intel and Microsoft. In 1981 IBM launched the Personal Computer, with Intel supplying the microprocessor and Microsoft the OS. As a group, this triad created the microcomputer format that, within a few years, drove both the Apple II and the previously dominant CP/M OS to the periphery of the market. Later, this IBM PC constellation slowly fell apart, but Microsoft and Intel went on to develop the
powerful “Wintel” alliance, which established the dominant industry standard [28].
IBM would not purchase a device unless it was made by at least two companies, so suppliers would contract with other manufacturers to produce their designs. Having other companies manufacture a design, or compatible parts, also increased the market share of that architecture. In 1976 AMD and Intel signed a cross-licensing agreement, and for years AMD made and licensed almost everything Intel made; AMD also licensed various peripheral chips to Intel. By 1985, the Intel microprocessor was embodied in the majority of personal computers shipped (55 % or 175 out of 277 firms shipping personal computers used an Intel microprocessor) [23]. Nevertheless, in 1987 the cross-licensing agreement between AMD and Intel was terminated, a standard was established, and the rival architecture was cut off from the PC and server markets. History and the literature teach us that, when
industries are characterized by network externalities, the installed base technology
and the availability of complementary goods will play major roles in user adoption.
An insufficient installed base or a lack of complementary goods may result in technology lockout [29]. As we have seen above, the reason the CISC processor won was not technical supremacy over RISC but, as in the previous examples (VHS vs. Betamax and the QWERTY keyboard), a series of other factors. In particular, the agreement between Intel, Microsoft and IBM, with its commercial capacity, drove the RISC architecture to the periphery, especially into embedded systems. Again, in ICT industries network externalities are more pervasive than in
others [30]. Network externalities are “the value or effect that users obtain from a
product or service will bring about more values to consumers with the increase of
users, complementary product, or service’’ [11]; in particular, indirect network
externalities exist “when the utility of a product increases with the greater avail-
ability of compatible complementary products” [12]. For instance, the value of a PC
is influenced by the level and the variety of the supply of applications that is
possible to use with it. From this we can easily understand why, once a combination of OS and CPU architecture is established, it generates high switching costs and hence lock-in, because semiconductor manufacturers tend to produce unique and incompatible designs. Both PC software and drivers for
peripherals must be designed around the microprocessor, and switching to another
one can be extremely costly; it might involve extensive redesign of the product, or a
total washout of costs incurred in the development of customized software [31].
Switching costs also go well beyond the product changes to include the costs
associated with coordinating a product component change within the organization
as well as between suppliers and customers. A firm attempting to modify a design
will face costs due to modifying documentation, increased communication between
marketing, engineering and production, obsolete inventory, and the lost time of key personnel who need to deal with the unknowns associated with quality and
performance variations in their product [23]. In addition, the manufacturer must
undertake search costs (both money and time, involving in some cases both sup-
pliers and buyers), set up new external relationships, and face uncertainties in input
quality [32].
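The lock-in reasoning above can be condensed into a stylized switching condition. This is a deliberately simplified illustration under assumed parameters; the function name and the linear network-benefit term are our own, not a model from the cited works:

```python
def switching_is_worthwhile(performance_gain, new_installed_base,
                            old_installed_base, network_coeff=0.05,
                            switching_cost=50.0):
    """A buyer abandons the incumbent design only if the performance
    gain plus the net change in network benefit (proportional to the
    difference in installed bases) exceeds the switching cost:
    redesign, customized software write-offs, search costs and new
    supplier relationships."""
    net_network_benefit = network_coeff * (new_installed_base - old_installed_base)
    return performance_gain + net_network_benefit > switching_cost

# A technically superior entrant can still be locked out when the
# incumbent's installed base dominates the comparison.
locked_out = not switching_is_worthwhile(30.0, 100, 10_000)
```

With the assumed numbers, a 30-point performance gain is swamped by the forgone network benefit of a 10,000-unit incumbent base, reproducing the lockout outcome described for RISC in the early 1990s.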
4 Evidence from the Smartphone and Tablet Markets

As seen above, the CISC and RISC architectures have coexisted for decades, the first in the PC and server markets, the second in embedded systems. In this section we investigate whether the advent and rise of new products can change technology adoption in the CPU market. In recent years, the mobile
phone has evolved from a device for making calls to one that has become the
central point of access to our digital lives. It offers more advanced computing
abilities and connectivity, and allows users to install and run various applications based on a specific platform, as it runs a complete OS, providing a platform for application developers. These advanced mobile devices possess powerful processors, abundant memory, large multi-touch screens and a virtual keyboard, with e-mail, web browsing and Wi-Fi connectivity.1 The tablet, or tablet PC, is a portable computer that uses a touch screen as its primary input device. It is slightly smaller and weighs less than the average laptop, and it combines the benefits of a PC with the convenience of a mobile device. It had its rise with the launch of Apple's iPad in 2010, and the sudden rush of devices now flooding the market is proof of their increasing popularity. According to Gartner, in 2013 about 195 million tablets were sold. There is a symmetry between the PC and tablet/smartphone industries: there are low buyer switching costs between models that embody a similar product design (e.g. different brands with the same OS), but high
buyer switching costs between rival product designs (e.g. a different OS or CPU architecture, or both). The presence of network externalities is therefore clear: the benefit of owning a device depends also on its diffusion and installed user base, and on the amount of complementary goods, in particular the software available. The point is that, as regards the dominant design, a clear one is emerging: ARM-based CPUs have achieved more than 95 % penetration of mobile handsets [33]. Given these premises, to try to answer our research question we analyzed the ARM annual reports and accounts (2012-2013) [33] and the Intel [34] and AMD [35] Form 10-K filings (2012-2013). The US Form 10-K requires business information to be reported in Item 1, in particular to "include recent events, competition, regulations, and labor issues".
We checked, in particular:
(1) Whether the incumbents in the desktop and server markets, Intel and AMD, recognize ARM as a challenge to their core business.
(2) Whether the new entrant, ARM, recognizes the opportunity to enter other markets.
Findings:
(1) Intel states that "new competitors are joining traditional competitors in their core PC and server business areas, where they are the leading provider, while they face incumbent competitors in adjacent market segments they are pursuing, such as smartphones and tablets". Intel's competitors include AMD, IBM, Oracle Corporation, as well as ARM architecture licensees of ARM Limited, such as
1 Source: PC Magazine.
5 Discussion
leader's strategies, then the incumbent's and finally the OS maker's. As seen
above, Intel is the leader in the desktop and server industries, and to keep its
supremacy it has decided to exploit its technology; in fact, it “is innovating around
energy-efficient performance”, and it is “accelerating the process technology
development for its Intel Atom processor product line to deliver increased battery
life, performance, and feature integration". Intel recognizes that it is a relatively new entrant in the tablet market segment, and it is trying to offer optimized architecture solutions for multiple operating systems and application ecosystems. It also recognizes that the boundaries between the various segments are changing as the industry evolves and new segments emerge. Conversely, AMD has always had a smaller market share in the desktop and server markets, and it has therefore decided to adopt an ambidextrous strategy, with which it is trying both to explore new spaces and to exploit its existing capabilities [37]. In fact, AMD is differentiating its strategy by licensing ARM designs in addition to its x86 processors. Software makers also have to be able to manage this innovation: Microsoft, as the leader in the desktop and notebook OS markets, has recognized the threat of the new devices. In particular, it declares (Form 10-K 2013) [38] that its system faces competition from various commercial software products and from alternative platforms and devices, mainly from Apple and Google. Consequently, it has adapted its strategy, releasing Windows 8, the first version of the Windows operating system that supports both the x86 (CISC) and ARM (RISC) chip architectures. Conversely, software developed for the Android OS may run on any architecture because, simplifying, like Java it uses a virtual machine to run software [39].
Considering these premises, we think, with reasonable evidence, that the S-curve follows a different trend in this market: almost three decades after the alliance between Intel and Microsoft drove RISC processors out of the PC and server markets and marked the emergence of the dominant paradigm, the challenge is reopened. The first phase was the affirmation of the CISC technology, followed by a long period of incremental improvement; meanwhile, the RISC technology saw lower adoption, up to the advent of the smartphone and tablet, which caused a rapid rise of the RISC architecture. We can therefore assume that the S-curve might follow the trend proposed in figure B, which differs from the common interpretation of figure A, whereby, once a technology prevails, it keeps its supremacy until a new disruptive technology enters and conquers the market. Indeed, in the CPU industry two technologies have coexisted, CISC dominating the market and RISC relegated to the embedded segment; but with the advent of the new devices (tablets and smartphones) the adoption of RISC systems is experiencing rapid growth, with a sudden change in the curve's concavity. According to the analysis presented above, the two architectures are currently facing a "new era of ferment", and basically three future scenarios can be envisaged:
(1) The CISC technology maintains its supremacy and follows the trend
described by the yellow curve, while the RISC one follows a lower trend, described
by the green curve.
(2) The RISC technology imposes its own standard in the market segments currently dominated by CISC and follows the trend described by the blue curve, while the CISC technology proceeds along the lower trend described by the red curve.
(3) Both technologies coexist in different market segments, without excluding each other.
Regardless of how things actually go, it is clear that this S-curve trend is very different from the one we know.
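The nonconventional trend argued for here, a long plateau followed by renewed growth, can be sketched as a bi-logistic (two-wave) adoption curve. The parameters below are illustrative assumptions, not values fitted to shipment data:

```python
import math

def logistic(t, ceiling, rate, midpoint):
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

def risc_style_adoption(t):
    """Hypothetical bi-logistic adoption curve: a first, low-ceiling
    wave (the embedded niche) saturates early, then a second wave
    (opened by smartphones and tablets) restarts growth, producing
    the change in concavity discussed in the text."""
    niche_wave = logistic(t, ceiling=0.2, rate=0.9, midpoint=5.0)
    mobile_wave = logistic(t, ceiling=0.8, rate=0.9, midpoint=25.0)
    return niche_wave + mobile_wave

# Between the two waves adoption plateaus near the niche ceiling,
# then accelerates again: a reopened era of ferment in S-curve form.
plateau = risc_style_adoption(15)   # close to the 0.2 niche ceiling
renewed = risc_style_adoption(35)   # approaching full adoption
```

A conventional single-wave curve (figure A) would omit the second term; scenarios (1)-(3) above correspond to different relative ceilings of the two technologies' second waves.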
6 Implications
The findings of this study have several implications for managerial practice and
technology, organization and strategy. Although the analysis of these implications
is crucial from a strategic point of view, it goes beyond the aim of this paper, hence
we shortly indicate them. First of all, we have to consider that the processor market
generates a turnover of around 300 billion dollars, and this type of trend is moving
earnings from a technology to another. Secondly, devices equipped with a CPU are
complex systems, therefore implications will affect the software and, in particular,
Technological Cycle and S-Curve … 85
operating systems; hence, the implications stated above also hold for software.
Thirdly, firms making technology investment decisions need to fully understand
the dynamics of competing technologies, because the emergence of
an alternative and potentially superior technology does not necessarily mean the
failure of the incumbent, since different scenarios may unfold. Fourthly, firms
also need to look inward to identify the competencies required to ensure they have the
absorptive capacity to adopt new technologies and respond quickly to technological
changes. Fifthly, strategic alliances between hardware and software makers may
again prove decisive, as has happened before (e.g., the Wintel alliance). Finally, indirect network exter-
nalities may play a crucial role, because the amount of complementary products and
services available can strongly contribute to the affirmation of one technology over
another.
7 Conclusions
In this paper we have analyzed the technological cycle, with the goal of under-
standing whether the battle for dominance between two rival technologies can be reopened
by a new era of ferment. We have explored the CPU market, finding that the era of
ferment may restart between different technologies even after a long period of
time, and that technologies competing in distinct segments may race each other. These
results suggest that the S-curve may follow a different trend, and they propose a non-
conventional view of the technology adoption process. This paper presents several
limitations: in particular, it deals with events that are still unfolding, and the amount of
available data may not be enough to delineate a clear scenario. However, we believe that,
beyond these preliminary considerations, this research has thrown up many
questions regarding technology diffusion that are in need of further investigation.
Although we have evidence from the microprocessor market, the insights of this
study should be confirmed in other contexts in order to extend, generalize, and eventually
improve the technological cycle literature. If it is true that the best technology does not
always win, we have shown that dominance may be a dynamic position and the era of ferment
may be re-opened.
References
1. Schumpeter, J.A.: Socialism, capitalism and democracy. Harper and Brothers (1942)
2. Tushman, M.L., Rosenkopf, L.: Organizational determinants of technological change: towards
a sociology of technological evolution. Res. Organ. Behav. 14, 311–347 (1992)
3. Utterback, J.M.: Mastering the dynamics of innovation: how companies can seize opportunities
in the face of technological change. Harvard Business School Press, Boston (1994)
4. Schilling, M.: Technology success and failure in winner-take-all markets: the impact of
learning orientation, timing, and network externalities. Acad. Manag. J. 45(2), 387–398 (2002)
5. Adner, R., Zemsky, P.: Disruptive technologies and the emergence of competition. Soc. Sci.
Res. Netw. (2003)
6. Taylor, M., Taylor, A.: The technology life cycle: conceptualization and managerial
implications. Int. J. Prod. Econ. 140(1), 541–553 (2012)
7. Anderson, P., Tushman, M.L.: Technological discontinuities and dominant designs: a cyclical
model of technological change. Adm. Sci. Q. 35(4), 604–633 (1990)
8. Pinch, T.J., Bijker, W.: The social construction of facts and artifacts. Technol. Soc. 107 (1987)
9. Schumpeter, J.A.: The theory of economic development: an inquiry into profits, capital, credit,
interest, and the business cycle, vol. 55. Transaction Publishers (1934)
10. Murmann, J.P., Frenken, K.: Toward a systematic framework for research on dominant
designs, technological innovations, and industrial change. Res. Policy 35(7), 925–952 (2006)
11. Katz, M.L., Shapiro, C.: Network externalities, competition and compatibility. Am. Econ.
Rev. 75, 424–440 (1985)
12. Basu, A., Mazumdar, T., Raj, S.P.: Indirect network externality effects on product attribute.
Market. Sci. 22–2, 209–221 (2003)
13. Suarez, F.F.: Battles for technological dominance: an integrative framework. Res. Policy 33,
271–286 (2004)
14. Besen S.M., Farrell J.: Choosing how to compete: Strategies and tactics in standardization.
J. Econ. Perspect. 8(2), 117–131 (1994)
15. David P.A.: Clio and the economics of QWERTY. Am. Econ. Rev. 75, 332–337 (1985)
16. Foster, R.N.: Innovation: the attacker’s advantage, vol. 152. Summit Books, New York (1986)
17. Nair, A., Ahlstrom, D.: Delayed creative destruction and the coexistence of technologies.
J. Eng. Tech. Manage. 20(4), 345–365 (2003)
18. Galvagno, M., Faraci, R.: La coesistenza fra tecnologie: definizione ed elementi costitutivi.
Sinergie rivista di studi e ricerche, pp. 64–65 (2011)
19. Rosenberg, N.: Inside the black box: technology and economics. Cambridge University Press,
Cambridge (1983)
20. Windrum, P., Birchenhall, C.: Structural change in the presence of network externalities: a
co-evolutionary model of technological successions. J. Evol. Econ. 15(2), 123–148 (2005)
21. De Vries H.J., de Ruijter, J.P.M., Argam, N.: Dominant design or multiple designs: the flash
memory card case. Technol. Anal. Strateg. Manag. 23(3), 249–262 (2011)
22. Raffaelli, R.: Mechanisms of technology re-emergence and identity change in a mature field:
Swiss watchmaking. In: Academy of Management Proceedings, vol. 2013, No. 1, p. 13784
(2013)
23. Tegarden, L., Hatfield, D., Echols, A.: Doomed from the start: What is the value of selecting a
future dominant design? Strateg. Manag. J. 20, 495–518 (1999)
24. Thompson, J.D.: Organizations in action. McGraw-Hill (1967)
25. Blem, E., Menon, J., Sankaralingam, K.: Power struggles: revisiting the RISC vs. CISC debate
on contemporary ARM and x86 architectures. Appears in the 19th IEEE International
Symposium on High Performance Computer Architecture HPCA (2013)
26. Krad, H., Al-Taie, A.Y.: A new trend for CISC and RISC architectures. Asian J. Inform.
Technol. 6(11), 1125–1131 (2007)
27. Lee, J., Lee, J., Lee, H.: Exploration and exploitation in the presence of network externalities.
Manag. Sci. 49(4), 553–570 (2003)
28. Gomes-Casseres, B.: Competitive advantage in alliance constellations. Strateg. Organ. 1(3),
327–335 (2003)
29. Semmler, A.: Competition in the microprocessor market: Intel, AMD and beyond. University
of Trier, pp. 1–7 (2010)
30. Lin, C.-P., Tsai, Y. H., Wang, Y-J., Chiu, C.-K.: Modeling IT relationship quality and its
determinants: a potential perspective of network externalities in e-service, p. 2. Elsevier (2010)
31. Choi, J.P.: Irreversible choice of uncertain technologies with network externalities.
Department of Economics, Columbia University (1992)
32. Garud, R., Kumaraswamy, A.: Changing competitive dynamics in network industries: an
exploration of Sun Microsystems’ open system strategy. Strateg. Manag. J.
33. http://ir.arm.com/phoenix.zhtml?c=197211&p=irol-reportsannual. Accessed June 2014
34. http://www.intc.com/annuals.cfm. Accessed June 2014
35. http://ir.amd.com/phoenix.zhtml?c=74093&p=irol-reportsannual. Accessed June 2014
36. Christensen, C.M.: The innovator’s dilemma: when new technologies cause great firms to fail.
Harvard Business Press (1997)
37. O’Reilly III, C.A., Tushman, M.L.: Ambidexterity as a dynamic capability: resolving the
innovator’s dilemma. Res. Organ. Behav. 28, 185–206 (2008)
38. http://www.sec.gov/Archives/edgar/data/789019/000119312513310206/d527745d10k.htm
39. Ehringer, D.: The dalvik virtual machine architecture. Tech. Rep. (2010)
The IS Heritage and the Legacy of Ciborra
Abstract Ten years is a good distance at which to assess Claudio Ciborra’s legacy
to Information Systems Studies and Organizational Studies. The paper compares
the scholar’s seminal work, The Labyrinths of Information, with the thematic
papers published in 30 special issues/sections of four top IS journals. The results
show clearly that Ciborra’s concepts have now gained much wider currency,
especially in the study of phenomena such as local meaningful practices (e.g.
bricolage, improvisation, cultivation). They contribute to the swing toward a more
praxis-oriented attitude in the IS discipline.
1 Introduction
2015 marks the 10th anniversary of Claudio Ciborra’s death. The scholar’s work is
well known to the authors, one of whom had the honour of collaborating with him
personally, and the paper pays tribute to his memory by revisiting the conceptual
pillars on which he built his research. The distinction between entitative conceptions
and process conceptions helps to clarify his inquiries [1]. Highlighting the enti-
tative aspects of a phenomenon means focusing on general principles such as
abstractions and representations to be applied across different situations. By
contrast, the process aspects of a phenomenon concern the emergent, contingent, and
locally specific reality. Throughout his research activity, Ciborra adopted a strong
process-oriented worldview. He underscored the shortcomings of entitative con-
ceptions when investigating organizations and information systems as
socio-technical phenomena, which are continuously evolving, subject as they are to
minor or major changes. The aim is to discover to what extent IS scholars have
incorporated the key tenets of Ciborra’s original thinking into their research agendas
since the publication in 2002 of his seminal work The Labyrinths of Information:
Challenging the Wisdom of Systems, herein shortened to Labyrinths [2].
A review of the contemporary and later literature shows that the process-oriented
view taken by Claudio Ciborra placed him firmly in the minority 10 years ago. That
there was indeed a dominant paradigm was concluded by Orlikowski and Baroudi in
1991 after analyzing 155 articles that appeared in four of North America’s leading
journals from 1983 to 1988 and finding that 96.8 % were underpinned by positivist
epistemology [3] which usually entails an entitative ontology. The positivist infor-
mation systems research approach was defined by the authors as follows:
Ontologically, positivist information systems researchers assume an objective physical and
social world that exists independent of humans, and whose nature can be relatively un-
problematically apprehended, characterized, and measured. (ib., p. 9)
That that approach still predominated ten years later was confirmed by the
survey conducted by Chen and Hirschheim [4], who found that 81 % (86 % in the
US journals) of the 1893 articles published in eight European and North American
journals from 1991 to 2001 had a positivist leaning. Nevertheless, a tremor of change
was observed over the next decade and, according to Paucar-Caceres and Wright
[5], based on the results of a literature review of six journals issued from 1999 to
2009: “Information Systems Research is moving away from the
normative/positivistic paradigm associated with ‘hard-oriented’ methodologies. We
identified a total of 145 articles drawing on interpretative, critical and constructivist
methodological approaches” (ib., p. 598). This indicates that Ciborra’s writings
arrived at precisely the moment when a larger number of IS scholars started to
follow a process-oriented worldview.
The aim of the paper is to help write the history of IS theoretical literature and thus
contribute to the IS discipline’s quest “to articulate and claim a heritage” [6, p. 2].
To respond to the research question alluded to earlier, i.e., to what extent IS
scholars have incorporated the key tenets of Ciborra’s original thinking into their
research agendas since the publication in 2002 of his seminal work Labyrinths, the
authors chose a two-step approach to the hermeneutic circle: first, they read and
analyzed Labyrinths to identify the keywords and main concepts; second, they
examined 30 special issues/sections published by the following four journals from
2004 to date: European Journal of Information Systems (EJIS, 9 issues), Journal of
Ciborra’s work rests on the pillars of ontology and epistemology so any presen-
tation and assessment of his legacy to Organization Studies (OS) and Information
Systems Studies (ISS) would not be complete without due recognition of these two
aspects. This section therefore will frame his research according to the object of
research (ontology) and the means used (epistemology), and, specifically, their
combined use.
Ontology and epistemology are the essence of research activity. Ontology is the
study of being, of what exists and is thinkable [7]. Epistemology refers to the modes
through which knowledge, related to a specific entity, is acquired. Theoretical
perspectives, methodologies, and methods deal with epistemology or rules followed
in order to gain knowledge that has been validated scientifically.
Individuals and organizations Until the mid-1990s, the ontological perspective
favoured by Ciborra revolved around the fact that both OS and ISS consider
humans as being equipped with bounded rationality [8]. However, the transaction
cost theory [9] argued that humans are not only limited from a rational viewpoint
but can behave opportunistically by adopting what is called strategic rationality. But
an organization cannot create the conditions that promote rational behaviours and
prevent opportunistic behaviours unless it assigns equal importance to both
learning and adaptive rationality [10]. Therefore, organizations and information
systems should be seen as tools for enhancing learning and spreading knowledge.
By the late 1990s, Ciborra had significantly changed his approach: individuals
are entities who navigate, discover, and encounter the world relentlessly according
to a mood-affected caring. Besides, understanding is not the result of a cognitive
evaluation of pros and cons in a specific situation but a human attitude in which the
opening to possibilities and continuous caring about events, resources, behaviours,
and problems prevail. Ciborra thus started to draw on phenomenology, mainly the
thinking of Husserl and in particular Heidegger, focusing on two main aspects: the
‘platform organization’ and the ‘information infrastructure’.
92 P. Depaoli et al.
Section 2 has illustrated how Ciborra went on to develop both sides of the coin, i.e.,
what he thought to be real—and therefore relevant (albeit neglected by the main-
stream literature) for both information systems and organization scholars—and the
methodology that could better uncover that relevant reality. Table 1, below, sum-
marizes what we have identified as the main concepts and the keywords of
Labyrinths. Applying the hermeneutic circle, we read and discussed the book
several times in order to deepen our knowledge of Ciborra’s work and its insights
and then searched for consensus on the main tenets of his contributions. The left
column of Table 1 lists the titles of the chapters of Labyrinths while the right
column lists the keywords and the concepts that, in the authors’ opinion, convey the
chapter’s main message. Table 1 is followed by a brief commentary on each chapter
of Labyrinths, including identification of the key words, to lay the ground for the
communication across different languages and cultural modes” (ib., p. 115). The
appropriate care of guest-technology bears rewards in terms of innovation and
learning.
Shih This Chinese war strategy concept refers to the exploitation of the
configuration of the resources at hand. Organizing (the resources at hand) is to build
identity across discontinuities (ib., p. 128) so that strategy, action, and structure
coalesce to cope with surprises (ib., p. 122).
Kairos (and Affectio) Ciborra dedicates an entire chapter to the importance of
improvisation in dealing with unforeseeable events. Improvisation is based on the
ability to intuitively surpass rationality by drawing on the deepest wells of personal
resources: moods and emotions. This leads to a decisive moment of vision in which
the most appropriate solution emerges at the most appropriate moment (kairos).
Methodological appendix (Odos) The appendix better illustrates what was
introduced in ‘Invitation’. Ciborra describes the two types of evidence he
encounters when approaching an “organizational phenomenon”: (i) “the set of ideas
and models taken for granted in the domain of organization theories or consulting
models… [which] following Heidegger we can refer to … as illusory appearances”
(ib., p. 176), and (ii) ‘apparitions’ which belong to a space that cannot be filled by
any model and that surface in informal talks that “host the unexpected aspects of
organizational life.” (ib., p. 177). According to Ciborra, investigation often stops at
the empty models instead of working on the apparitions that tell about the
“underlying phenomenon to be unveiled” (ib., p. 178).
The results of the comparative analysis of Ciborra’s work and the articles of the
special issues/sections enabled the authors to identify the following matches.
Emerging challenges The concept of odos (way, road), which Ciborra used to
name the methodological appendix of Labyrinths, was the focus chosen by Sawyer
and Winter for their op-ed to the 2011 JIT “special issue on futures for research on
information systems” [14]. As seen earlier, Ciborra adopted a non-mainstream
method (phenomenology) to draw attention to emerging (and often overlooked)
phenomena. Sawyer and Winter stressed the need to explore different approaches to
shed light on a number of issues that still seem to drive research 10 years after
Ciborra’s publications. Participation in more than one intellectual community is
necessary because of the evolution of current, ubiquitous ICTs, whose effects are
magnified by present economic, social, and political trends. There are grand chal-
lenges to be met, such as “transforming a health-care system from one designed to
treat acute disease to one that improves the lives of those with chronic illnesses” (ib.
p. 97). This is the kind of large project affected by ‘drift’ in which general plans
involving large numbers of actors need to be complemented by the appropriate
“local” techniques suggested by Ciborra. The final question asked by the editors
refers to Ciborra’s invitation to abandon the restricted spaces of abstract models to
come in closer contact with the ‘lifeworld’: What are the consequences for orga-
nizing information systems that increasingly stimulate people’s curiosity and
creativity? (ib.) Researchers presently investigating the potentialities of ‘virtual’
(synthetic) worlds in organizational terms might be able to provide some answers to
that question [15].
History and Gestell The question of human-non human identity is one of the
topics discussed in a recent special section of the EJIS (January 2014). In fact, one
contribution specifically addresses cyborgian identity, i.e., the role of physical and
virtual bodies in social media [16], in which the way technology is conceived is
decisive: in virtual worlds attention should be turned “to the fluid and contingent
intermingling of humans and technologies” [17, p. 813]. This ‘intermingling’ holds
beyond virtual worlds and is shared by a wide range of social science studies. As
Schultze and Orlikowski underscore, practices are constitutive of social life in fluid
and emergent phenomena (‘performativity’). This view differs significantly from
the traditional one of a reality composed of fixed and independent entities, but
chimes with Ciborra’s view of an apparent reality made up of abstract
models and poorly explicative generalizations of the continuing ‘ordering’ of
resources (as described in Sect. 2, above). Indeed, the debate on issues close to
Ciborra’s sensibility as a researcher is ongoing.
Moreover, the fact that Ciborra draws on Heidegger and his Gestell concept
shows both his willingness to draw directly on the foundations of western thought
and to give historical depth to his analyses (ICTs do not come out of the blue: they
are born out of the development of ‘calculative thinking’). Ciborra even used a
phenomenological perspective when addressing key aspects in the expansion of
ICTs through the description of the Olivetti case (Labyrinths, Chap. 7): disconti-
nuities and surprises in strategy building and implementation can be fundamental
success factors. This was the method Ciborra used to incorporate IS history: to
propose an emblematic case to highlight relevant (and often overlooked) factors for
present action. Of course, an IS history can be built using other approaches and
methods, as shown in the papers of the two 2013 JIT special issues, in which the
editors point out that there are different ways of “doing IS history” [6]; methods
comprise case studies, interviews, and literature search. This kind of study includes
controversies and disputes and sheds light on two aspects: (i) there is no linear,
mechanistic development of IS; and (ii) there is no conclusively settled IS history
and heritage. Interestingly, the editors draw the reader’s attention to Michel
Foucault’s work and his findings of discontinuities in history. Through them we
learn how to deal with alterity, with the unexpected and the minute deviations
which Ciborra often underscored as key elements in large IS projects, as pointed out
in the previous section. The differences between Heidegger and Foucault should not
make our likening of Ciborra to Foucault seem surreptitious: the two philosophers
are linked by strong convergences. In fact, at the end of his essay Being and Power
Revisited, Hubert Dreyfus refers to the last works of both Heidegger and Foucault,
saying:
… when one is looking for marginal practices that could support resistance to a dominant
epoch of the understanding of being or a dominant regime of power…, one should think of
the marginal as what resists any unified style of being and power. One will seek to preserve
not new forms of being or power, but local things and individual selves [18, p. 49].
Implementation and drift In 2005 a special issue of JSIS (n. 2) looked at how
enterprise systems are affected by (and affect) individuals, groups and organiza-
tions. The results of one contribution’s case study [19] show that the interactions of
power structures (i.e. political and structural forces), the technology affordances,
and the intentions of management produce cycles of ‘control and drift’ during the
implementation of an ERP program. In other words, the intentions of designers and
managers produce both original development plans and successive revisions and
rescheduling (even the abandonment of certain plans) according to the emerging
limitations (or accommodations) of both technology (e.g., legacy systems) and
cultural or political settings (e.g., the evolution of power balances between senior
corporate managers versus national managers). The case study’s key findings
diverge substantially from the tenets of the studies based on critical success factors:
specific influences were not fixed but varied during implementation and forced
changes along the way. The authors conclude that technology is thus neither a
‘black box’ nor a mere supplement to the social structure-agency relationship, but
an agglomeration of affordances open to social interaction. The consequences for
practice are to give room to intuitive action and to improvise when situations are
new and destructured and to allow rational planning in well-established organiza-
tional processes. Indeed, Ciborra, cited by the authors, insisted on the concept of
drift and on the hiatus between the theory and practice of systems development and
use, and, of course, on the need to adopt tactics and learning-by-doing more than
formalized plans.
Strategizing and designing-in-action Ciborra uses the drifting phenomenon in
Krisis, the second chapter of Labyrinths, to support his critical stance on the issue
of strategic alignment between business organizations and ICT. Once again, the
scholar pointed out that organizations are complex relational and continuously
evolving systems interconnected with a ‘drifting’ information infrastructure. It is
therefore unlikely, as many business cases have shown, that management models
and methods (used for strategic alignment) have the capacity to deal with the real
world, built as they are on the natural sciences paradigm. These models take the
concepts of strategy and technology for granted instead of seeing them as prob-
lematic and adopting more realistic and practical approaches.
Ten years after the publication of Labyrinths, research moved away from
alignment in search of where strategic IT leadership is located in a modern cor-
poration. Let us see why. In June 2012 the Journal of Strategic Information Systems
(JSIS) celebrated its 20th anniversary with a reflection on the IS discipline and,
specifically, the link between IS and business strategies [20]. One of the contri-
butions [21], inspired by the study of the development of the Boeing 787 aircraft,
provided the opportunity to radically reconsider the role of strategic information
systems (SIS): “…during the early decades of the 21st century [IT investment has
shifted] toward an IT-enabled global network organization structure” (ib., p. 91).
The fact that now IT is ‘everywhere’ and that IT leadership is ‘nowhere’ gives the
scenario a brand new complexion. The concept of business architecture comes to
the fore because ubiquitous IT enables and facilitates the establishment of strate-
gies, operations, and networks that cross traditional firm boundaries. Specifically:
the role of IT in corporations has shifted from supporting and being aligned with business
strategies to being an integral part of business strategies. As shown by the Boeing case,
strategic IT can’t be simply functionalized and positioned into traditional twentieth century
organization structures; IT now enables the emerging global network structures allowing
breakthrough products for breakthrough economics (ib., p. 101).
The swing toward a more praxis-oriented attitude in the IS discipline (as desired by
Ciborra) was the focus of another recent JSIS special issue, “Information systems
strategy as practice: micro strategy and strategizing for IS” [22], which makes a
detailed investigation of the subject of IS strategizing. The idea is to consider IS
strategizing as a practice based on a theoretical framework in which strategy praxis,
strategy practices, and strategy practitioners constitute its main elements [23]. This
literature originates in managerial studies and adds another building block to the
debate between ‘strategy process’ and ‘strategy contents’. Strategy contents cluster
classical approaches such as the resource-based view of the firm [24] or the concept of
dynamic capabilities [25]. Strategy process focuses on how the steps to be followed
for strategic positioning and performance (strategy contents) should be put into
practice, taking account of the influence of internal politics, organizational culture,
and leadership styles [25]. Specifically, IS strategizing or micro-strategizing consists
of both deliberate and emergent patterns of actions where the role of organizational
sub-communities is considered particularly important [26]. Sub-communities, in fact,
are defined as groups of actors who share interests in particular domains of activity
contributing to the emergent strategy realization and collaborating with the wider
organizational community. In this context, the role of information systems can
become relevant because they can both mediate goal-oriented individuals and col-
laborative activities and lead, eventually, to practices generated by repeated patterns
in daily organizational work (technology-mediated practices).
Interestingly, Ciborra broached the notion of designing-in-action in Bricolage,
the third chapter of Labyrinths, to support the search for new strategic systems
[2, pp. 44–47]. This notion and practice is not too different from strategizing and is
still a valid contribution to the ‘strategy contents’ and ‘strategy process’ debate. In
fact, two main routes lead to innovation and competitive advantage: competence
cultivation (bricolage) and radical learning. Competence cultivation consists of
relying on local information and existing routines to gradually cope with new tasks
through learning-by-doing, incremental decision-making, and muddling through.
On the other hand, in radical learning both cognitive and organizational structures
are restructured by intentionally challenging and breaking down established rou-
tines, frames, and institutional arrangements. In both routes, the context is
restructured-in-action, design-in-action takes place and “new strategic information
and information systems will be generated, based on the unique, emerging world
view the designers and users are able to adopt” (ib. p. 46). So the competitive
advantage is actually triggered by the difficulty competitors face in reproducing a
unique setting.
Development, sustainability, and democratization The strategic importance of
local knowledge and practice (which, as we have just seen, was underscored by
Ciborra especially for innovative organizations) was highlighted in the special issue
of MIS Quarterly on “IS in Developing countries” (2007). The guest editors
summarize the results of a group of studies (which they call ‘Local Adaptation and
Cultivation of IS’): “This body of literature opposes the naïve idea that global-
ization is synonymous with cultural homogeneity and reasserts the crucial impor-
tance of understanding and valuing locally meaningful practices” [27, p. 320]. Two
of the special issue papers concord with this line of thinking. Puri [28] examines a
case in India where local knowledge is used to complement scientific knowledge in
a locally designed GIS database. In a process of ‘participatory mapping’ the deep
understanding of the communities about resources (land, water, vegetation)
enabled, for example, the design and mapping of the traditional water-harvesting
structures and, consequently, of the appropriate location for developmental inter-
ventions. Silva and Hirschheim [29] investigate the development of a hospital
information system in Guatemala. The participatory (and decentralized) approach
adopted generated enthusiasm in the formerly skeptical hospital personnel and
persuaded them to share know-how and concerns. When elections changed the
administrative authorities, the project was brought to a halt and institutionalization
stopped since the new government decided to resort to packages provided by aid
agencies. On this issue, some participants told the researchers explicitly that the
administrative system developed was “unique and [couldn’t] be replaced by a
packaged program” (ib., p. 343). In addition to this awareness, one of the relevant
findings (and contributions to IS literature) mentioned by the authors suggests that
the development of such strategic information systems (SIS) “can affect not only
processes and mechanisms of production and control but also can affect values and
beliefs. This is highly relevant as most SIS literature concentrates on processes and
competition with little emphasis on values, beliefs, and emotions” (ib., p. 350). As
noted earlier, building on both uniqueness of practical expertise and soft aspects
(such as emotions) led Ciborra to consider these as critical factors.
The notion of sustainability does not concern only institutionalization, as in the
case of the Guatemala project where discontinuities in the country’s political
leadership prevented it, but is used to address a vast array of issues that concern the
environment and the role of IS. MIS Quarterly dedicated a special issue,
Information Systems and Environmental Sustainability, to these topics in December
2013. The guest editors’ introductory paper [30] highlights two aspects close to
Ciborra’s view. First, just as Ciborra called for a new vision and approach to
research and for a higher consideration of marginal practices (Bricolage), the guest
editors also call for innovation within the academic community to give voice to the
emerging field of IS: “..researchers must not only work on the actual design of
future IS but also establish the ‘in-field’ impact of such systems… When con-
ventional approaches fail, organizations often implement solutions that loosen the
100 P. Depaoli et al.
old shackles to enable the pursuit of new goals… We propose that MIS Quarterly
establish a new territory charged with promoting and publishing impactful green IS
research.” (ib., p. 1270). This quotation reminds the reader of one of Ciborra’s
suggestions (almost an oxymoron) to bolster incremental learning: “Establish sys-
tematic serendipity” [2, p. 51]. Second, Ciborra’s perception of the Internet was that
of a flexible infrastructure that emerged outside any strategic master plan and that
allows people to share knowledge in ways not even imagined by the textbooks
(ib. p. 13). Ten years later, Malhotra and co-authors see a powerful way for
advancing environmental sustainability [30, p. 1271] in the combination of the
‘Internet of people’ (which has changed the nature of communication between
people and organizations) with the ‘Internet of objects’ (ubiquitous networks
interconnected with sensors and sensitized objects).
5 Concluding Remarks
A fuller picture of how Ciborra’s work has been incorporated by the IS discipline
could have been drawn from the analysis of both the 693 citations of Labyrinths
(according to Google Scholar) and the complete set of special issues produced by
the AIS basket journals. Yet, the results of the preceding section have provided
sufficient evidence for the formulation of solid preliminary conclusions concerning
the importance of a process-oriented worldview in IS and organization studies.
First, the technology debate and, specifically, the IS debate are far from resolved:
emerging grand challenges (e.g. sustainability) need to be addressed and scholars
are anchoring their work to increasingly explicit (and varied) ontological and
epistemological roots: Ciborra’s later work went the whole way, as he drew on
phenomenology to develop his research tenets on a range of issues. Second, the
Husserlian life-world seems to have become an inevitable trail for IS researchers to
follow, given that they now rank moods, feelings, and emotions as key factors in
gaining insights into the encounter (the intermingling, according to sociomaterial
literature) of human and non-human entities. Third, local practical expertise—in
which Ciborra was greatly interested thanks to its generative capability of
innovation—is now considered a key determinant not only for IS programs in
developing countries but also for transforming strategic IS ‘alignment’ into IS
‘strategizing’; organizational sub-communities of actors produce technology med-
iated practices that are an integral part of the strategizing process. Fourth,
designing-in-action and bricolage are seen increasingly as the best ways to respond
to the drifting of projects from original plans. In fact, IT shared leadership leverages
technology affordances to enable a decentralized negotiation between the political
and structural forces and the management objectives.
Ten years ago Ciborra’s original thinking led him to build his research according
to an ontologically and epistemologically coherent vision. That vision was some-
what undervalued by his mainstream contemporaries but not by the IS researchers
of today, to whom his key findings are still relevant.
The IS Heritage and the Legacy of Ciborra 101
References
25. Teece, D.J., Pisano, G., Shuen, A.: Dynamic capabilities and strategic management. Strateg.
Manag. J. 18(7), 509–533 (1997)
26. Henfridsson, O., Lind, M.: Information systems strategizing, organizational sub-communities,
and the emergence of a sustainability strategy. J. Strateg. Inf. Syst. 23(1), 11–28 (2014)
27. Walsham, G., Robey, D., Sahay, S.: Foreword: Special issue on information systems in
developing countries. MIS Q. 31(2), 317–326 (2007)
28. Puri, S.K.: Integrating scientific with indigenous knowledge: constructing knowledge alliances
for land management in India. MIS Q. 31(2), 355–379 (2007)
29. Silva, L., Hirschheim, R.: Fighting against windmills: strategic information systems and
organizational deep structures. MIS Q. 31(2), 327–354 (2007)
30. Malhotra, A., Melville, N.P., Watson, R.T.: Spurring impactful research on information
systems for environmental sustainability. MIS Q. 37(4), 1265–1274 (2013)
Collective Awareness Platform
for Sustainability and Social Innovation
(CAPS)
Understanding Them and Analysing Their
Impacts
Abstract The paper describes the Collective Awareness Platform for Sustainability
and Social Innovation domain (CAPS) by using an “inside” perspective, as it is
based on the research work of a CAPS project entitled IA4SI—Impact Assessment
for Social Innovation. The paper first defines Digital Social Innovation as the
technologically enabled version of Social Innovation and describes CAPS projects
consequently. Then, it presents the framework of the quanti-qualitative methodology
developed by the IA4SI project for analysing the impact of CAPS projects. It
considers four main areas of impact: social, economic, political and environmental.
Each aspect is then articulated in several sub-categories required in order to map a
multi-dimensional and internally diversified domain such as CAPS.
Keywords CAPS · Digital social innovation · Social innovation · Socio-economic impact · Political and environmental impact assessment · European projects · EU-funded research · Methodology
The acronym CAPS stands for Collective Awareness Platforms for Sustainability
and Social Innovation. The European Commission (EC) used this acronym for the
first time in 2012, in the context of the Seventh Framework Programme of research.
It served to identify a new group of research projects and, to a certain extent, a
new research area.
The European Commission defines CAPS as follows:
“The Collective Awareness Platforms for Sustainability and Social Innovation
(CAPS) are ICT systems leveraging the emerging ‘network effect’ by combining
open online social media, distributed knowledge creation and data from real
environments (‘Internet of Things’) in order to create awareness of problems and
possible solutions requesting collective efforts, enabling new forms of social
innovation.
The Collective Awareness Platforms are expected to support environmentally
aware, grassroots processes and practices to share knowledge, to achieve changes in
lifestyle, production and consumption patterns, and to set up more participatory
democratic processes. Although there is consensus about the global span of the
sustainability problems that are affecting our current society, including the eco-
nomic models and the environment, there is little awareness of the role that each
and every one of us can play to ease such problems, in a grassroots manner”.1
With a first, dedicated call (Call 10 of FP7, objective 5.5 of Work Programme
2013), the European Commission invested €19 million in 12 projects and
€500,000 in a study on “Social Innovation in the Digital Agenda”. Three
other projects, funded under other programmes, were added to this domain as
well, because their research activity is very relevant for CAPS. As a result, the
programme can be said to consist of 15 on-going projects in this area.
These are seven Research Projects for Grass Roots Experiments and Pilots, four
support actions (including IA4SI) and one project dedicated to the management of a
seed fund for social innovation activities. The CAPS domain is included in the Horizon
2020 programme with an investment of €36 million for the period 2014–
2015. As will be described in the next sections, the expectation is that the IA4SI
methodology can be used for future CAPS projects and beyond, for Digital Social
Innovation projects in general.
Collective Awareness Platforms can be seen as ICT-supported collaborations
of human and non-human actors which enable and facilitate the production, sharing
and sense-making of information gathered through citizen engagement and through
sensors and the like [1].
1 http://ec.europa.eu/digital-agenda/en/collective-awareness-platforms-sustainability-and-social-innovation
Collective Awareness Platform for Sustainability … 105
The term social innovation is composed of two words: “Social” and “Innovation”.
Both terms are largely used in everyday language and are often taken for granted
when, in fact, they are difficult to define in a non-tautological way. It is not trivial to
question the very nature of society [8] or to define the boundaries between what is
social and what is, for example, economic or cultural. Similarly, the literature on
innovation’s political, economic and technological aspects is broad and many
definitions of innovation are available [15]. The first step to accurately define social
innovation is to recognise the seeming ambiguity of the term: its definition may
vary according to the definitions attributed to the concepts “social” and “innova-
tion”. It is useful, therefore, to consider the epistemologies behind the two terms in
the various definitions of social innovation that are currently available, so as to try
to circumscribe the realm of social innovation, and to understand its boundaries.
Moreover, social innovation, as a field of study, is rather interdisciplinary; hence,
definitions and understandings are likely influenced by the various authors’
disciplines.
A starting point for the examination of the term is the definition proposed by
Murray et al. [13]. The authors define social innovation as “new products, services
or methods that tackle pressing and emerging social issues and, at the same time,
transform social interactions promoting new collaboration and relationships”. In
this definition, the term “social” is used in two ways: it characterises the issues to be
solved (such as adaptation to climate change and the effects of aging population on
society) and the methods used for solving such issues, and which imply a modi-
fication (of some sort) in social relationships. In this definition, social innovation
represents both product and process innovation. It is said to generate a new
product/service by changing, at the same time, the way in which this
product/service is produced. It benefits society ‘twice’, that is, by proposing a
solution to a specific problem and by offering new social links and collaboration
opportunities. The authors do not recognise a specific social category as being the
protagonist of social innovation; the innovator can be a social entrepreneur, a
self-organised local community, an association, a company or a government.
Examples of social innovation can include co-housing, the Grameen bank,
eco-towns and car sharing. In terms of process innovation, the understanding of
social innovation is associated with terms such as participation, engagement,
empowerment, co-design, bottom-up, grassroots initiatives and so forth.
The concept can be traced back even further, to the beginning of the
nineteenth century. In his paper, Godin [6] explains that the term social innovation
emerged after the French revolution and, at that time, had both a positive and a
negative connotation. The negative connotation saw social innovation as
106 A. Passani et al.
The concept of social innovation is still nascent and the different forms it can take
have not yet generated a robust way of analysing and measuring its impacts [3, 18].
We can use the lessons learned from this “sector” only in a limited way, as IA4SI
is dealing with international, pilot-based projects and not with entrepreneurial or
publicly driven initiatives [16]. Projects are here interpreted as temporary organiza-
tions ‘to which resources are assigned to undertake a unique, novel and transient
endeavour managing the inherent uncertainty and need for integration in order to
deliver beneficial objectives of change’ [20:7, 9]. A related topic is the localisation
of impacts, especially relevant for digital social innovations, which are expected to
produce benefits in different territorial contexts. It is relevant to examine whether,
and to what extent, the online tools for social innovation enable transformation at a local
community level and, if so, how this happens [20].
Another focal point of investigation is related to the interdisciplinary nature of
social innovation and what it can mean, or achieve in terms of collaboration among
different stakeholders. Social innovation initiatives can serve as a testing ground for
new collaborative processes and for instruments fostering such collaborations.
The IA4SI project wishes to contribute to the debate in the field by analysing the first
15 CAPS projects, their objectives, outputs and impacts.
Concluding this section, we can operationalize CAPS projects by interpreting
them as a sub-category of the wider concept of Digital Social Innovation. They
serve as the main target for which the IA4SI methodology has been drawn up.
CAPS projects are ICT-enabled pilot initiatives, which address pressing social
issues and sustainability issues by promoting the active participation of European
citizens and/or rely on their capability of providing and sharing information. CAPS
projects are digital social innovation initiatives and as such are expected to propose
innovative solutions which should be more efficient, effective, just and sustainable
than available ones. CAPS initiatives are multidisciplinary in nature and most of
them have a relevant research aspect.
Considering the topics covered by on-going CAPS projects, the topics
suggested by the EU in the first call dedicated to CAPS, the categories used by the
Digital Social Innovation project2 for categorising European initiatives in the field,
and the categorisation of social innovation projects proposed by the Tepsie project
[3], it is possible to say that CAPS projects focus (or could focus in the future) on
the following topics:
• Energy and environment
• Social inclusion
• Participation and democracy
• Economy: production and consumption
2 www.digitalsocial.eu
The IA4SI methodology described in this section has been elaborated starting from an
extensive literature review on Social Innovation, Digital Social Innovation, impact
assessment methods for these domains, and conceptually close domains such as the
third sector, development-related investments and online community assessment.
IA4SI builds on previous European projects in the field of impact assessment,
such as SEQUOIA,4 ERINA+5 and MAXICULTURE.6 Those previous projects
offered important lessons learned that have been incorporated into the IA4SI
methodology.
3 A description of each of the CAPS projects can be found at http://caps2020.eu/about-caps/caps-ict-workprogramme-2013/ and at https://ec.europa.eu/digital-agenda/en/caps-projects. Most of the projects started in October 2013 and will last 24 or 30 months.
4 For an overview of the SEQUOIA methodology and results see [16]. The complete methodology is described in Monacciani, Navarra, Passani, Bellini, 2011, and a practical approach to its usage is described in [10].
5 The ERINA+ methodology and related tools are described in Passani and others (2013).
6 The MAXICULTURE methodology is described in Passani, Bellini, Spagnoli, Satolli, Debicki, Ioannidis, Crombie, 2014.
7 Available at http://www.iaia.org/publicdocuments/special-publications/What.%20is%20IA_web.pdf.
impacts, it will also investigate the projects’ capability to create new job positions and
to foster employment in general, as well as the possible impact in terms of training
and human capital development. The impact of CAPS on academia, their scientific
impact through publications and IPR development, will also be considered.
The social impact index is articulated in the following six sub-categories:
• Impact on community building and empowerment
• Impact on information
• Impact on ways of thinking and behaviours
• Impact on education and human capital
• Impact on science and academia
• Impact on employment
By aggregating indicators that are included in different dimensions and
sub-dimensions, it will also be possible to investigate the CAPS impact on Social
Capital and on Social Inclusion: two dimensions that the IA4SI team considers
extremely relevant in this context.
Under the Political impact dimension, the methodology will evaluate the
capability of CAPS to foster users’ participation in civil society organisations, to get
active for their communities and to develop new forms of collaboration. Similarly, it
will consider the impact on users’ political participation and will evaluate the projects’
capability of influencing policy makers and institutions.
The analysis of CAPS projects’ impacts will take advantage of two main online tools
developed by the IA4SI project. These tools are the “Self-assessment toolkit”
(SAT) and the “User Data Gathering Interface” (UDGI). The first is dedicated
to CAPS project coordinators and partners and the second to CAPS users.
By entering information in the SAT, CAPS project coordinators and partners will
follow a six-step process which will lead them to the assessment results.
1. First of all, CAPS representatives will describe the inputs of their project
including the budget, the human resources available at project level, the
pre-existing technological and non-technological elements the project builds
on, etc. As part of this step, project representatives will describe their zero
scenario and the social issues they are addressing.
2. Secondly, they will select their stakeholders and end-users, in this way
describing “who” will benefit from the project outputs.
3. Thirdly, they will describe their outputs: technological and non-technological
ones such as publications, licences, patents, etc.
4. Then they will select the impact dimensions that are most relevant for them. The
IA4SI methodology is modular so that each project can personalise it. As an
example, a project can select impact on employment and impact on information
as relevant and exclude impact on education and human capital because its
outputs and its activities are not leading to this kind of impacts.
5. At this point the SAT will show all the questions related to the impact
dimensions selected by the project representatives. The data requested are both
qualitative and quantitative.
6. The data inserted by CAPS representatives will be elaborated in real time by the
SAT, which will provide them with an impact assessment report. In a graphic, easy-to-
understand way, project representatives will be able to visualise their impacts by
comparing their performance with a set of benchmarks. Each project will be able to
see the score obtained on the 8 IA4SI complex indices (social, economic, envi-
ronmental, political impacts, efficiency, effectiveness, innovativeness and fair-
ness) and to explore the results achieved on the composing indicators.
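The modular logic of steps 4–6 can be sketched in code. This is a highly simplified illustration, not the actual SAT implementation: the dimension names, question bank, and benchmark values below are invented placeholders standing in for the real IA4SI indicators.

```python
# Hypothetical sketch of the SAT's modular flow: a project selects the impact
# dimensions relevant to it (step 4), answers only the matching questions
# (step 5), and receives per-dimension scores compared against benchmarks
# (step 6). All names and numbers are illustrative, not IA4SI's.

QUESTION_BANK = {
    "employment": ["jobs_created", "training_hours"],
    "information": ["info_quality", "info_reach"],
    "education": ["courses_offered", "skills_gained"],
}

BENCHMARKS = {"employment": 420.0, "information": 550.0, "education": 380.0}

def build_questionnaire(selected_dimensions):
    """Step 5: show only the questions for the selected dimensions."""
    return {dim: QUESTION_BANK[dim] for dim in selected_dimensions}

def assessment_report(answers, selected_dimensions):
    """Step 6: average each dimension's answers and compare to a benchmark."""
    report = {}
    for dim in selected_dimensions:
        scores = [answers[q] for q in QUESTION_BANK[dim]]
        index = sum(scores) / len(scores)
        report[dim] = {"index": index, "vs_benchmark": index - BENCHMARKS[dim]}
    return report

# Step 4: a project excludes "education" because its activities do not
# lead to that kind of impact, then answers the remaining questions.
selected = ["employment", "information"]
answers = {"jobs_created": 600, "training_hours": 300,
           "info_quality": 700, "info_reach": 500}
print(assessment_report(answers, selected))
```

The point of the sketch is the modularity: excluded dimensions contribute no questions and no scores, so each project personalises the assessment while remaining comparable on the dimensions it did select.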
In parallel, CAPS users will be invited to fill in the UDGI, which looks like an
online questionnaire and investigates the CAPS benefits from the point of view of
their users. The information gathered by the UDGI will appear in the SAT: each
CAPS project will be able to see the opinions of its users in an aggregated,
anonymous way and it will be possible to compare the results of their
self-assessment with the point of view of their users.
The IA4SI team will use all the gathered data to develop two impact assessment
reports: one will include the assessment of each CAPS project and one will analyse
the data at an aggregated, domain level. Besides this, a set of best practices will be
identified and further analysed using a case-study approach.
As mentioned, each complex indicator is composed of several indicators and the
data have different measurement units, such as monetary value, years, yes/no, relative
values, 1–6 point Likert scales, etc. Clearly, the data need to be treated before their
aggregation into indices. Indeed, the final goal of the IA4SI methodology is to
synthesize the vertical (per category or subcategory) or transversal impacts in indices
expressed on a 0–1000 scale in order to make project performances comparable.
Before doing so, the indicators composing the complex indices will be normal-
ized using a min-max approach (the normalization is performed by subtracting the
minimum value and dividing by the range of the indicator values). If extreme
values or outliers could distort the transformed indicator, statistical techniques can
neutralise these effects. After having normalised the indicators onto a 0–1000 scale it
is possible to calculate the aggregated index for each impact subcategory simply by
using the arithmetic mean of those indicators. Recursively, in the same way, it is
possible to pass from subcategory impact indices to impact area indices. The possibility
of attributing different weights to the various indicators and indices is under analysis;
this topic will be discussed with CAPS project representatives together with the
benchmark system that is under development at the time of writing.
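The normalization and aggregation scheme above can be written out as a minimal sketch. It assumes, purely for illustration, two indicators of one subcategory observed across three projects; the indicator names are invented, and the outlier-neutralising statistical techniques mentioned in the text are omitted.

```python
# Min-max normalization onto a 0-1000 scale, then recursive aggregation by
# arithmetic mean: indicator scores -> subcategory index -> impact area index.

def minmax_normalise(values, lo=None, hi=None, scale=1000):
    """(x - min) / (max - min) * scale; bounds may be given explicitly,
    e.g. the fixed 1-6 range of a Likert-scale indicator."""
    lo = min(values) if lo is None else lo
    hi = max(values) if hi is None else hi
    if hi == lo:  # degenerate case: all projects score the same
        return [0.0 for _ in values]
    return [(x - lo) / (hi - lo) * scale for x in values]

def aggregate(indicator_series):
    """Arithmetic mean across normalised indicator series (one value per
    project). Applied again to subcategory indices, it yields area indices."""
    return [sum(group) / len(group) for group in zip(*indicator_series)]

# Two hypothetical indicators of one subcategory, across three projects.
jobs_created = minmax_normalise([0, 5, 10])       # -> [0.0, 500.0, 1000.0]
likert_item = minmax_normalise([1, 4, 6], 1, 6)   # fixed 1-6 Likert range
subcategory_index = aggregate([jobs_created, likert_item])
print(subcategory_index)  # -> [0.0, 550.0, 1000.0]
```

Because every indicator is mapped onto the same 0–1000 scale before averaging, heterogeneous units (money, years, Likert points) become commensurable, and the same `aggregate` step can be reused at each level of the hierarchy; introducing the weights under discussion would simply replace the plain mean with a weighted one.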
4 Conclusions
The methodology presented in this document constitutes a first draft that will be
tested by CAPS projects from November 2014 to the first months of 2015; the
testing coincides with the first data-gathering phase. The analysis at projects level
and at CAPS domain level will be available starting from August 2015. In the
context of this paper it was not possible to describe the indicators and variables that
constitute each index, nor was it possible to show the formulas that will be
applied or the analyses and visualisations that will be offered by the IA4SI toolkit.
All these elements are going to be the focus of future papers; in the meantime, more
information about the IA4SI project, its methodology and its development are
available at www.IA4SI.eu where a full description of the methodology is available
in the download section [17].
References
1. Arniani, M., Badii, A., De Liddo, A., Georgi, S., Passani, A., Piccolo, L.S.G., Teli, M.:
Collective Awareness Platform for Sustainability and Social Innovation: An Introduction
(2014)
2. BEPA: Empowering people, driving change. Social innovation in the European Union.
Luxembourg: Publications Office of the European Union (2011)
3. Bund, W., Hubrich, K., Schmitz, B., Mildenberger, G., Krlev, G.: Blueprint of social
innovation metrics—contributions to an understanding of opportunities and challenges of
social innovation measurement (2013). Deliverable of the Project Tepsie, EU 7FP. http://
www.tepsie.eu/index.php/publications
4. Epstein, M.J., McFarlan, F.W.: Measuring the efficiency and effectiveness of a nonprofit’s
performance, Strategic Finance, 93/4, pp. 27–34. http://www.imanet.org/PDFs/Public/SF/
2011_10/10_2011_epstein.pdf (2011). Accessed 15 March 2014
5. European Commission: Communication from the Commission to the European Parliament, the
Council, the European Economic and Social Committee and the Committee of the Regions
Abstract Although the Business Model (BM) concept provides a convenient unit
of analysis in the business practices, BM research in the Information Systems
(IS) field emphasizes blurriness and divergences in its structure. With this paper we
provide a clarification of the BM concept and update Al-Debei and Avison’s [1]
analysis of the BM literature. Using a structured methodology, we review the titles
and the abstracts of 108 articles from the IS literature and examine a significant subset
of 49 articles. Our work contributes, first, to formalizing the concept of BM as
instantiated in the IS domain and organizing BM studies around two different frameworks.
Second, it highlights the BM research streams and their current states of the art.
Last, it discusses the current limitations of the BM studies and offers the basis for
future research.
1 Introduction
A BM represents the core business concept of a company; it depicts the logic of the
company and it outlines how a company creates and captures value [1–4]. The
concept of BM established itself during the Internet boom, when traditional
firms transformed themselves into digital ones with the rise of the commercial use
of modern information communication technologies (ICT). Researchers agree that
the interest in the BM concept in the IS field has grown ever since. Although the
BM concept is considered applicable for all business in any sector [2], the majority
of research into BM in the IS field is concerned about software industry and
application service and infrastructure providers [5–10], online news, advertising
and social media BM [11, 12].
The BM concept appears to provide a convenient unit of analysis in business
practice; therefore, in recent years we have observed an increasing number of publica-
tions concerning it. The particular origins of the BM concept in diverse disci-
plines such as eBusiness and eCommerce, IS, strategy, business management,
economics, and technology [13, 14] contribute to the blurriness of the structure of
the BM research. It is interesting to notice that the BM concept and its associated
research are still considered young and new, although the concept has appeared in scholarly
journals for almost 20 years. Therefore, this paper is motivated by the need for a
clarification of the BM concept in the IS domain. With this paper we answer the
following research question: “What is the current understanding of the BM con-
cept?”. Our work update Al-debei and Avison [1] literature review on BM litera-
ture, where they clarify the BM concept, present a comprehensive conceptual
framework, and illustrate and discuss the BM compositional facets providing a
common and leveraged understanding of the concept. The authors [1] define the
BM as “an abstract representation of an organization, of all core interrelated
architectural, co-operational and financial arrangements designed and developed
by an organization presently and in the future, as well as all core products and/or
services the organization offers, or will offer, based on these arrangements that are
needed to achieve its strategic goals and objectives” [1].
The paper is structured as follows. In the next section we describe the employed
research methods. Next, we present the literature review through concept matrices
and discuss it around two different criteria. Before presenting the conclusions, we
discuss the contributions and limitations of our work, and the future research
directions.
2 Research Methodology
To select the relevant papers that scope the literature review, we followed the
methodology proposed by [15]. We performed an electronic search for the keyword
“business model(s)” included in the title or in the abstract of articles in the
Business Model in the IS Discipline: A Review … 117
chosen time period (from January 1st 2009 to June 1st 2014), spanning leading
journals in the IS discipline (this criterion has been used in similar previous work, e.g.
[16]). As a first step we selected “A+” and “A” journals, according to the ranking
proposed by [17]. The journals selected were MIS Quarterly, Information Systems
Research, Journal of MIS, European Journal of Information Systems, Information
Systems Journal, Journal of the Association for Information Systems, and Journal
of Strategic Information Systems. The following databases were used to accelerate
the identification of relevant articles: ProQuest, EBSCO, ScienceDirect, and JSTOR
archive. In an effort to broaden the search beyond the original set of journals, we
also examined cited works of potential interest in selected IS conferences pro-
ceedings [18], such as ICIS, AMCIS, HICSS, as suggested by [19, 20]. We col-
lected a total of 108 articles for the defined IS domain.
To evaluate whether the inclusion of an article in the literature review was
warranted, at least one of the following criteria had to be satisfied:
– The article is concerned with, or is relevant to, the BM concept in IS;
– The article describes or identifies BM components [21];
– The article, while concerned with other research questions and topics, relates,
directly or indirectly, to the BM concept.
Following the above criteria, from the 108 articles we selected 49 papers for the
analysis. Our literature review is organized around two different criteria: one based on
the Unified BM Conceptual Framework [1] and the other based on the BM Concept
Hierarchy [4]. We then compile concept-matrices to present the results of the
analysis.
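The compilation of a concept matrix can be sketched as follows, in the familiar papers-by-concepts style. The article identifiers and the coding shown are illustrative placeholders, not our actual coding of the 49 papers.

```python
# Minimal sketch of compiling a concept matrix: rows are reviewed articles,
# columns are the framework's concepts, and an "X" records that an article
# addresses a concept. The coding below is invented for illustration.

CONCEPTS = ["value proposition", "value network",
            "value architecture", "value finance"]

coding = {                         # article id -> set of concepts it addresses
    "[24]": {"value proposition"},
    "[25]": {"value proposition", "value network"},
}

def concept_matrix(coding, concepts):
    """One row per article, one 'X'/'' cell per concept."""
    rows = []
    for article, found in sorted(coding.items()):
        rows.append([article] + ["X" if c in found else "" for c in concepts])
    return rows

for row in concept_matrix(coding, CONCEPTS):
    print(row)
```

Keeping the coding as article-to-concept sets and generating the matrix mechanically makes the review reproducible: adding a column (a new framework facet) or a row (a newly included paper) requires no manual re-tabulation.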
Using the Unified BM Conceptual Framework [1], we aim to define the BM
concept and the different BM components. The Unified BM Conceptual Framework
[1] defines the BM concept comprehensively, highlighting the major facets and
aspects related to the concept, and revealing important inter-relationships. The
framework comprises four fundamental aspects. First, it defines the BM primary
dimensions—value proposition (VP), value network (VN), value architecture (VA)
and value finance (VF)—and forms a complete ontological structure of the concept.
Second, the framework organizes the BM features, also called modeling principles
as guidelines that direct the modeling course of action of BMs. Third, it explains the
BM reach, as the BM is seen as an intermediate layer between business strategy and
ICT-enabled business processes. Fourth, the framework [1] explores three major
functions of the BM within digital organizations to shed light on the practical
meaning of the concept.
Using the BM Concept Hierarchy [4], we aim to identify the different BM types
and BM instances described in the literature selected. The selected relevant liter-
ature was also classified in three different categories, according to the BM Concept
Hierarchy presented by [4]. In the literature, the BM expression can stand for a BM
definition or a definition of BM components, for specific BM types (e.g. the freemium BM
[22]), or for concrete real-world instances of BMs (e.g. the Kodak BM [23]). The BM
Concept category includes authors describing the BM concept as an abstract
overarching concept that can describe all real world businesses. Authors in this
118 G. Pozzi et al.
category substantiate the conceptual aspect. The category includes the definition of
what a BM is and of what belongs in it. The BM Types category includes authors
describing BM patterns, generic but having specific common characteristics, and
BMs belonging to specific industries. The BM Instance category includes authors
that describe real-world BMs.
The next section presents the results of the analysis of the literature on the base
of the two selected criteria.
No. | Reference | Value proposition | Value architecture | Value network | Value finance | Conceptual | Multi-level | Dynamic | Granular | Coherent | Intermediate layer | Alignment instrument | Interceding framework | Knowledge capital
1 [24] X
2 [25] X X X X
3 [26] X X X X X X X
4 [27] X X X X X
5 [28] X X X X X
6 [29] X
7 [30] X X X X
8 [2] X X X X X X X
9 [31] X X X X
10 [32] X
11 [3] X X X X
12 [33] X X X X
13 [9] X X X X
14 [34] X X X X X
15 [35] X X X
16 [36] X X X X X
17 [37] X X X X
18 [38] X
19 [39] X X
20 [40] X X X X
21 [41] X X X X X X
22 [42] X X X X
23 [7] X X X X
24 [11] X X X X X
25 [8] X X X X
(continued)
Table 1 (continued)
26 [43] X X X
27 [44] X X X X X X
28 [45] X X X
29 [46] X X X X
30 [47] X X X X X
31 [48] X X X X X
32 [49] X X X X
33 [50] X X X X
34 [51] X X X X
Business Model in the IS Discipline: A Review …
The analysis of the selected literature shows the existence of four main research
streams. The first research stream comprises a flow view of the BM and thus covers
the process of value exchange in a business [26, 34, 35, 46]. The second
stream focuses on the constitutive characteristics of the BM and on their depen-
dencies and interdependencies [42, 44]. Our examination reveals that researchers
agree on describing the BM elements as constitutive sub-parts that offer a
structured approach for standardized description, analysis and comparison.
Despite the different nomenclatures and arrangements of the BM com-
ponents, we hold that the [1] framework best represents the state of the art of this
research stream. Filling one of the major BM concept gaps highlighted by [2],
[42] analyze the dependencies and interdependencies that exist between business
model components. The analysis shows that almost every BM component is
interconnected with the others, making all the relations between BM components
structural and undisputed. The third stream focuses on the BM generation, design,
implementation and evaluation methods that allow the development and correct
management of a BM instance for a specific business sector [26, 27, 41, 52]. The
current state of the art shows agreement among researchers, who indicate in several
studies the steps and modalities for BM development. As for BM management and
evaluation, researchers agree on the use of a measurement system based on key
performance indicators (KPIs) to align the BM and operational results. This research
stream also highlights the importance of the BM in the IS field; see the next section
for a more complete discussion. The last research stream focuses on the adoption
and dynamics of the BM concept in a specific industry or business sector. Examples
of this research stream can be found, e.g., in the contributions of [37, 50, 63], which
show that IS researchers are mainly interested in the digital industry.
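The KPI-based alignment between a BM and operational results that these studies describe can be sketched as a simple measurement system. The following is a minimal illustrative sketch; the KPI names, targets and values are invented assumptions, not taken from the reviewed papers.

```python
# Minimal sketch of a KPI-based measurement system aligning a business
# model (BM) with operational results, as the reviewed studies suggest.
# All KPI names, targets and actual values are hypothetical examples.

def alignment_report(targets, actuals):
    """Compare per-KPI targets against operational results.

    Returns a dict mapping each KPI to the ratio actual/target, so
    values below 1.0 flag BM components that operations fail to meet.
    """
    return {kpi: actuals[kpi] / target for kpi, target in targets.items()}

# Hypothetical KPIs attached to BM components (e.g. value proposition,
# value finance) and the corresponding operational measurements.
targets = {"customer_retention": 0.90, "revenue_per_user": 25.0}
actuals = {"customer_retention": 0.81, "revenue_per_user": 26.5}

report = alignment_report(targets, actuals)
misaligned = [kpi for kpi, ratio in report.items() if ratio < 1.0]
print(misaligned)  # the KPIs on which the BM and operations diverge
```

A real measurement system would of course attach each KPI to a specific BM component and feed it from operational data; the point here is only the alignment check itself.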
We hold that the BM concept is mainly represented verbally as far as its definition
and component description are concerned. Indeed, in our analysis we found four
recurring forms of BM representation (BMR). A BMR is generally a framework
for representing—even graphically—the model of a specific business.
(1) The STOF framework [53] highlights four different domains—service, tech-
nology, organization and finance—that generate value for business stakeholders.
The STOF BM components can easily be associated with and/or included in the four
components of the [1] framework, given the thematic similarity of their meaning.
(2) The e3-value [64, 65] identifies actors and the value exchanges that occur
among them. These value exchanges are valued financially to understand the
economic performance each actor in the network is likely to achieve. The STOF
framework and the e3-value representation are used in the first identified BM
research stream. (3) The BM Canvas, or BM Ontology [52], serves as an extensive
meta-model with a wide scope of applications. It is used for business modeling and
business process structuring. The focus is on the value proposition (VP), as the core
of the BM. The right side of the BM focuses on the client perspective and the revenue
model; the left side focuses on the activities, partners and cost structure. (4) The Unified BM
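The e3-value idea described above—financially valuing the exchanges between actors to estimate each actor's likely economic performance—can be illustrated with a minimal sketch. The actors and amounts below are invented for illustration and do not come from [64, 65].

```python
# Minimal sketch of an e3-value-style model: actors exchange objects of
# monetary value, and summing the exchanges estimates each actor's
# economic performance. Actors and amounts are hypothetical.
from collections import defaultdict

# Each value exchange: (from_actor, to_actor, monetary_value)
exchanges = [
    ("customer", "provider", 10.0),   # customer pays for a service
    ("provider", "supplier", 4.0),    # provider buys an input
]

def net_position(exchanges):
    """Return each actor's net inflow minus outflow over all exchanges."""
    net = defaultdict(float)
    for payer, payee, value in exchanges:
        net[payer] -= value
        net[payee] += value
    return dict(net)

print(net_position(exchanges))
# e.g. the provider nets +6.0: it receives 10.0 and pays 4.0
```

The actual e3-value methodology also models value objects, market segments and dependency paths; this sketch keeps only the financial-valuation core.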
The BM concept helps increase mutual understanding and integration between the
business strategy and IS domains [1, 2, 4, 52, 54]. The BM is able to create a
common language, fostering a shared comprehension. Understanding a
company's BM facilitates and improves the choices concerning the IS/ICT
infrastructure, its application portfolio, and its role and structure. The BM helps
in defining a company's goals and facilitates requirements engineering, as the
IS/ICT infrastructure has to be aligned with those goals and the business processes.
The BM concept helps to identify the indicators of the executive IS for monitoring
the strategy, based on the financial, customer, internal business, and innovation and
learning perspectives. Through the BM concept, entrepreneurs should be able to
answer the questions: "Which technology infrastructure is required and crucial to
the success of my business model?", "How can IT support the processes and the
workflow required by my BM?", and "What information flows, processes, and
workflow does my BM require?" [52].
IS research can positively impact the discipline of strategic planning by validating
conceptual frameworks from design thinking and from socio-technical systems that
can improve strategic planning outcomes [66]. Design process techniques and
methodologies, such as ideation, customer and user insights, visual thinking,
prototyping, storytelling and scenarios, could significantly improve an organi-
zation's responses to strategic questions [52, 54]. IS can direct research in
computer-aided design (CAD) to assist the process of designing strategic man-
agement objects, such as the BM [2, 4, 52, 54]. Presenting the Business Model
Toolbox, [52] state that, through BM CAD assistance, entrepreneurs are able to
create, store, manipulate and track BMs, enabling deep and comprehensive analysis,
remote collaboration and quick simulations. The BM presents views of the business
logic underlying the entity's existence that meet the needs of different types of
users, such as the firm's stakeholders, the firm's internal resources, and external third
parties. Among these users, IS developers, as a subset of managers and
decision-makers, require a detailed depiction of the business that facilitates systems
requirements engineering, knowledge management, and workflow and process goal
definition [4].
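The create, store, manipulate and track operations that [52] attribute to BM CAD assistance can be sketched as a tiny in-memory repository with version tracking. This is an illustrative sketch under assumed names and structure, not the actual Business Model Toolbox.

```python
# Illustrative sketch of BM CAD-style support: create, store, manipulate
# and track business models, as [52] describe. This is not the actual
# Business Model Toolbox API; class and method names are assumptions.
import copy

class BMRepository:
    def __init__(self):
        self._versions = {}  # BM name -> list of successive snapshots

    def create(self, name, components):
        """Store the first version of a BM as a dict of components."""
        self._versions[name] = [copy.deepcopy(components)]

    def manipulate(self, name, **changes):
        """Apply changes as a new tracked version, keeping the history."""
        new = copy.deepcopy(self._versions[name][-1])
        new.update(changes)
        self._versions[name].append(new)

    def history(self, name):
        """Return all tracked versions, enabling comparison over time."""
        return self._versions[name]

repo = BMRepository()
repo.create("startup", {"value_proposition": "freemium analytics",
                        "revenue_model": "subscriptions"})
repo.manipulate("startup", revenue_model="ads + subscriptions")
print(len(repo.history("startup")))  # prints 2: two tracked versions
```

Keeping every snapshot immutable (hence the deep copies) is what makes tracking and quick what-if comparison possible, which is the capability the cited work emphasizes.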
6 Conclusions
This paper clarifies the BM concept as a follow-up to the [1] literature review.
Following a structured methodology, the authors reviewed the IS-related literature
from 2009 to 2014 and analyzed 49 papers in depth. They classified the current
literature according to two frameworks that highlight different aspects of the BM
concept, such as BM components, characteristics, functions, and typologies. The
result of the analysis shows the current state of the art of BM research in the IS
field. The paper presents the research gaps that have been closed and those that are
still open in the field.
References
1. Al-Debei, M.M., Avison, D.: Developing a unified framework of the business model
concept. Eur. J. Inf. Syst. 19, 359–376 (2010)
2. Burkhart, T., Krumeich, J., Werth, D., Loos, P.: Analyzing the business model concept—a
comprehensive classification of literature. In: Proceedings of ICIS 2011 (2011)
3. Krumeich, J., Burkhart, T., Werth, D., Loos, P.: Towards a component-based description of
business models: a state-of-the-art analysis. In: Proceedings of AMCIS 2012 (2012)
4. Osterwalder, A., Pigneur, Y., Tucci, C.L.: Clarifying business models: origins, present, and
future of the concept. Commun. Assoc. Inf. Syst. 16, 1–25 (2005)
5. Demirkan, H., Cheng, H.K., Bandyopadhyay, S.: Coordination Strategies in an SaaS Supply
Chain. J. Manag. Inf. Syst. 26, 119–143 (2010)
6. Giessmann, A., Fritz, A., Caton, S., Legner, C.: A method for simulating cloud business
models: a case study on platform as a service. In: Proceedings of ECIS 2013 Completed
Research (2013)
7. Labes, S., Erek, K., Zarnekow, R.: Common patterns of cloud business models. In:
Proceedings of AMCIS 2013 (2013)
8. Morgan, L., Conboy, K.: Value Creation in the Cloud: Understanding Business Model Factors
Affecting Value of Cloud Computing. In: Proceedings of AMCIS 2013 (2013)
9. Rensmann, B.: Two-sided cybermediary platforms: the case of hotel.de. In: Proceedings of
AMCIS 2012 (2012)
10. Susarla, A., Barua, A., Whinston, A.B.: A transaction cost perspective of the software as a
service business model. J. Manag. Inf. Syst. 26, 205–240 (2009)
11. Malsbender, A., Beverungen, D., Voigt, M., Becker, J.: Capitalizing on social media
analysis—insights from an online review on business models. In: Proceedings of AMCIS 2013
(2013)
12. Oechslein, O., Hess, T.: Paying for news: opportunities for a new business model through
personalized news aggregators (PNAs). In: Proceedings of AMCIS 2013 (2013)
13. Pateli, A.G., Giaglis, G.M.: A research framework for analysing eBusiness models. Eur. J. Inf.
Syst. 13, 302–314 (2004)
14. Shafer, S.M., Smith, H.J., Linder, J.C.: The power of business models. Bus. Horiz. 48, 199–
207 (2005)
15. Webster, J., Watson, R.T.: Analyzing the past to prepare for the future: writing a literature
review. MIS Q. 26, 13–23 (2002)
16. Merali, Y., Papadopoulos, T., Nadkarni, T.: Information systems strategy: past, present,
future? J. Strateg. Inf. Syst. 21, 125–153 (2012)
17. Lowry, P., Moody, G., Gaskin, J., Galletta, D., Humpherys, S., Barlow, J., Wilson, D.:
Evaluating journal quality and the association for information systems senior scholars’ journal
basket via bibliometric measures: do expert journal assessments add value? Manag. Inf. Syst.
Q. 37, 993–1012 (2013)
18. Webster, J., Watson, R.T.: Analyzing the past to prepare for the future: writing a literature
review. MIS Q. 26, R13 (2002)
19. Chan, H.C., Kim, H.-W., Tan, C.W.: Information systems citation patterns from international
conference on information systems articles. J. Am. Soc. Inf. Sci. Technol. 57, 1263–1274
(2006)
20. Walstrom, K.A., Hardgrave, B.C.: Forums for information systems scholars: III. Inf. Manage.
39, 117–124 (2001)
21. Al-Debei, M.M., El-Haddadeh, R., Avison, D.: Defining the business model in the new
world of digital business. In: Proceedings of AMCIS 2008 (2008)
22. Leonardi, P.M.: When flexible routines meet flexible technologies: affordance, constraint, and
the imbrication of human and material agencies. MIS Q. 35, 147–168 (2011)
23. Lucas Jr, H.C., Goh, J.M.: Disruptive technology: how Kodak missed the digital photography
revolution. J. Strateg. Inf. Syst. 18, 46–55 (2009)
24. Clemons, E.K.: Business models for monetizing internet applications and web sites:
experience, theory, and predictions. J. Manag. Inf. Syst. 26, 15–41 (2009)
25. Brockmann, C., Gronau, N.: Business models of ERP system providers. In: Proceedings of
AMCIS 2009 (2009)
26. Kijl, B., Boersma, D.: Developing a business model engineering & experimentation tool—the
quest for scalable lollapalooza confluence patterns. In: Proceedings of AMCIS 2010 (2010)
27. Kijl, B., Nieuwenhuis, B.: Deploying a Telerehabilitation service innovation: an early stage
business model engineering approach. In: Proceedings of 47th Hawaii International
Conference on System Sciences (2014)
28. Feller, J., Finnegan, P., Nilsson, O.: Open innovation and public administration:
transformational typologies and business model impacts. Eur. J. Inf. Syst. 20, 358–374 (2011)
29. Tay, K.B., Chelliah, J.: Disintermediation of traditional chemical intermediary roles in the
Electronic Business-to-Business (e-B2B) exchange world. J. Strateg. Inf. Syst. 20, 217–231
(2011)
30. Zolnowski, A., Böhmann, T.: Business modeling for services: Current state and research
perspectives. In: Proceedings of AMCIS 2011 Submissions (2011)
31. Raivio, Y., Luukkainen, S., Seppala, S.: Towards open telco—business models of api
management providers. In: Proceedings of 47th Hawaii International Conference on System
Sciences (2014)
32. Lin, M., Ke, X., Whinston, A.B.: Vertical differentiation and a comparison of online
advertising models. J. Manag. Inf. Syst. 29, 195–236 (2012)
33. Di Valentin, C., Burkhart, T., Vanderhaeghen, D., Werth, D., Loos, P.: Towards a framework
for transforming business models into business processes. In: Proceedings of AMCIS 2012
(2012)
34. Moreno, C., Tizon, N., Preda, M.: Mobile cloud convergence in GaaS: a business model
proposition. In: Proceedings of 45th Hawaii International Conference on System Sciences
(2012)
35. Kundisch, D., John, T.: Business model representation incorporating real options: an extension
of e3-Value. In: Proceedings of 45th Hawaii International Conference on System Sciences
(2012)
36. Buder, J., Felden, C.: Evaluating Business Models: Evidence on user understanding and
impact to BPM correspondence. In: Proceedings of 45th Hawaii International Conference on
System Sciences (2012)
37. Schief, M., Buxmann, P.: Business models in the software industry. In: Proceedings of 45th
Hawaii International Conference on System Sciences (2012)
38. Keen, P., Williams, R.: Value architectures for digital business: beyond the business model.
MIS Q. 37, 643–647 (2013)
39. Sitoh, M., Pan, S., Zheng, X., Chen, H.: Information system strategy for opportunity discovery
and exploitation: insights from business model transformation. In: Proceedings of ICIS (2013)
40. Giessmann, A., Legner, C.: Designing business models for platform as a service: towards a
design theory. In: Proceedings of ICIS (2013)
41. Di Valentin, C., Emrich, A., Werth, D., Loos, P.: Architecture and Implementation of a
decision support system for software industry business models. In: Proceedings of AMCIS
2013 (2013)
42. Krumeich, J., Werth, D., Loos, P.: Interdependencies between business model components—a
literature analysis. In: Proceedings of AMCIS (2013)
43. Bonakdar, A., Weiblen, T., Di Valentin, C., Zeissner, T., Pussep, A., Schief, M.:
Transformative influence of business processes on the business model: classifying the state
of the practice in the software industry. In: Proceedings of 46th Hawaii International
Conference on System Sciences (2013)
44. Zolnowski, A., Bohmann, T.: Customer integration in service business models. In:
Proceedings of 46th Hawaii International Conference on System Sciences (2013)
45. Rai, A., Tang, X.: Information technology-enabled business models: a conceptual framework
and a coevolution perspective for future research. Inf. Syst. Res. 25, 1–14 (2014)
46. Ryschka, S., Tonn, J., Ha, K.-H., Bick, M.: Investigating location-based services from a
business model perspective. In: Proceedings of 47th Hawaii International Conference on
System Sciences (2014)
47. Fritscher, B., Pigneur, Y.: Computer aided business model design: analysis of key features
adopted by users. In: Proceedings of 47th Hawaii International Conference on System
Sciences (2014)
48. Kuebel, H., Limbach, F., Zarnekow, R.: Business models of developer platforms in the
telecommunications industry—an explorative case study analysis. In: Proceedings of 47th
Hawaii International Conference on System Sciences (2014)
49. Zolnowski, A., Weiss, C., Bohmann, T.: Representing service business models with the
service business model canvas—the case of a mobile payment service in the retail industry. In:
Proceedings of 47th Hawaii International Conference on System Sciences (2014)
50. Ghezzi, A., Dramitinos, M., Agiatzidou, E., Johanses, F.T., Losethagen, H., Rangone, A.,
Balocco, R.: Internet interconnection techno-economics: a proposal for assured quality
services and business models. In: Proceedings of 47th Hawaii International Conference on
System Sciences (2014)
51. Lindman, J., Kinnari, T., Rossi, M.: Industrial open data: case studies of early open data
entrepreneurs. In: Proceedings of 47th Hawaii International Conference on System Sciences
(2014)
52. Osterwalder, A., Pigneur, Y.: Business Model Generation: A Handbook For Visionaries,
Game Changers, and Challengers. Wiley, Hoboken, NJ (2010)
53. Bouwman, H., Meng Zhengjia, van der Duin, P., Limonard, S.: A business model for IPTV
service: a dynamic framework. Information 10, 22–38 (2008)
54. Osterwalder, A., Pigneur, Y.: Designing business models and similar strategic objects: the
contribution of IS. J. Assoc. Inf. Syst. 14, 237–244 (2013)
55. Oestreicher-Singer, G., Zalmanson, L.: Content or community? a digital business strategy for
content providers in the social age. MIS Q. 37, 591–616 (2013)
56. Hochstein, A., Schwinn, A., Brenner, W.: Business opportunities with web services in the case
of Ebay. In: Proceedings of 47th Hawaii International Conference on System Sciences (2014)
57. Chen, P.-Y., Chou, Y.-C., Kauffman, R.J.: Community-based recommender systems:
analyzing business models from a systems operator’s perspective. In: Proceedings of 47th
Hawaii International Conference on System Sciences (2014)
58. Baumoel, U., Georgi, S., Ickler, H., Jung, R.: Design of new business models for service
integrators by creating information-driven value webs based on customers’ collective
intelligence. In: Proceedings of 47th Hawaii International Conference on System Sciences
(2014)
59. Loebbecke, C., Tuunainen, V.: Extending successful eBusiness models to the mobile internet:
the case of Sedo’s domain parking. In: Proceedings of AMCIS (2013)
60. Clemons, E.K., Madhani, N.: Regulation of digital businesses with natural monopolies or
third-party payment business models: antitrust lessons from the analysis of google. J. Manag.
Inf. Syst. 27, 43–80 (2010)
61. Niculescu, M.F., Wu, D.J.: Economics of free under perpetual licensing: implications for the
software industry. Inf. Syst. Res. 25, 173–199 (2014)
62. Deodhar, S.J., Saxena, K.B.C., Gupta, R.K., Ruohonen, M.: Strategies for software-based
hybrid business models. J. Strateg. Inf. Syst. 21, 274–294 (2012)
63. Giessmann, A., Kyas, P., Tyrvainen, P., Stanoevska, K.: Towards a better understanding of the
dynamics of platform as a service business models. In: Proceedings of 47th Hawaii
International Conference on System Sciences (2014)
64. Gordijn, J., Akkermans, H.: A conceptual value modeling approach for e-Business
development. In: Proceedings of KCAP 2001 Workshop WS2 Knowl. E-Bus, pp. 27–38
(2001)
65. Gordijn, J., Akkermans, H.: Designing and evaluating e-Business models. IEE Intell. Syst. 16
(4), 11–17 (2001)
66. Beath, C., Berente, N., Gallivan, M.J., Lyytinen, K.: Expanding the frontiers of information
systems research: introduction to the special issue. J. Assoc. Inf. Syst. 14, (2013)
67. Lambert, S.: A conceptual framework for business model research. In: Proceedings of BLED
(2008)
IS Governance, Agility and Strategic
Flexibility in Multi-approaches Based
Management Companies
1 Introduction
In a context where changes are perpetual and multidimensional, companies must
adapt quickly and consider this turbulent environment as an opportunity rather than
a threat. In order to grow, or even to survive, they need to increase their competi-
tiveness, improve their results and strengthen their agility and strategic flexibility.
Given the constantly changing environment, the ability of a company to change
direction quickly and reconfigure its strategy [27] is essential to achieving
a sustainable competitive advantage [20]. In other words, companies must have
strategic flexibility [21].
The ISO 9000 standards define the process approach as the representation of an
organization or project as a system of processes, in which each output of the organi-
zation is the result of a process and each activity of the organization should be
represented as a process.
In recent decades the process view has profoundly altered the way the organization
is represented [26], and "several process approaches have emerged" [5]. Indeed,
"Dealing with business processes for the development of organisations and man-
agers is a trend of growing popularity worldwide. Many different concepts, methods
and techniques have been elaborated over time" [17]. Moreover, most of the
activities of an organization (more than 90 % in some cases) can be described in
terms of processes [3].
Table 1 describes some of these approaches:
Corporate governance has been the subject of an abundant literature in
economics and management sciences since 1980 [25]. IT governance is the trans-
position of the principles of corporate governance to the IT level [15]. According to
COBIT [12], five basic principles characterize IT governance: strategic alignment
[19, 35], value delivery [11], performance measurement [16], resource management
[6, 7], and risk management [16].
Strategic flexibility reflects the ability of a company to respond continuously to
unanticipated changes and to adjust to the unexpected consequences of predictable
changes [24, 27]. A review of the literature defines agility as the ability to adapt
reactively to change [2, 9, 29]. Five principal categories characterize agility and
strategic flexibility, and can be summarized as the ability to adapt to change in a
reactive and adequate manner: responsiveness and reactivity [18, 31], competency
and employees' adaptability [8, 23], adaptability and re-configurability [28],
quickness and speed [2], and operational agility and a process-centric view [2, 29].
Each of the different process approaches has specific objectives, and these
approaches can be implemented simultaneously in a company. The literature
teaches us the benefits of each approach, but it has not asked what happens when
these approaches are implemented simultaneously in the same company, nor what
the impacts are on governance, agility and strategic flexibility.
134 M. Makhlouf and O. Allal-Chérif
In this research we conducted an in-depth case study, privileging participant
observation, in which the research is conducted within the company itself and
the status of the researcher is not highlighted.
Multiple sources of evidence were used, with triangulation across these sources
[37]. The case study is based on observation, interviews with the different actors of
the company, and exploratory, longitudinal and situational research within the
company. Beyond characterizing the existing situation, this made it possible to
characterize the company and the context in which it operates, from an organiza-
tional, business and process perspective, including interest and power games, and
to map in detail what exists.
A detailed "analytical analysis" [36] of all available documents and information
was also performed. This study involved analyzing specifications, reports, trade
figures, interest rates and indicators (quality, performance…), as well as a detailed
analytical analysis of the architecture, organization and processes in place and of
the approaches implemented in the different departments, together with an analysis
of the context of the ideas and methodologies and the identification of formal and
potentially re-usable documents, which made it possible to carry out a "transversal
analysis" [37].
Participant observation within this operator lasted two years, during which the
researcher had direct contact with employees from all of its departments as well as
with its key managers, conducted a hundred interviews with the different actors of
the company (lasting from one to three hours each), and ran twenty workshops with
the actors of the different process approaches. He also conducted an in-depth
documentary exploration of several tens of gigabytes of documentation, spread
over thousands of documents put at his disposal and related to the different aspects
of what goes on in this company.
3.1 History
After the construction phase, this company entered an extremely rapid growth
phase, which did not allow it to evaluate its practices. The priorities were to keep
up with growth by hiring massively and investing in infrastructure to increase
network capacity in order to serve an exponentially growing number of clients.
The information system is a tangle of applications added to one another to meet
constantly renewed needs. Indeed, each new requirement gives rise to new pro-
cesses or new applications. Added to this are the instability and volatility of the
organization, with endless reorganizations; even today there is at least one
reorganization each year.
After this rapid growth, the company became a very large group, and it was no
longer possible to manage such a large structure without processes and a strict
formalism. More and more entities were thus created with missions to consolidate
project portfolios, manage studies and pilot major information system projects,
improve the quality of services, streamline investments, manage resources, etc.
These missions were very often conducted redundantly in several departments:
the network, IS, services, customer service, and marketing and commerce
departments, and the administrative and financial department (DAF).
Within a few years, several departments of TELKOM launched process approach
implementation projects, described in Table 2:
The implementation of these process approaches in this company has brought many
benefits, and several of the expected objectives have been achieved. However, it
has also generated many problems. The following paragraphs detail some of these
benefits and problems.
3.3.1 Benefits
Among the benefits we can observe, or deduce from our thorough analysis and
observations, the implementation of the CMMI and Project Portfolio Management
approaches has facilitated the piloting of activities. Preparation, planning and
execution are more easily achievable, consolidations are more reliable, and
deliverables are of higher quality. The setting up of a project process has enabled
the definition of the project portfolio management life cycle: the steps of
preparation, planning and execution of the project portfolio are integrated into the
3.3.2 Problems
We will detail the most important problems concerning governance caused by the
simultaneous introduction of these approaches in TELKOM, grouping them
according to the five basic principles of IT governance: strategic alignment, value
delivery, performance measurement, resource management and risk management
(Table 3):
Table 3 (continued)
Governance Problems
Risk management • There is no global and unique view of the project portfolios showing
the objectives achieved, the risks managed, the value created, the deliverables
produced and the problems encountered
• "Support" for change management of the project portfolios is not available
• There is no formalized process for updating cost-based management data
• No visibility or supervision by the IS department over the hundreds of
applicative systems built outside the IS department without following the project
portfolio management process
• Important and continuous loss of business knowledge essential to the continuity
of the company's activity
• Important and growing psychosocial risks
We will detail the most important problems concerning agility and strategic flex-
ibility caused by the simultaneous introduction of these approaches in TELKOM,
grouping them into the five main categories that characterize agility and strategic
flexibility, namely responsiveness and reactivity, competency and adaptability of
employees, adaptability and re-configurability, quickness and speed, and, finally,
operational agility and a process-centric view, which can be summarized as the
ability to adapt to change in a reactive and adequate manner (Table 4):
4 Conclusion
The implementation of these process approaches within TELKOM has allowed
several objectives to be achieved and improvements to be obtained on several
fronts. But these approaches were implemented concurrently and in the absence of
a common and global vision. In this context, they very rapidly showed their limits,
and their superposition has caused several malfunctions and obstacles to the
optimal operation of the company and its information systems. The isolated and
concurrent application of these approaches has increased the complexity of the IS,
increased costs, and weakened the performance parameters.
Stepping back, one can observe that these implementations all try to follow
emerging organizational and technological currents while hardly considering the
goal of strategically aligning information systems with the business strategy, or
even the definition of that strategy. This is due to the lack of effort to conceptualize
and consolidate the different interpretations of the company's strategy in the field:
interpretation is favored while the consolidation of operational practices is
neglected. And since there is no feedback confronting the implementation of these
approaches with the company's strategy, this inevitably leads to problems of
strategic alignment, governance, agility and strategic flexibility.
To consolidate these results, the research presented here will be complemented by
other case studies. Furthermore, the analysis showed the potential value of a global
process approach to information governance, which will be the next step of this
research.
References
1. Alavi, M., Leidner, D.E.: Review: knowledge management and knowledge management
systems: conceptual foundations and research issues. MIS. Quart. 25(1), 107–136 (2001)
2. Almahamidi, S., Awwad, A., McAdams, A.C.: Effects of organizational agility and knowledge
sharing on competitive advantage: an empirical study in Jordan. Int. J. Manag. 27(3), 387–404
(2010)
3. Amaravadi, C.S., Lee, I.: The dimensions of process knowledge. Knowl. Process Manag. 12
(1), 65–76 (2005)
4. Arora, T., Nirpase, A.: Next generation business process management: a paradigm shift. In:
IEEE Congress on Services Part I, 6–11 July 2008, p. 81 (2008)
5. Baker, G., Maddux, H.: Enhancing organizational performance facilitating the critical
transition to a process view of management. Sam Adv. Manag. J. 7(4), 40–60 autumn (2005)
6. Barney, J.: Firm resources and sustained competitive advantage. J. Manag. 17(1), 99–120
(1991)
7. Bharadwaj, A.: A resource-based perspective on information technology capability and firm
performance: an empirical investigation. MIS. Quart. 24(1), 169–196 (2000)
8. Bhattacharya, M., Gibson, D.E.: The effects of flexibility in employee skills, employee
behaviors, and human resource practices on firm performance. Int. J. Manag. 31(4), 622–640
(2005)
9. Burgess, T.F.: Making the leap to agility: defining and achieving agile manufacturing through
business process redesign and business network redesign. Int. J. Oper. Prod. Manag. 14(11),
23 (1994)
10. Cooper, R., Kaplan, R.S.: Profit Priorities from Activity-Based Costing. Harv. Bus. Rev.
(2000)
11. Corbel, P., Denis, J.-Ph., Taha, R.: Systèmes d'information, innovation et création de valeur:
premiers enseignements du programme MINE France. Cigref, Cahier No 2 (2004)
12. Delavaux, J-P.: COBIT: La Gouvernance des TI et les processus – ANDSI. Association
Nationale des Directeurs de Systèmes d’Information, France (2007)
13. Deming, W.E.: Out of the Crisis. MIT Press, Cambridge (1986)
14. Feigenbaum, V.: Total Quality Control, 1st edn. McGraw-Hill, London (1951)
15. Florescu, V., Anica-Popa, L., Anica-Popa, I.: Governance of Information System and Audit.
BCAA (2007)
16. Florescu, V., Dumitru, V.: Problematique De La Gouvernance Du Systeme D’information.
Ann. Univ. Oradea Econ. Sci. Ser. 17(4), 1381–1386 (2008)
17. Gerndorf, K.: A process view of organisations: procedural analysis. TUTWPE No 143 (2006)
18. Goldman, S., Nagel, R., Preiss, K.: Agile competitors and virtual organizations. Van Nostrand
Reinhold Publishing, New York (1995)
19. Henderson J.C., Venkatraman N.: Strategic Alignment: A Model for Organizational
Transforming via Information Technology. Oxford University Press, New York (1993)
20. Hitt, M., Keats, B., DeMarie, S.: Navigating in the new competitive landscape: building
strategic flexibility and competitive advantage in the 21st century. Acad. Manag. Exec. 12(4),
22–43 (1998)
21. Johnson, J.L., Lee, R.P., Saini, A., Grohmann, B.: Market-focused strategic flexibility:
Conceptual advances and an integrative model. J. Acad. Mark. Sci. 31, 74–89 (2003)
22. Juran, J.M.: La qualité dans les services. AFNOR Gestion, Paris (1987)
23. Kidd, P.T.: Agile Manufacturing: Forging New Frontiers. Addison-Wesley, Wokingham
(1994)
24. Lei, D., Hitt, M.A., Goldhar, J.D.: Advanced manufacturing technology: organizational design
and strategic flexibility. Organ. Stud. 17, 501–523 (1996)
25. Martinet, A.-C.: Gouvernance et management stratégique., avr2008. Revue Française de
Gestion, avr2008 3(183), 95–110 (2008)
26. Morley, C., Bia, M., Gillette, Y.: Processus métiers et SI. Gouvernance, management et
modélisation, 3rd edn. Management des Systèmes d’Information, Dunod (2011)
27. Nadkarni, S. Herrmann, P.: Ceo personality, strategic flexibility, and firm performance: the
case of the indian business process outsourcing industry. Acad. Manag. J. 53(5), 1050–1073
(2010)
28. Pavlou, P.A., El Sawy, O.: From IT leveraging competence to competitive advantage in
turbulent environments: the case of new product development. Inf. Syst. Res. 17(3), 198–227
(2006)
IS Governance, Agility and Strategic Flexibility … 143
29. Raschke, R.L.: Process-based view of agility: the value contribution of IT and the effects on
process outcomes. Int. J. Account. Inf. Syst. 11(4), 297–313 (2010)
30. Reed, R., Lemak, D.J., Mero, N.P.: Total quality management and sustainable competitive
advantage. J. Qual. Manag. 5, 5–26 (2000)
31. Sharifi, H., Zhang, Z.: Agile manufacturing in practice: application of a methodology. Int.
J. Oper. Prod. Manag. 21(5-6), 772–794 (2001)
32. Silva, L., Hirschheim, R.: Fighting against windmills: strategic information systems and
organizational deep structures. MIS. Quart. 31(2), June 2007 (2006)
33. Smith, H., Neal, D., Ferrara L., Hayden, F.: The Emergence of Business Process Management,
CSC’S Research Services (2002)
34. Tao, Y., Zhu, G., Xu, Z., Liu B.: A research on bpm system based on process knowledge. In:
IEEE Conference on Cybernetics and Intelligent Systems, pp. 69–75, 21–24 Sept. 2008
35. Wilkin, C.L., Chenhall, R.H.: A review of IT governance: a taxonomy to inform accounting
information systems. J. Inf. Syst. 24(2), 107–146 Fall (2010)
36. Yin, R.K.: Applications of Case Study Research. Sage, Thousand Oaks (2003)
37. Yin, R.K.: Case Study Research Design and Methods. In: Applied social research methods
series. Sage (2009)
38. Zakuan, N.M., Yusof, S.M., Laosirihongthong, T.: Reflective review of relationship between
total quality management and organizational performance. In: 4th IEEE International
Conference on Management of Innovation and Technology, ICMIT 2008, p. 444, 21–24
Sept. 2008
Part II
ICT and Knowledge Management
Information, Technology, and Trust:
A Cognitive Approach to Digital Natives
and Digital Immigrants Studies
1 Introduction
Some studies suggest that the intense use of information and communication
technologies (ICTs) in the early years of a person’s life could contribute to the
development of peculiar behavioral habits and cognitive structures [7, 31, 35, 36,
44]. This circumstance presupposes the existence of a group of individuals who have had the chance to interact heavily with ICTs from the earliest stages of their lives, since they were born into a world permeated by these technologies. Those who had this possibility are referred to in the literature by
F. Marzo (&)
LUISS Guido Carli University, Rome, Italy
e-mail: fmarzo@luiss.it
A.M. Braccini
Dipartimento di Economia e Impresa (DEIm), Università degli Studi della Tuscia,
Viterbo, Italy
e-mail: abraccini@unitus.it
different names [49]. Tapscott [44] first described the net generation as the cohort of individuals who grew up in a digitalized world. Prensky uses the term digital natives [35, 36] to indicate those who were born into such a world, calling digital immigrants those who encountered ICTs only later in life. McMahon and Pospisil [31] instead describe Howe and Strauss' millennials [24] as individuals accustomed to interacting with technologies.
Despite the differences in the terms used by these authors, there appears to be a common ground among these profiles [6–8]: the frequent and intense interaction with ICTs that these individuals have had [49]. As reported by Vodanovich et al. [46], over their lives the natives have on average spent about 20,000 h online, using different kinds of transactional systems and decision support systems to collect information, establish social relationships, have fun, or cooperate with others.
The topic of digital natives has attracted considerable interest in the literature, but recently a wave of more critical studies has challenged some of the assumptions on which the concept of the digital native rests [7, 49]. Following several conceptual works, empirical evidence has revealed great internal variance in the characteristics of this generation [2], and further empirical investigation is necessary [23].
Given that technology influences organizational norms, values, and behavior [33, 47], investigating the digital native phenomenon is important both from the perspective of information systems [46] and from that of organizational behavior [6]. In this paper we propose to study these individuals (whom from this point on we will simply call digital natives) from a cognitive approach, to shed light on the trust and control dynamics that underpin digital natives' cooperative behavior in teams. To this end, we motivate and design an experiment-based research strategy. The paper is structured as follows. In Sect. 2 we discuss the literature on digital natives, and in Sect. 3 we describe the literature on trust and control. The experimental research strategy is described in Sect. 4 and discussed in Sect. 5. Section 6 concludes the paper with some final remarks.
The debate on digital natives centers on the pivotal assumption that the abundant presence of ICT in a person's life, from birth, might have allowed them to develop peculiar behavioral skills, habits, and norms, both in the way they use technology and in the way they interact and cooperate with other individuals [46, 49]. The literature describes a set of traits qualifying these natives that is not always consistent.
A first aspect to consider is the problem of age. Consistent with the understanding of a cohort as a group of individuals who share the same chronological traits [13, 38], the literature frequently identifies digital natives only according to their birthdate: whoever is born after a specific year is a digital native.
Adding to this, the habit of interacting with ICT tools could also have produced in them the need to be in control of the situations in which they find themselves [41]. When interacting with ICT, the user is usually in control of the software system being used. This habit and capability of controlling ICT systems is supposed to have left digital natives with the need to be able to control the outer environment. At the same time, being able to control such complex software systems has also induced in digital natives a sense of self-confidence [39] that may extend beyond technological aspects. In some cases this self-confidence becomes a sense of trust [49] that influences their relations with both technology and people.
Such a mixed and sometimes conflicting set of behavioural traits calls for an empirical investigation of how digital natives actually behave in teams and organisations. In particular, our work intends to gain insights into the balance/conflict between digital natives' sense of control and sense of trust, to see whether interaction with technology is significant in explaining a purportedly different behaviour. This conflicting dichotomy can be addressed by an empirical investigation based on a rich model of trust, one that has been addressed in several cognitive studies [17, 20, 37] and takes advantage of a quite complete cognitive analysis [16]. Moreover, such an insight seems a very promising starting point for empirical studies of digital natives, since it is their cognition that is under investigation.
behavioral economics field, to trust means to accept some risk and count on some other agent or process [4]. On these assumptions, trust has been defined as “the willingness of a party to be vulnerable to the actions of another party based on the expectation that the other party will perform a particular action important to the trustor, irrespective of the ability to monitor or control that other party” [30]. Although this definition captures a crucial aspect of the decision and action of trust, some important psychological aspects are missing. In order to include them in the concept and thus correctly model organizational trust, we need to integrate (i) some considerations about what the trustor believes about the trustee's internal attitudes, and (ii) a measure of the trustor's subjective propensity to accept uncertainty, risk, and ambiguity [1, 16]. These aspects, deeply dependent both on context and on subjective and cultural diversity, represent the core of the present work. Their crucial features will become clearer once the cognitive model of trust and control is presented.
induce Nick to do what she wants or to have him at her disposal: Eliza can pay for Nick's service, and this investment is a real bet on him [16]. Thus, we can say that when Eliza trusts Nick there are two risks: (a) the risk of failure, the frustration of her goal or of the entire plan, and (b) the risk of wasting her efforts and investments. Therefore, the act of trusting/reliance is a risky activity: it presupposes some uncertainty and it requires some predictability of the trustee's behavior.
This subjective perception of risk and this degree of trust can be due either to lack of knowledge, incomplete information, or a dynamic world, or to favorable and adverse probabilities.
When applied to a cognitive, intentional agent, the disposition belief must be supported by other beliefs: (1) a willingness belief (Eliza believes that Nick has decided and intends to do the action she requires; trust requires modeling the mind of the other) and (2) a persistence belief (Eliza should also believe that Nick is stable enough in his intentions, that he has no serious conflicts about them, and that he is not unpredictable; otherwise she might change her mind).
Trust can imply (either implicitly or explicitly) the subjective probability of the successful performance of a given behavior. It is on the basis of this subjective evaluation of risk that someone decides to rely on someone else. However, the final probability of the realization of the related goals should be decomposed into the probability that the trustee performs the required action, which derives from the probability of internal attribution (such as willingness, persistence, engagement, competence), and the probability of having the appropriate conditions (external attribution, including the absence of interference) [16].
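This decomposition can be sketched numerically. The snippet below is only an illustration of the idea (the function name and the concrete probability values are our own assumptions, not part of the model in [16]): the overall probability of goal realization is the product of an internally attributed probability and an externally attributed one.

```python
def success_probability(p_internal, p_external):
    """Probability that the trustor's goal is realized, decomposed into an
    internal attribution (the trustee's willingness, persistence, competence)
    and an external attribution (appropriate conditions, no interference)."""
    if not (0.0 <= p_internal <= 1.0 and 0.0 <= p_external <= 1.0):
        raise ValueError("probabilities must lie in [0, 1]")
    return p_internal * p_external

# A willing and competent trustee in an unreliable environment...
print(success_probability(0.9, 0.5))   # 0.45
# ...yields the same overall probability as a less trusted trustee in a
# safe environment, illustrating the internal/external complementarity.
print(success_probability(0.5, 0.9))   # 0.45
```

Under this reading, raising the reliability of the environment (external attribution) lowers the internal trust required to reach the same overall probability of success.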
Environmental and situational trust [15] are aspects of external trust. It is important to stress that when the environment and the specific circumstances are safe and reliable, less trust is necessary for delegation. Conversely, the stronger the trust relationship, the smaller the need for a safe and reliable environment and, hence, for external monitoring and authority. Therefore, we can account for a ‘complementarity’ between the internal and external components of trust. However, when trust is not there, something can replace it (e.g., surveillance, contracts, etc.). It is just a matter of different kinds, or better, facets of trust. From this perspective an important role is played by control.
Control can be considered a meta-action aimed both at ascertaining whether another action has been successfully executed, or whether a given state of the world has been realized or maintained (feedback, checking), and at dealing with possible deviations and unforeseen events in order to cope with them positively (intervention).
A perspective of duality between trust and control is very frequent and at least partially valid [15]. Control and normative remedies have been described as weak, impersonal substitutes for trust, or as functionally equivalent mechanisms, since to reach a minimum level of confidence in cooperation, partners can use trust and control to complement each other [40]. From the socio-cognitive perspective on trust, control is seen as antagonistic to strict trust (considered as just internal attribution): if there is trust there is no need for control. Instead, when we consider the broad form of trust, which includes both internal and external attribution, we can say that control can contribute to creating and increasing trust, as well as complete and complement it [16].
Building on this literature, our research project aims at investigating whether the trust and control predispositions of digital natives differ significantly from those of digital immigrants. A possible way to study the willingness to trust is to understand how people act when the possibility of controlling others' actions is represented by the possibility of punishing them [9]. Another process that can easily increase trust predisposition is the introduction of some form of insurance, so that the loss deriving from betrayal is significantly reduced [5]. We aim at testing both mechanisms in order to identify potential patterns in digital natives' need for control, on the one hand, and to discover possible differences in the forms of control they prefer to use, on the other. In other terms, we posit the existence of a relationship between two individuals, I and R, and we posit that this relationship involves trust and control dynamics between them. We therefore aim at answering the following research question:
RQ: Is the behavior of digital natives significantly different from that of digital immigrants?
P1: How digital natives and digital immigrants act when they are offered the possibility to control the action of the other individual through a form of punishment.
P2: How digital natives and digital immigrants act when they are offered the possibility of controlling the action of the other individual through a form of insurance.
P3: The existence of differences in the preferred forms of control between digital natives and digital immigrants.
The experiments we will run consist of a modified trust game [3]. In the trust game two different players are involved:
• I: the investor;
• R: the recipient.
In the trust game the investor I is endowed with a sum of money, which she can keep or invest with the recipient R. The decision to invest implies the existence of trust between I and R. The amount I decides to invest is tripled and sent to the trustee R, who then decides what fraction to return to the investor. Both players have different strategies to execute, and associated with each combination of strategies are different payoffs π. In our experiment the first mover I has the possibility to invest a sum of money ι by executing one of the following two strategies s_n ∈ S = (is_1, is_2), where:
• is_1 → I trusts R (I decides to invest);
• is_2 → I does not trust R (I decides not to invest).
If strategy is_1 is executed by the first mover I, the second mover R has the possibility to execute one of two subsequent strategies t_n ∈ T = (rt_1, rt_2), where:
• rt_1 → R trusts I;
• rt_2 → R does not trust I.
\[
\begin{aligned}
is_1 &\rightarrow
\begin{cases}
rt_1 \rightarrow\ \pi_i(is_1, rt_1) = b, \quad \pi_r(is_1, rt_1) = b\\
rt_2 \rightarrow\ \pi_i(is_1, rt_2) = l, \quad \pi_r(is_1, rt_2) = h
\end{cases}\\
is_2 &\rightarrow\ \pi_i(is_2) = m, \quad \pi_r(is_2) = m
\end{aligned}
\tag{1}
\]
If R executes rt_1, the invested amount of money is tripled (ι′ = 3ι). Following the choices of the two players, the subsequent payoffs π (h highest, b better, m moderate, and l lowest) are assigned to each player according to the combination of strategies executed (see Eq. 1). In all these scenarios we posit the following conditions:
\[
h > b > m > l, \quad \text{having} \quad (h + l) = (b + b)
\tag{2}
\]
The modification of the basic game consists in the fact that the first mover's expected value from trusting can be affected by decreasing (h → h⁻) the highest payoff the counterpart receives if he is a betrayer, and/or by increasing (l → l⁺) the lowest payoff she receives if her trust is betrayed. The former is the case of a punishment (which can be referred to as “securing revenge”), the latter is the case of an insurance (which we can consider as “securing protection”). Such a
choice shall be taken at the beginning of the game. The different alternatives along
with the payoffs of the game are graphically summarized in the tree diagram shown
in Fig. 1.
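The payoff rules of the game and the punishment/insurance modifications can be sketched in a few lines of code. This is a minimal illustration under assumed concrete values h = 6, b = 4, m = 3, l = 2, chosen by us only to satisfy h > b > m > l and h + l = b + b; they are not values prescribed by the experiment.

```python
# Illustrative payoff values: h (highest), b (better), m (moderate), l (lowest).
H, B, M, L = 6, 4, 3, 2
assert H > B > M > L and H + L == B + B   # condition of Eq. (2)

def payoffs(i_strategy, r_strategy=None, punishment=0, insurance=0):
    """Return the (investor, recipient) payoffs for one round of the game.

    `punishment` decreases the betrayer's highest payoff (h -> h-);
    `insurance` increases the betrayed investor's lowest payoff (l -> l+).
    """
    if i_strategy == "is2":            # I does not trust: no investment
        return (M, M)
    if r_strategy == "rt1":            # I trusts and R reciprocates
        return (B, B)
    if r_strategy == "rt2":            # I trusts and R betrays
        return (L + insurance, H - punishment)
    raise ValueError("unknown strategy combination")

print(payoffs("is2"))                       # (3, 3)
print(payoffs("is1", "rt1"))                # (4, 4)
print(payoffs("is1", "rt2"))                # (2, 6)
print(payoffs("is1", "rt2", punishment=2))  # (2, 4): "securing revenge"
print(payoffs("is1", "rt2", insurance=1))   # (3, 6): "securing protection"
```

The last two calls show how each modification shifts the first mover's expected value from trusting without altering the cooperative outcome (b, b).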
The subjects I and R involved in the experiments can be of two kinds, $k \in K = (nat, \overline{nat})$, where $k_{nat}$ indicates a digital native and $k_{\overline{nat}}$ indicates a non-digital native. The experiment will consist of several rounds of the game involving a mix of different subjects to cover all four possible combinations described below.
\[
A:\ I_{nat}\,R_{nat} \qquad
B:\ I_{nat}\,R_{\overline{nat}} \qquad
C:\ I_{\overline{nat}}\,R_{nat} \qquad
D:\ I_{\overline{nat}}\,R_{\overline{nat}}
\tag{3}
\]
We will profile natives and non-natives prior to their participation in the experiment through the use of a measurement scale [7] and a basic computer skills test. We will run several experiments with different groups of participants to ensure an equal number of observations for each of the four combinations described above. The final set of empirical evidence shall contain at least 300 observations for each group of participants. To increase the relevance of the study, as well as its validity, the experiments will aim at collecting evidence that is heterogeneous across the following dimensions: age, degree, and census. Furthermore, data will be collected in an international context, including subjects from countries other than Italy, so as to include in the analysis factors related to technology level and technology regulation. In the end, it will be possible to observe inter-group differences in the following total and average payoffs, again for each of the four possible combinations of subjects:
\[
\begin{aligned}
P_{ik} &= \sum \bigl( \pi_{ik}(is_1, rt_1) + \pi_{ik}(is_1, rt_2) + \pi_{ik}(is_2) \bigr)\\
P_{rk} &= \sum \bigl( \pi_{rk}(is_1, rt_1) + \pi_{rk}(is_1, rt_2) + \pi_{rk}(is_2) \bigr)\\
\mu(P_{ik}) &= \frac{\sum \bigl( \pi_{ik}(is_1, rt_1) + \pi_{ik}(is_1, rt_2) + \pi_{ik}(is_2) \bigr)}{n}\\
\mu(P_{rk}) &= \frac{\sum \bigl( \pi_{rk}(is_1, rt_1) + \pi_{rk}(is_1, rt_2) + \pi_{rk}(is_2) \bigr)}{n}
\end{aligned}
\tag{4}
\]
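These group-level totals and averages amount to summing each player's payoffs over the n rounds played within each subject combination. A hypothetical sketch follows; the round records below are invented purely for illustration and do not come from the study.

```python
from collections import defaultdict

# Hypothetical rounds: (combination, investor payoff, recipient payoff).
# The combination labels A-D encode the native/non-native pairing of I and R.
rounds = [
    ("A", 4, 4), ("A", 2, 6), ("A", 3, 3),
    ("B", 4, 4), ("B", 4, 4), ("B", 2, 6),
]

stats = defaultdict(lambda: [0, 0, 0])    # combination -> [P_i, P_r, n]
for combo, p_i, p_r in rounds:
    stats[combo][0] += p_i                # running total of investor payoffs
    stats[combo][1] += p_r                # running total of recipient payoffs
    stats[combo][2] += 1                  # number of rounds n

for combo in sorted(stats):
    total_i, total_r, n = stats[combo]
    # P_ik, P_rk, mu(P_ik), mu(P_rk) for this combination
    print(combo, total_i, total_r, total_i / n, total_r / n)
```

Inter-group differences would then be read off by comparing these totals and means across the four combinations.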
6 Conclusion
In this paper we have motivated, presented, and discussed a research project for an empirical investigation of digital natives' behavioral traits, specifically referring to trust and control dynamics, using a cognitive theoretical background. The aim of the paper is to design an experiment-based empirical study that might provide insight into psychological aspects whose dynamics might influence individuals' behavior in teams. The study described in this paper is framed within a wider research project that will involve a cross-methodological approach mixing qualitative and quantitative analysis, on one side, and experimental and on-field data collection, on the other. In this paper we laid the basis of this ambitious path by presenting the design of an experiment-based research strategy to study the trust and control of digital natives. This experiment, based on a modified version of the trust game, is intended as a first step in the research program. After data collection and analysis, further investigations will allow for deeper studies of what kinds of differences exist and how their dynamics work (i.e., how to possibly manipulate these dynamics to enhance team cooperation when digital natives are involved).
References
1. Basaglia, S., et al.: Team level antecedents of individual usage of a new technology. In: Proceedings of the 16th European Conference on Information Systems. Galway, Ireland (2008)
2. Bennett, S., et al.: The “digital natives” debate: a critical review of the evidence. Br. J. Educ.
Technol. 39(5), 775–786 (2008)
3. Berg, J., et al.: Trust, reciprocity, and social history. Games Econ. Behav. 10(1), 122–142
(1995)
4. Bohnet, I., Zeckhauser, R.: Trust, risk and betrayal. J. Econ. Behav. Organ. 55(4), 467–484
(2004)
5. Bohnet, I., et al.: The elasticity of trust: how to promote trust in the Arab Middle East and the
United States. In: Kramer, R.M., Pittinsky, T.L. (eds.) Restoring Trust in Organizations and
Leaders: Enduring Challenges and Emerging Answers. Oxford University Press, Oxford
(2012)
6. Braccini, A.M.: Does ICT influence organizational behaviour? An investigation of digital
natives leadership potential. In: Spagnoletti, P. (ed.) Organization change and Information
Systems—Working and Living Together In New Ways, pp. 11–19. Springer, Berlin (2013)
7. Braccini, A.M., Federici, T.: A measurement model for investigating digital natives and their
organisational behaviour. In: Proceedings of the 2013 International Conference on Information
Systems (ICIS 2013). Milano (2013)
8. Braccini, A.M., Federici, T.: Investigating digital natives and their organizational behavior: a
measurement model. In: Visintin, F., et al. (eds.) Organising for Growth: Theories and
Practices. CreateSpace Independent Publishing Platform, Udine (2014)
9. Brandts, J., Rivas, F.M.: On punishment and well-being. J. Econ. Behav. Organ. 72(3), 823–
834 (2009)
10. Brown, C., Czerniewicz, L.: Debunking the “digital native”: beyond digital apartheid, towards
digital democracy. J. Comput. Assist. Learn. 26(5), 357–369 (2010)
11. Cahill, T.F., Sedrak, M.: Leading a multigenerational workforce: strategies for attracting and retaining millennials. Front. Health Serv. Manag. 29(1), 3–16 (2011)
12. Carillo, K. et al.: An investigation of the role of dependency in predicting continuance
intention to use ubiquitous media systems: combining a media system perspective with
expectation-confirmation theories. In: Proceedings of the European Conference on Information
Systems (ECIS). Tel Aviv, Israel (2014)
13. Carlsson, G., Karlsson, K.: Age, cohorts and the generation of generations. Am. Sociol. Rev.
35, 710–718 (1970)
14. Castaldo, S., et al.: The meaning(s) of trust. A content analysis on the diverse conceptualizations of trust in scholarly research on business relationships. J. Bus. Ethics 96(4), 657–668 (2010)
15. Castelfranchi, C.: The role of trust and deception in virtual societies. In: Proceedings of the 34th Annual Hawaii International Conference on System Sciences, p. 8. IEEE Computer Society (2001)
16. Castelfranchi, C., Falcone, R.: Trust Theory: a Socio-Cognitive and Computational Model.
Wiley, Chichester (2010)
17. Castelfranchi, C., Falcone, R.: Social trust: a cognitive approach. In: Castelfranchi, C., Tan, Y.-H. (eds.) Trust and Deception in Virtual Societies, pp. 55–90. Kluwer Academic Publishers, Dordrecht (2001)
18. Cummings, L.L., Bromiley, P.: The organizational trust inventory. In: Trust in Organizations: Frontiers of Theory and Research, pp. 302–330. SAGE Publications, Thousand Oaks (1996)
19. Das, T.K., Teng, B.-S.: The risk-based view of trust: a conceptual framework. J. Bus. Psychol.
19(1), 85–116 (2004)
20. Finin, T., et al.: Information agents: the social nature of information and the role of trust. In: Klusch, M., Zambonelli, F. (eds.) Cooperative Information Agents V, pp. 208–210. Springer (2001)
21. Goleman, D.: What makes a leader? Harv. Bus. Rev. 82(1), 82–91 (2004)
22. Hargittai, E., Hinnant, A.: Digital inequality—differences in young adults’ use of the internet.
Commun. Res. 35(5), 602–621 (2008)
23. Helsper, E.J., Enyon, R.: Digital natives: where is the evidence? Br. Educ. Res. J. 36(3), 503–
520 (2010)
24. Howe, N., Strauss, W.: Millennials Rising: the Next Great Generation. Vintage, New York
(2000)
25. Keif, M., Donegan, L.: Recruiting Gen X and millennial employees to grow your business.
2006 forecast. Technol. Trends Tactics 18(1), 89–92 (2006)
26. Kupperschmidt, B.R.: Understanding net generation employees. J. Nurs. Adm. 31(12), 570–
574 (2001)
27. Luhmann, N.: Familiarity, confidence, trust: problems and alternatives. In: Gambetta, D. (ed.) Trust: Making and Breaking Cooperative Relations, electronic edn, pp. 94–107. Blackwell Publishers Ltd, Oxford (2000)
28. Margaryan, A., et al.: Are digital natives a myth or reality? University students’ use of digital
technologies. Comput. Educ. 56(2), 429–440 (2011)
29. Marzo, F., Castelfranchi, C.: Trust as individual asset in a network: a cognitive analysis. In:
Spagnoletti, P. (ed.) Organization Change and Information Systems, LNISO, vol. 2, pp. 167–
175. Springer, Heidelberg (2013)
30. Mayer, R.C., et al.: An integrative model of organizational trust. Acad. Manag. Rev. 20(3), 709–734 (1995)
31. McMahon, M., Pospisil, R.: Laptops for a digital lifestyle: millennial students and wireless mobile technologies. In: Proceedings of the Ascilite Conference, pp. 421–431. Brisbane (2005)
32. Oblinger, D.G., Oblinger, J.L.: Is it age or IT: first steps toward understanding the net
generation. In: Oblinger, D.G., Oblinger, J.L. (eds.) Educating the Net Generation, pp. 2.1–
2.20, North Carolina State University (2005)
33. Orlikowski, W.J., Robey, D.: Information technology and the structuring of organizations. Inf.
Syst. Res. 2(2), 143–169 (1992)
34. Pennarola, F., Caporarello, L.: Enhanced class replay: will this turn into better learning? In:
Wankel, C., Blessinger, P. (eds.) Increasing Student Engagement and Retention Using
Classroom Technologies: Classroom Response Systems and Mediated Discourse
Technologies, pp. 143–162. Emerald Group Publishing Ltd (2013)
35. Prensky, M.: Digital natives, digital immigrants. Horizon 9(5), 1–6 (2001)
36. Prensky, M.: Digital natives, digital immigrants, part II: do they really think differently?
Horizon 9(6), 1–9 (2001)
37. Falcone, R., et al.: A fuzzy approach to a belief-based trust computation. In: Falcone, R., Singh, M., Tan, Y.H. (eds.) Trust, Reputation, and Security: Theories and Practice, pp. 73–86. Springer, Heidelberg (2003)
38. Rhodes, S.: Age-related differences in work attitudes and behavior: a review and conceptual
analysis. Psychol. Bull. 93, 328–367 (1983)
39. Schewe, C.D., et al.: “If you’ve seen one, you’ve seen them all!” are young millennials the
same worldwide? J. Int. Consum. Mark. 25(1), 3–15 (2013)
40. Sitkin, S.B., Roth, N.L.: Explaining the limited effectiveness of legalistic “remedies” for
trust/distrust. Organ. Sci. 4(3), 367–392 (1993)
41. Smith, K.T.: Work-life balance perspectives of marketing professionals in generation Y. Serv.
Mark. Q. 31(4), 434–447 (2010)
42. Sorrentino, M., Niehaves, B.: Intermediaries in e-inclusion: a literature review. In: Proceedings
of the 43rd Hawaii International Conference on Information Systems (HICSS) (2010)
43. Spagnoletti, P. et al.: Exploring foundations for using simulations in IS research. In:
Proceedings of the 34th International Conference on Information Systems, pp. 1–15. Milan
(2013)
44. Tapscott, D.: Growing up Digital: The Rise of the Net Generation. McGraw-Hill, New York
(1998)
45. Vitari, C., Piccoli, G., Mola, L., Rossignoli, C.: Antecedents of IT dynamic capabilities in the
context of digital data genesis. In: Proceedings of the 20th European Conference on
Information Systems. Barcelona, Spain (2012)
46. Vodanovich, S., et al.: Digital natives and ubiquitous information systems. Inf. Syst. Res. 21
(4), 711–723 (2010)
47. Vom Brocke, J. et al.: Value assessment of enterprise content management systems: a
process-oriented approach. In: D’Atri, A. and Saccà, D. (eds.) Information Systems: People,
Organizations, Institutions, and Technologies, pp. 131–138. Physica-Verlag HD, Heidelberg
(2010)
48. Yadin, A.: Millennials and privacy in the information age: can they coexist? IEEE Technol. Soc. Mag. 31(4), 32–38 (2012)
49. Zimerman, M.: Digital natives, searching behavior and the library. New Libr. World. 113(3-4),
174–201 (2012)
When Teachers Support Students
in Technology Mediated Learning
Abstract This paper focuses on information technology adoption and use within the education sector. We have analyzed the impact on learning effectiveness of technology-mediated learning environments, characterized by the adoption of tablet-based technologies as a revolutionary complement to traditional teaching/learning techniques. Our research analyzes the effect of “Support Activities” on grades. “Support Activities” are defined in this paper as the set of constructs comprising “Teachers' Encouragement”, “Classmates' Encouragement” and “Technical Support Availability”. Grades are used as a measure of learning effectiveness. A sample of 370 students participated in our study; they attended experimental classes that used tablets as an ordinary working tool to access digital resources. Our main theoretical reference was built on the foundations of the Technology Acceptance Model, comparing the perceived effect of those constructs between grade ranges. Finally, the experimental sample was compared to classes where the same teachers used traditional learning resources. The aim of this work is to give a practical understanding of the support factors influencing tablet-mediated learning effectiveness. In particular, our findings show differences between scientific and humanistic subjects. Our research confirms that technology alone does not revolutionize teaching and learning; nonetheless, it contributes to an improved experience if support initiatives are deployed.
The authors are grateful to Impara Digitale, the non-profit association that authored the experimental teaching and learning environment described in the paper.
1 Introduction
The traditional innovation model in computer-related domains was ruled for years by military forces, research centers, and big corporations, which were the exclusive actors pushing the technological frontier. They did so by directing huge amounts of R&D money to products and projects dedicated to corporate customers. If a potential market was envisioned, and if a version adapted to mass consumption could be manufactured, the technological innovation could later diffuse among individual consumers. There is plenty of historical evidence of these processes. For example, the Defense Advanced Research Projects Agency (DARPA), an agency for military technology research, built the precursor of the Internet in the 1960s. Similarly, the Personal Computer is an adaptation and evolution of previously existing mainframes. More recently, accelerometers, today commonly embedded in smartphones, originated in military research.
Nowadays, we are experiencing an important paradigm change, towards a model in which the direction of innovation has reversed. Today's innovative devices were originally born to fulfill individual consumers' needs. This process is also referred to as the “consumerization of IT”. One firm that contributed to this paradigm shift above all others is Apple. In this work we focus on the educational sector, which is greatly influenced by this consumerization wave. We argue that its activities are positioned at the intersection between strictly personal use and work-specific application. In fact, schools act as the bridge leading students to the world of work, and at the same time they may profit from widely adopted technologies by leveraging the diffused practice of Bring Your Own Device (BYOD). What happens when students bring their tablets to school? Could the teaching/learning environment be revolutionized thanks to this consumer-oriented technology?
While there have been past initiatives on ICT in education, they were limited to the introduction of digital devices and isolated competences within the learning sector. Not enough attention was paid to integration and support actions. Devices were placed in separate classrooms, and competences were concentrated in a minority of teachers in the scientific areas. As a result, until the late 1990s, IT was considered an optional (and perhaps superfluous) extension of certain learning activities [1], even while attention to students' learning and satisfaction was considered pivotal [2–4].
In Italy, according to the "Growth 2.0" Decree (also known as the "E-textbooks Law"), starting with school year 2014–2015 all schools are supposed to adopt digital books, or at least to mix traditional sources with digital ones [5–7]. The real point, however, is how to leverage this opportunity. A pioneering group of high schools in the country anticipated the wave by launching an experimental teaching/learning project in 2010. The idea was to ask students, and their respective families, to redirect the budget dedicated to textbooks to the purchase of a tablet, on either the iOS or the Android platform. Under this agreement, the school would train faculty to teach by leveraging the tablet and digital resources in the classroom, guaranteeing the same learning results, as long as all the students in the class carried their tablets to school every day.
When Teachers Support Students in Technology Mediated Learning 163
Our research project started from an alarming skepticism: is there any risk that all of this technology deployment will result in little or no use? Despite the advancements in technology and the increasing investments in its adoption, the problem of unutilized systems is serious. Studies of this trend often call it the "productivity paradox", as breakthroughs in information technology have been accompanied by poor productivity growth [8–10]. This calls for a better understanding of the deployment of technology in organizations and its user acceptance. Since the early 1990s, a new branch of literature has emerged based on the Technology Acceptance Model. The structure of the data analyzed in this work was designed on the basis of the Technology Acceptance Model (hereafter TAM). TAM is an information systems theory introduced by Davis in 1989 [16]; its major extensions are TAM2 [11, 12] and the Unified Theory of Acceptance and Use of Technology (hereafter UTAUT) [13]. A TAM3 [14] and a UTAUT2 [15] have also been proposed. The core concept of TAM is that a number of factors influence how people react to, and therefore "accept", a new technology. In the original version of TAM those forces are:
• Perceived Usefulness, described as the perceived job performance enhancement
due to the use of a particular system [16];
• Perceived Ease of Use, described as the perceived degree to which a person
finds using a particular system free from effort [16];
• External Variables, or "External Stimulus": system design features and all other external variables that may influence users' perceptions of the system.
This theory is widely accepted and consolidated. Many scholars have provided empirical evidence of the model's validity and reliability via replications and re-examinations [17–21]. Because of its reliable foundations, the model has been extended many times to explain the effect of other factors on technology acceptance. The extended TAM2 model [12] found that user acceptance was significantly influenced by both social influence and cognitive instrumental processes in mandatory settings. Social influence comprises subjective norm, voluntariness and image (i.e., social status). The cognitive instrumental processes determining perceived usefulness are job relevance, output quality, result demonstrability and perceived ease of use. Psychological and social components thus began to gain increasing importance in this technical and technological field. In the proposed unifying theory, or UTAUT [13], social influence constructs were found significant in mandatory settings only. Moreover, the determinants of intention varied over time, with some shifting from significant to not significant as experience increased. Performance expectancy, effort expectancy, social influence, and facilitating conditions were found to be direct determinants of user acceptance and technology use. In [13], facilitating conditions are described as the perceived support of the organizational and technical infrastructure for the use of information systems. This concept of facilitating resources, as perceived existence
164 L. Caporarello et al.
the designer of the system is definitely contributing to something that has a very
high social value, i.e. better learning for young generations.
In 2010 a two-year pilot experiment started in selected classes of one Italian high school. The experiment rapidly spread over the country, and a network of 14 participating institutions had been gathered by the beginning of school year 2012/2013. Each school proposed one or more of its classes (average size of 25 students): students were asked to buy their own tablet, as a substitute for textbooks, and bring it to school every day. Teachers were trained to restructure their teaching syllabus in order to leverage digital resources, by accessing, among other sources, a centralized database of certified publicly available materials on all subjects taught (i.e., mathematics, Italian literature, history, physics, chemistry, biology, music, etc.). A constructivist learning approach was used to design the whole learning calendar: students were asked to learn and interact in teams and individually, supported by their teachers. It is important to remark that in Italy the single class is a strong organizational unit. In fact, the student group stays the same not only throughout the day, but also over the whole school cycle (a five-year term for high school). Similarly, the group of teachers follows the class throughout its entire cycle. Regular tests were held during the school year, as with traditional (textbook-based) classes in the respective institutions, and each student received grades and feedback. Each school appointed a control sample, i.e., one or more class units taught with traditional teaching and learning methods by the same faculty body as the experimental class. This allowed for a close comparison that controls for teachers' methods and grading policy. While the resources and tools are different, the studied contents are the same. After data cleaning, our valid dataset contains an experimental sample of 370 students in 21 classes at 9 different high schools.
Each student participating in the study was profiled anonymously (his/her identity was hidden behind a numeric code) and filled out an entry questionnaire (beginning of the school year) and an exit questionnaire (end of the school year). The questionnaires were built around the TAM described earlier. Thirteen constructs were identified, and every survey question is linked to the measurement of one construct1. Each school's registrar provided the whole grade record (all the subjects studied) for each student participating in the study.
The ultimate aim of the study was to inquire into the effectiveness of the experimental teaching and learning methods: did the analyzed technology-mediated learning method favor better learning compared to traditional teaching approaches? What are its direct and indirect benefits? More precisely, our research questions are listed below:
1
They are: (1) Perceived Usefulness of technology, (2) Perceived Ease of Use, (3) Attitude: Satisfaction, (4) Attitude: Preference, (5) Intention to Use, (6) Perceived Advantage of technology, (7) Perceived Teachers' Encouragement, (8) Perceived Classmates' Encouragement, (9) Awareness of True Technology Potential, (10) Internet Access, (11) Technical Support, (12) Previous Experience with Internet and computers, (13) Self-Efficacy in the use of the Internet.
Hp1. Students perceive technology as useful, but they do not sense a comparative advantage relative to books unless they receive effective encouragement from their teachers, who help them use the technology as a real tool for their studies.
To test the first hypothesis we ran a regression with performance as the dependent variable and the constructs as the independent variables. In particular, we used all of the constructs except Intention to Use, Awareness of True Technology Potential, and Self-Efficacy in using the computer and the Internet.
The first model runs as follows:
(1) Grades = a + b1 (Usefulness) + b2 (Ease) + b3 (Satisfaction) + b4
(Preference) + b5 (Advantage) + b6 (Teachers) + b7 (Classmates) + b8
(Internet) + b9 (Support) + b10 (Experience)
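As an illustration of how a model of this form can be estimated, the sketch below fits the ten-predictor specification by ordinary least squares on synthetic data. Everything here is hypothetical: the construct scores, the noise level, and the data-generating coefficients are invented (their signs and rough sizes only loosely follow the reported findings), since the study's survey data are not public.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 370  # sample size reported in the paper

# Hypothetical construct scores on a 1-5 Likert-type scale (illustrative only).
names = ["Usefulness", "Ease", "Satisfaction", "Preference", "Advantage",
         "Teachers", "Classmates", "Internet", "Support", "Experience"]
X = rng.uniform(1, 5, size=(n, len(names)))

# Simulate grades with effects shaped like the findings: positive Usefulness
# and Teachers effects, negative Advantage effect (coefficients are invented).
beta_true = np.array([0.26, 0.0, 0.0, 0.0, -0.55, 0.46, 0.16, 0.0, 0.0, 0.0])
grades = 6.0 + X @ beta_true + rng.normal(0, 0.5, n)

# OLS estimate of: Grades = a + b1*(Usefulness) + ... + b10*(Experience)
A = np.column_stack([np.ones(n), X])  # prepend an intercept column
coef, *_ = np.linalg.lstsq(A, grades, rcond=None)

for name, b in zip(["Intercept"] + names, coef):
    print(f"{name:12s} {b:+.3f}")
```

With this many observations the estimated coefficients recover the simulated signs, which is the pattern the hypotheses examine: a positive Usefulness effect, a negative Advantage effect, and a positive Teachers' Encouragement effect.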
Results from the regression are shown in the table below:
Thus, it can be inferred that each part of the first hypothesis is confirmed:
1a. Perceived Usefulness has a positive (coeff. = 0.261), significant (p-value = 0.019) effect on students' performance.
1b. Perceived Advantage of Technology has a negative (coeff. = −0.545), significant (p-value = 0.000) effect on students' performance.
1c. Teachers' Encouragement has a positive (coeff. = 0.462), significant (p-value = 0.000) effect on students' performance.
The second hypothesis is also confirmed by this regression:
2. Classmates' Encouragement has a positive (coeff. = 0.160) but only marginally significant (p-value = 0.075) effect on students' performance.
(continued)

Model        B (unstd.)   Std. error   Beta (std.)   t-ratio   p-value
Preference   −0.032       0.131        −0.036        −0.243    0.808
Advantage    −0.490       0.118        −0.547        −4.136    0.000
Teachers     0.247        0.112        0.315         2.219     0.029
Classmates   0.027        0.099        0.031         0.271     0.787
Internet     0.064        0.087        0.092         0.739     0.462
Support      −0.070       0.110        −0.092        −0.635    0.527
Experience   0.027        0.076        0.038         0.348     0.729
The first sub-sample was composed only of top students. These results prove the fourth hypothesis true in each of its parts:
4a. Perceived Usefulness has no significant effect (p-value = 0.182) on top students' performance.
4b. Top students do not perceive a comparative advantage of technology relative to books (Advantage coeff. = −0.490, p-value = 0.000).
4c. Teachers' Encouragement has a positive (coeff. = 0.247), significant (p-value = 0.029) effect on top students' performance.
The same regression was run on the second sub-sample, composed of low-performing students:
Results show that the fifth hypothesis holds true in each of its parts:
5a. Perceived Usefulness has a positive (coeff. = 0.182), significant (p-value = 0.047) effect on the performance of low-performing students.
5b. Perceived Advantage of technology has a negative (coeff. = −0.182), significant (p-value = 0.042) effect on the performance of low-performing students.
5c. Teachers' Encouragement has a positive (coeff. = 0.218), significant (p-value = 0.007) effect on low-performing students' performance.
5d. Previous experience in the use of technology has a positive (coeff. = 0.127) but only marginally significant (p-value = 0.072) effect on low-performing students' performance.
Hypotheses 5a and 5b together imply that low-performing students perceive the technology to be useful, but do not feel a real advantage of technology compared to books. The key role of teachers' encouragement is clearly confirmed. Since teachers' encouragement has proven to be a key variable in almost every analysis conducted so far, the original sample was divided into two sub-samples on the basis of perceived teachers' encouragement:
(continued)

Construct                  Group   Mean    Significance
Comparative advantage      1       2.887   0.000
                           2       2.594
Classmates encouragement   1       3.928   0.000
                           2       2.788
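A group comparison of this kind corresponds to a standard two-sample test on the construct means. The sketch below computes a Welch t-test on synthetic Likert-type scores; the group sizes and standard deviations are hypothetical, and only the two group means loosely mimic the classmates' encouragement values reported above (3.928 vs. 2.788).

```python
import numpy as np
from math import erf, sqrt

def welch_t(a, b):
    """Welch two-sample t statistic with a normal-approximation p-value
    (adequate for the large group sizes used here)."""
    var_a = a.var(ddof=1) / len(a)
    var_b = b.var(ddof=1) / len(b)
    t = (a.mean() - b.mean()) / sqrt(var_a + var_b)
    p = 2 * (1 - 0.5 * (1 + erf(abs(t) / sqrt(2))))
    return t, p

rng = np.random.default_rng(1)
# Hypothetical scores for students perceiving high vs. low teachers'
# encouragement (group sizes and spread are invented for illustration).
high_group = rng.normal(3.9, 0.8, 180)
low_group = rng.normal(2.8, 0.8, 190)

t, p = welch_t(high_group, low_group)
print(f"t = {t:.2f}, p = {p:.4f}")
```

A difference of about one Likert point between groups of this size yields a very large t statistic, consistent with the 0.000 significance levels reported in the table.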
(continued)

Model           B (unstd.)   Std. error   Beta (std.)   t-ratio   p-value
Experience      0.098        0.077        0.072         1.262     0.208
Intention       0.208        0.105        0.152         1.982     0.048
Potential       −0.078       0.074        −0.065        −1.057    0.291
Self-Efficacy   −0.185       0.090        −0.130        −2.056    0.041
References
1. Avvisati, F., Hennessy, S., Kozma, R.B., Vincent-Lancrin, S.: Review of the Italian Strategy
for Digital Schools, OECD Education Working Papers, 90 OECD Publishing. http://dx.doi.
org/10.1787/5k487ntdbr44-en (2013)
2. Dejaeger, K., Goethals, F., Giangreco, A., Mola, L., Baesens, B.: Gaining insight into student
satisfaction using comprehensible data mining techniques. Eur. J. Oper. Res. 218(2), 548–562
(2012)
3. North-Samardzic, A., Braccini, A.M., Spagnoletti, P., Za, S.: Applying media synchronicity
theory to distance learning in virtual worlds : a design science approach. Int. J. Innov. Learn.
15(3), 328–346 (2014)
4. Spagnoletti, P., Resca, A.: A design theory for IT supporting online communities. In:
Proceedings of the 45th Hawaii International Conference on System Sciences, pp. 4082–4091
(2012)
5. Sorrentino, M., De Marco, M.: Implementing e-government in hard times: when the past is
wildly at variance with the future. Inf. Polity 18(4), 331–342 (2013)
6. Mosconi, E.M., Silvestri, C., Poponi, S., Braccini, A.M.: Public policy innovation in distance
and on-line learning: reflections on the italian case. In: Spagnoletti, P. (ed.) Organizational
Change And Information Systems—Working and Living Together in New Ways, pp. 381–
389. Springer, Berlin (2013)
7. Ruggieri, A., Mosconi, E.M., Poponi, S., Braccini, A.M.: Strategies and policies to avoid
digital divide: the italian case in the european landscape. In: Mola, L., Pennarola, F., Za, S.
(eds.) From Information to Smart Society—Environment. Springer, Politics and Economics
(2014)
8. Brynjolfsson, E.: The productivity paradox of information technology. Commun. ACM 36
(12), 66–77 (1993)
9. Devaraj, S., Kohli, R.: Performance impacts of information technology: Is actual usage the
missing link? Manage. Sci. 49(3), 273–289 (2003)
10. Venkatesh, V., Goyal, S.: Expectation disconfirmation and technology adoption—polynomial
modeling and response surface analysis. MIS Q. 34(2), 281–303 (2010)
11. Venkatesh, V., Davis, F.D.: A theoretical extension of the technology acceptance model: four longitudinal field studies. Manage. Sci. 46(2), 186–204 (2000)
12. Venkatesh, V.: Determinants of perceived ease of use: integrating control, intrinsic motivation and emotion into the TAM. Inf. Syst. Res. 11(4), 342–365 (2000)
13. Venkatesh, V., Morris, M.G., Davis, G.B., Davis, F.D.: User acceptance of information
technology—toward a unified view. MIS Q. 27(3), 425–478 (2003)
14. Venkatesh, V., Bala, H.: Technology acceptance model 3 and a research agenda on
interventions. Decis. Sci. 39(2), (2008)
15. Venkatesh, V., Thong, J.Y.L., Xu, X.: Consumer acceptance and use of information
technology—extending the unified theory of acceptance and use of technology. MIS Q. 36(1),
157–178 (2012)
16. Davis, F.D.: Perceived usefulness, perceived ease of use, and user acceptance of information
technology. MIS Q. 13(3), 319–340 (1989)
17. Adams, D.A., Nelson, R.R., Todd, P.A.: Perceived usefulness, ease of use, and usage of
information technology: a replication. MIS Q. 16, 227–247 (1992)
18. Hendrickson, A.R., Massey, P.D., Cronan, T.P.: On the test-retest reliability of perceived
usefulness and perceived ease of use scales. MIS Q. 17, 227–230 (1993)
19. Segars, A.H., Grover, V.: Re-examining perceived ease of use and usefulness: a confirmatory
factor analysis. MIS Q. 17, 517–525 (1993)
20. Subramanian, G.H.: A replication of perceived usefulness and perceived ease of use
measurement. Decis. Sci. 25(5/6), 863–873 (1994)
21. Szajna, B.: Software evaluation and choice: predictive evaluation of the Technology
Acceptance Instrument. MIS Q. 18(3), 319–324 (1994)
22. Brown, S.A., Venkatesh, V.: Model of adoption of technology in households: a baseline model
test and extension incorporating household life cycle. MIS Q. 29(3), 399–426 (2005)
23. Davis, F.D., Bagozzi, R.P., Warshaw, P.R.: Extrinsic and intrinsic motivation to use
computers in the workplace. J. Appl. Soc. Psychol. 22, 1111–1132 (1992)
24. Venkatesh, V.: Creation of favorable user perceptions- exploring the role of intrinsic
motivation. MIS Q. 23(2), (1999)
25. Nicholson, J., Nicholson, D., Valacich, J.S.: Examining the effects of technology attributes on
learning—A contingency perspective. J. Inf. Technol. Educ. 7 (2008)
26. Hu, P.J., Hui, W.: Examining the role of learning engagement in technology-mediated learning
and its effects on learning effectiveness and satisfaction. Decis. Support Syst. 53, 782–792
(2012)
27. Bretz, R.D., Judge, T.A.: Realistic job previews: a test of the adverse self-selection hypothesis.
J. Appl. Psychol. 83, 330–337 (1998)
28. Taylor, S., Todd, P.A.: Understanding information technology usage: a test of competing
models. Inf. Syst. Res. 6(2), 144–176 (1995)
29. Bergeron, F., Rivard, S., Serre, L.: Investigating the support role of the information center.
MIS Q. 14(3), 247–260 (1990)
30. Cragg, P., King, M.: Small-firm computing: motivators and inhibitors. MIS Q. 17(1), 47–60
(1993)
31. Harrison, D.A., Mykytyn, P.P., Riemenschneider, C.K.: Executive decisions about adoption of
information technology in small business: theory and empirical tests. Inf. Syst. Res. 8(2), 171–
195 (1997)
32. Venkatesh, V., Davis, F.D.: A model of the antecedents of perceived ease of use: development
and test. Decis. Sci. 27, 451–481 (1996)
33. Karahanna, E., Straub, D.W., Chervany, N.L.: Information technology adoption across time: a
cross-sectional comparison of pre-adoption and post- adoption beliefs. MIS Q. 23, 183–213
(1999)
34. Bhattacherjee, A., Premkumar, G.: Understanding changes in belief and attitude toward information technology usage: a theoretical model and longitudinal test. MIS Q. 28, 229–254 (2004)
35. Bhattacherjee, A.: Understanding information systems continuance: an
expectation-confirmation model. MIS Q. 25, 351–370 (2001)
36. Rai, A., Lang, S., Welker, R.: Assessing the validity of IS success models: an empirical test
and theoretical analysis. Inf. Syst. Res. 13, 50–69 (2002)
37. Delone, W.H., McLean, E.R.: The DeLone and McLean model of information systems
success: a ten year update. J. Manage. Inf. Syst. 19(4), 60–95 (2003)
38. Alavi, M., Leidner, D.E.: Research commentary: technology-mediated learning—a call for
greater depth and breadth of research. Inf. Syst. Res. 12(1), 1–10 (2001)
39. Casalino, N., Buonocore, F., Rossignoli, C., Ricciardi, F.: Transparency, openness and
knowledge sharing for rebuilding and strengthening government institutions. In: IASTED
Multiconferences-Proceedings of the IASTED International Conference on Web-Based
Education, WBE 2013, pp. 866–871 (2013)
41. Zardini, A., Mola, L., vom Brocke, J., Rossignoli, C.: The shadow of ECM: the hidden side of
decision processes. In: Respício, A., Adam, F., Phillips-Wren, G., Teixeira, C., Telhada,
J. (eds.) Bridging the Socio-technical Gap in Decision Support Systems, 212, pp. 3–12, IOS
Press, Amsterdam, Holland (2010)
How Do Academic Spin-off Companies
Generate and Disseminate Useful Market
Information Within Their Organizational
Boundaries?
1 Introduction
difficulties that such firms face in designing and implementing the appropriate policies, routines and organizational arrangements that are necessary to convert technological knowledge into successful products and, therefore, in the commercial exploitation of technological competences [3]. Indeed, in high-tech environments, firms often erroneously believe in the superiority of their technological solutions, assuming that the quality of their high-tech products should be sufficient to encourage customers to prefer and acquire their products over those of their competitors.
In this sense, small high-tech firms suffer from different types of "myopia", which lead them to suppose that: (a) their technologies are radically new and do not face any competition; (b) the technologies commercialized by competitors do not represent a great threat; (c) competitors operate in different sectors and their strategies do not have any relevant impact on their businesses [3].
By contrast, in order to convert a potential technological superiority into a competitive advantage, high-tech firms should be able to integrate their technological capabilities with the adequate marketing capabilities needed to understand customers' needs, to assess competitors' assets and to realize effective strategic actions. In other words, they should adopt and implement the necessary market orientation [4–6]. The definition and implementation of these monitoring and analytical activities, however, require the development of specialized resources, competences and capabilities focused on the generation of relevant market information, its diffusion within the firm's boundaries, and its integration among the firm's functions [6]; that is, information related to key customers, competitors and other market forces.
Against this background, the aim of this study is to assess whether and how small high-tech firms engage in the articulated activities of information generation, diffusion and integration, and which factors make the definition and implementation of such activities difficult. To perform this analysis, we focus on the specific category of small high-tech firms represented by academic spin-off firms. As a matter of fact, precisely because academic spin-off firms originate from research conducted within universities, many entrepreneurs are more focused on the technological/technical aspects of their innovations than on commercial aspects [7, 8]. Gathering and analyzing the necessary market information are thus particularly critical tasks for these firms, precisely because they operate in high-tech markets characterized by uncertain environmental conditions, and therefore need to excel not only at generating new innovations, but also at commercializing such innovative solutions.
In order to address our research question, we performed in-depth interviews with academic spin-off managers. We thus assess both how information related to customers, competitors and other market forces is collected, examined, integrated, disseminated and employed by them to make marketing decisions, and which obstacles impede a wider implementation of these activities among this category of firms.
2 Theoretical Background
Academic spin-off firms represent a concrete answer to the desire to exploit and transfer technological knowledge, grounded in scientific discoveries and explorations, which can consequently be incorporated into new products and services [9]. More specifically, an academic spin-off is a new company "that is formed by a faculty, staff member, or doctoral student who left the university or research organization to found the company or start the company while still affiliated with the university, and/or a core technology (or idea) that is transferred from the parent organization" [10]. Several contributions in the managerial literature have examined different dimensions of academic spin-off companies: the abilities, competences, motivational and personal characteristics of academic entrepreneurs and/or team formation, underlining mainly their high propensity for independence and their low need for affiliation [11–14]; the real impact of universities' policies and procedures on commercialization activities [15]; the external factors (e.g., knowledge infrastructures, national legislation, venture capital, etc.) that impact both the university's and the spin-off's activities [16]; and, finally, the growth and business performance of academic spin-offs [17, 18]. With regard to the latter, scholars have highlighted that academic spin-offs, similarly to other high-technology companies, exhibit a low rate of growth in terms of sales, cash flows and employees. They also show a lower likelihood of obtaining profits [1, 2].
Among the reasons for the problems related to their growth and competitiveness, two causes have been identified: (a) the emphasis on the technological aspects of the innovations generated within universities, and (b) the lack of a managerial approach to generating, analysing and disseminating the necessary information related to external market forces, beyond a general confusion about the role of marketing in their organizations. Therefore, academic spin-off companies have several difficulties in adopting, developing and implementing the effective marketing strategies, policies and tools that are necessary to identify their profitable market segments, to commercialize their innovative high-tech products/services, to properly position their value propositions, to outperform their competitors and, finally, to maximize their rate of success [3]. Overall, previous studies have thus revealed that academic spin-off companies need to go beyond their technological innovations and develop a market orientation, which concretely implies "gathering, sharing and using information about market (customers, competitors, collaborators, technology, trends, etc.) to make decisions that lead to the creation of superior customer value" [3].
The concept of market orientation has received growing interest from scholars, who have extensively debated its theoretical and practical implications [6, 19, 20]. Although the streams of research in this field are diverse [6, 20], in this study Kohli and Jaworski's conceptualization of market orientation is adopted, defined as the "organization-wide generation of market intelligence pertaining to customers, competitors, and forces affecting them, internal dissemination of the intelligence, and reactive as well as proactive responsiveness to the intelligence"
182 T. Abbate and F. Cesaroni
3 Research Design
study [27, p. 29]. Furthermore, the evidence from this approach "is often considered more compelling" [26, p. 45].
The selected cases were four academic spin-off companies that operate in different sectors (e.g., ICT services and materials engineering) and are located in two different countries (Italy and Spain). Although they were created between 2002 and 2006, the selected spin-offs are still small firms, with 5–15 employees. Companies founded after 2006 were not included in this study, because some relevant aspects characterizing market orientation and its main dimensions cannot be investigated in such young companies.
The data were gathered with multiple methods, using a triangulated research strategy, which implies the use of different types of materials, methods and investigators in the same study [28]. The primary data source consists of in-depth face-to-face interviews with the entrepreneurs and/or marketing managers of the four firms, exploring specific aspects related to the market orientation construct, such as the motivations that drove the implementation (or non-implementation) of a market orientation, the difficulties linked principally to the definition and development of its main dimensions and, finally, the relationship between market orientation and business performance.
We performed semi-structured interviews with entrepreneurs and marketing managers because in these companies they are the key informants and respondents, as they have detailed information about the companies' operations and conditions [29]. Each manager received an email explaining the general purpose of the study. We conducted four interviews between March and April 2014, each lasting approximately 2 h. The interviews followed the traditional methodological prescriptions on data collection through personal interviews [30].
To complement the primary data, we collected information from secondary sources, mainly Internet documents, such as publicly available information from company websites, reports on the firms' business activities, case histories, observations, official documents and published interviews.
4 Results
Our analysis highlights critical aspects concerning how academic spin-off companies adopt a market orientation, and how they generate, disseminate and integrate information on customers' needs, competitors' strategies and the competitive environment within their organizations for use in making marketing decisions.
Firstly, some of the academic entrepreneurs interviewed recognize the importance of defining, conceptualizing and operationalizing marketing activities oriented to understanding the principal characteristics of their customers, their main requirements and their preferences. In this respect, a marketing manager underlines that:
“although in many high-tech environments needs, requirements and expectation of
customers evolve more rapidly over time, the necessity to analyse and to understand
them is critical for developing and commercializing products/services with the right
set of features that meet and satisfy customer needs in a fascinating way. We can
generate novel solutions, characterized by high-quality and excellence in technol-
ogies, but the customers represent ineluctable premise and decree unscrupulously
our destiny in terms of survival and success in the competitive market place”.
From this perspective, academic spin-off companies are trying to develop an operational focus on several marketing activities aimed at gathering and utilizing information about customers' expressed and latent needs. Furthermore, they realize the importance of discovering, understanding and pursuing market opportunities that are not known to their competitors.
They also increasingly understand that, nowadays, firms with a strong technological base have to effectively incorporate customer knowledge into their product development processes, since such an effort supports the creation of innovations and the commercialization of the outcomes of innovative processes into successful products/services that meet consumer needs and expectations and deliver value.
In this way, the customer's role changes, moving from a passive recipient of information flows concerning the products and services developed by companies to a competent and suitable knowledge source that firms can stimulate and involve in their innovation processes. Therefore, high-tech firms have to conceive and realize suitable new opportunities for acquiring continuous and systematic information pertaining to the skills, competencies and capabilities of their customers, because this appears to be a key condition for achieving marketplace success. In fact, the participation and collaboration of customers can be a strategic way of stimulating creativity and innovation, and of designing synergic outputs (derived from a gradual and articulated process of interactions among the involved parties). In turn, customers' involvement in innovation processes may yield several benefits and, primarily, may allow firms to discover the best innovative solutions to different problems, which are often not easily expressed. In this respect, an academic entrepreneur points out that "the type of clients that needs frequent contacts for developing
solutions to daily problems and configuring prototypes fostering our creativity has
stimulated advantageous forms of participation in our internal R&D processes and
suitable collaborations”. Also, he stresses that “some of our products are the result
of intensive processes of exchange and collaboration between firm and customer,
which is involved from idea generation and product design to test of prototypes,
permitting to eliminate defects and reduce the risk to failure”.
In these circumstances, customers can contribute to different phases of the innovation process. Organizations may thus choose to work with these parties in order to anticipate emerging market needs (which usually take a long time before the mass marketplace realizes their importance), to personalize products for their needs and, consequently, to cope with market uncertainty. In turn, academic spin-off companies can acquire knowledge sources at low cost and accelerate the time-to-market of their products/technologies in turbulent and highly competitive environments. In this way, they gain easy access to the social dimension of customer knowledge and
How Do Academic Spin-off Companies Generate and Disseminate … 185
gradually extend the reach and scope of customers to interact with, thus enhancing
innovation and business performance.
Secondly, some academic spin-offs are beginning to regularly gather, analyse and interpret information on the market strategies and the main strengths and weaknesses of the key players that offer similar products, or products with similar functionalities, intended to capture the same market demand.
One of the entrepreneurs highlights that "we are focused on competitors' features, their mechanisms and significant tactical activities, their innovation processes and their innovation performance in terms of new patents, licenses, and technological platforms/underpinnings. This is because they impact and change the rules and the logics of the game". In addition, the same interviewee stresses that "although the size and dimensions of our companies do not allow us to assume a significant role or to really influence the external competitive environment, we have one strategic possibility, consisting in the identification of market spaces not yet explored and engaged by our competitors, by designing focused new propositions and obtaining positive business performance in the long term".
Recognizing the relevance of these questions, spin-off firms make efforts to gather, analyse and disseminate competitor information regarding the following aspects:
• the characteristics of competitors' proposals, focusing on the applications of the technological bases most relevant for upcoming innovations;
• the availability of resources and competencies that are valuable and difficult for competitors to imitate, and that therefore explain the advantage positions obtained;
• cost structures;
• the capability to continually develop technological innovations through which to maintain leadership over time;
• patent portfolios that increase contractual power.
Furthermore, an interesting aspect for this type of company is the often underestimated opportunity of gathering, analysing and sharing information about indirect rivals and key potential competitors, who frequently come from outside existing industry boundaries, so that competition becomes concentrated on product classes. The lack of focus on their goals, resources and capabilities thus reduces the possibility of designing and elaborating the modifications that the high-tech environment necessitates, i.e. of moving from an attitude that is merely responsive to markets' evolutionary phenomena to an anticipatory attitude, which requires efforts to forecast events in the competitive environment.
Finally, almost all the academic spin-off companies we have interviewed have
strongly underlined that the activities related to the main dimensions of market
orientation require the availability of a set of suitable resources. More specifically,
these resources are: human resources with specialized competences and capabilities;
economic and financial resources to support investments (e.g., planning and
development of marketing information systems); and technical/technological resources for the systematic generation, integration and elaboration of customer intelligence and competitor intelligence within and across the people and departments of the organization.
186 T. Abbate and F. Cesaroni
5 Conclusion
Our analysis shows that the generation, dissemination and integration of information on customers' needs and requirements, competitors' strategies and actions, and other market forces are relevant activities for academic spin-off companies. These companies should recognize market orientation as a key driver of market information processing activity and incorporate it within their innovation processes.
Thus, academic spin-off firms should acquire, collect, and disseminate information,
A Two Step Procedure for Integrated
Inventory—Supply Chain Management
Information Systems
Keywords Integrated inventory management · Distribution systems · Customer satisfaction · Role of information sharing in the integration of functions
1 Introduction
A few years ago, in an increasingly competitive and aggressive market, companies developed the logistic function, devoted to managing the flows of information and goods in the logistic system, in order to improve the customer service level and control logistic costs. Nowadays, these companies have to reorganise their logistic and supply chain management systems in order to cope with change, achieve flexibility and, above all, guarantee a high level of service as a key factor for competitiveness. Information sharing along the whole chain is a key factor for achieving flexibility. Moreover, the introduction of Electronic Commerce (EC) has induced completely new changes and problems in distribution channels, which impact on increasing customer service expectations [4].
A review of supply chain management operations in a multi-channel distribution system with an EC channel is presented in [1], where the managerial planning tasks for the activities involved at each level of the supply chain are reported, together with the corresponding quantitative models; some strategies for inventory management are also described. In [5] a survey of the supply chain management literature focusing on the innovative measures of Quick Response (QR) is presented.
High quality services and cost minimization are imperative goals for competitive supply chains [6]. All over the world, customers pay more and more attention to the intangible value of products. Moreover, customers expect distribution costs not to negatively impact the price of products.
The distribution activities connected to the customer service level and the logistic costs include order management, inventory and storage management, material handling, and the transportation of goods. Related to these logistic activities, companies face logistic costs involving transportation costs, warehousing and inventory costs, and stock out costs related to the impossibility of completely satisfying the demand.
Companies should redesign the optimal allocation of inventories in the distribution system in such a way as to avoid an uncontrolled growth of costs and the presence of overstocks in the warehouses, while maintaining enough inventories to satisfy customers' demand; in particular, as stressed in [5], the fundamental task is to balance the stock levels at the top and bottom echelons. In [16] and, more recently, in [14], control rules for minimizing unbalanced stock levels are proposed. In a recent paper [17], three different inventory strategies for a one-manufacturer, one-retailer supply chain with both a traditional channel and an e-channel are compared.
Motivated by the above considerations, in this work we devote our attention to the integration of the inventory and distribution management functions in a multi-echelon, multi-channel distribution system, with the main aim of balancing stock levels in the whole network. Some real multi-echelon distribution systems are described in [2].
Articles dealing with similar problems generally concern simple networks (i.e. tree systems with 2 levels or n-echelon serial systems) in which demand points usually lie at the last level of the network. Inventories are often included only in the facilities operating at the lowest level of the network, that is, at the peripheral depots. Many papers dealing with integrated inventory management take as objective function the minimization of the distribution costs and as decision variables the order points of each facility in the network.
The supply chain network under investigation is a multi-echelon, multi-channel distribution system in which there is a flow of final products from the plants (where they are produced) to the demand points, generally called customers. The network is characterized by the presence of central depots (D), peripheral depots (P) and customers, which in turn are split into clients, i.e. wholesalers (C), and big clients, i.e. distributors or retailers (B). In the following, plants are not included in the analysis, since the central depots play the role of supply points of the network.
The following channels for supplying goods are considered:
• a traditional channel, where peripheral depots (supplied by the central ones)
serve customers;
• a direct channel, for serving big clients characterized by large demand, thus
served directly by the central depots.
As an example of such a logistic network, Fig. 1 reports a simple distribution system with 2 central depots (D), 3 peripheral depots (P), 4 big clients (B) and a set (C) of other customers. Note that the links (arrows) in the network represent the flow of goods from depots to clients and from central depots to peripheral ones. Such links are usually predefined but, as we will see in the next section, we consider the possibility of changing the given flow assignment in order to obtain balanced stock levels.
We assume that balanced stock levels imply the same inventory level at each
peripheral depot, for each product, in terms of number of days of stock, while a
higher stock is maintained at the central depots.
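The balanced-stock condition above can be sketched with a few lines of code; the depots and figures below are illustrative, not taken from the case study:

```python
# Sketch: balanced stock levels mean every peripheral depot covers the
# same number of days of demand for a given product (illustrative data).

def days_of_stock(inventory, daily_demand):
    """Inventory expressed as the number of days of demand it covers."""
    return inventory / daily_demand if daily_demand > 0 else float("inf")

peripheral = {
    "P1": {"inventory": 300.0, "daily_demand": 50.0},
    "P2": {"inventory": 120.0, "daily_demand": 20.0},
    "P3": {"inventory": 240.0, "daily_demand": 40.0},
}

coverage = {p: days_of_stock(v["inventory"], v["daily_demand"])
            for p, v in peripheral.items()}

# The network is balanced if every depot covers the same number of days.
balanced = len({round(c, 6) for c in coverage.values()}) == 1
```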
Referring to Fig. 1, the central depots (D1 and D2) directly serve the peripheral depots (P1, P2 and P3) and the big clients (B1, …, B4). Inventories are stocked both at the central and at the peripheral depots. The assignment of the peripheral depots and big clients to the central depots is known, as is the assignment of the clients to the peripheral depots.
Fig. 1 A simple distribution network with central depots D1 and D2, peripheral depots P1–P3, big clients B1–B4 and sets C of clients
Having in mind the above distribution system, assuming a time horizon T split into t homogeneous periods (T = {1, 2, …, t}), and given the customers' demand for each time period, the problem is to determine the optimal flow of goods in the network and the inventory levels to maintain at each central and peripheral depot for each time period t ∈ T; this implies deciding the order emission time and the quantity that each depot has to order. The capacity of the depots and the customer service level constraints have to be satisfied.
The objective is the minimization of the ordering, inventory, stock out and transportation costs.
We assume that the customer service level, which represents an important parameter for checking the performance of the distribution system, is expressed as the percentage of fulfilled demand.
Moreover, we assume a centralised control system based on global information. Centralized control allows changes in the inventory policy by modifying the flows of goods in the network in order to avoid stock out.
Our inventory policy is based on a periodic (daily) review policy in which goods are ordered when inventories fall below a given level, the so-called order point; the quantity to order is defined so as to restore inventories while minimising the logistic costs, and depends on the existing stock in the whole system and, consequently, on the inventory strategy used.
Some stock controls are used for finding the best inventory strategy among a base stock policy, a rationing strategy as proposed in [11], and a modification of the base stock policy as suggested in [3].
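A minimal single-depot simulation of such a periodic (daily) review, order-point policy might look as follows; the order-up-to level and all figures are illustrative assumptions, not the calibrated parameters of the chapter:

```python
# Sketch of a daily-review, order-point policy for a single depot:
# when the inventory position (on hand + on order) falls below the
# order point, an order restoring inventories is issued and arrives
# after the lead time. All parameters are illustrative.

def simulate(demand, order_point, order_up_to, lead_time, initial_stock):
    stock = initial_stock
    pipeline = []                                   # (arrival_period, qty)
    on_hand_history = []
    for t, d in enumerate(demand):
        # receive orders whose lead time has elapsed
        stock += sum(q for due, q in pipeline if due == t)
        pipeline = [(due, q) for due, q in pipeline if due != t]
        # serve the demand of the period (unmet demand is a stock out)
        stock -= min(stock, d)
        # daily review of the inventory position
        position = stock + sum(q for _, q in pipeline)
        if position < order_point:
            pipeline.append((t + lead_time, order_up_to - position))
        on_hand_history.append(stock)
    return on_hand_history

hist = simulate(demand=[10] * 10, order_point=25,
                order_up_to=60, lead_time=2, initial_stock=40)
```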
The proposed two phase algorithm for solving the problem described above is now
presented.
In the first phase, taking into account the logistic network under consideration and the existing assignment of the peripheral depots (P) and big clients (B) to the central depots (D), we decompose the problem into |D| sub-problems; we thus define the optimal flows and inventory policy by solving a Mixed Integer Linear Programming (MILP) model for each central depot of the network and its sub-network. In this phase, the amount of information available is considered in the definition of the echelon stock level and echelon inventory position at the central depots.
In the successive phase, denoted the "integration" phase, we determine the "current stock situation" of the whole network, thus identifying the best transfer policy for managing the flow of goods and guaranteeing the highest possible customer service level. Note that in this phase an information system able to provide all central depots of the network with real-time information on the exact stock and inventory position of the peripheral depots is a crucial element.
After checking whether the inventory and distribution policies obtained by solving the |D| MILP models are adequate with respect to the overall current stock situation,
194 D. Ambrosino and A. Sciomachen
different instruments for managing the flows of goods in the network and maintaining balanced stock levels in all depots are used. In particular, first the flows are defined by solving the |D| MILP models (base stock policy); otherwise, the current assignment of peripheral depots (P) and big clients (B) to central depots (D) can be revised and, finally, different stock policies (i.e. the modified base stock policy and the rationing strategy) can be used.
Let us describe in more detail the two phases of the proposed solution approach. Note that, when describing the following procedure, we will refer to a representative product; however, the model and the other steps of the procedure can be extended to multiple products (e.g. by defining different stock levels for each product).
In this phase, referring to a time horizon T, we define the optimal flows in the
considered network and the inventory level for each stock point (i.e. for each D and P).
Before presenting the model, let us give the required notation.
For each central and peripheral depot j, ∀j ∈ D ∪ P, the following quantities are known:
lj      lead time of depot j;
kj      capacity of depot j;
sj      service level of depot j;
ojt     order point of depot j in period t, ∀t ∈ T;
coj     fixed ordering cost of depot j;
cwj     warehousing cost (per unit of inventory and per unit of time) of depot j;
csj     stock out cost (per unit of demand and per unit of time) of depot j;
Ij0     stock level of depot j at the beginning of the time horizon;
Qj,t−lj quantity ordered by depot j in the lj periods preceding the beginning of the time horizon.
Moreover, for each period of time t and for each big client and peripheral depot i, ∀i ∈ B ∪ P, ∀t ∈ T, the following quantities are known:
dit   demand of big client/peripheral depot i in period t;
ctdi  transportation cost from central depot d to big client/peripheral depot i, ∀d ∈ D;
cdi   assignment of big client/peripheral depot i to central depot d, ∀d ∈ D (i.e. cdi = 1 if i is assigned to central depot d, 0 otherwise).
The decisions, in each time period t, are related to the ordered quantity and stock out of each depot:
Qjt ≥ 0   ordered quantity of depot j in time period t, ∀j ∈ D ∪ P, ∀t ∈ T;
βdit ≥ 0  stock out of central depot d with respect to big client i in time period t, ∀d ∈ D, ∀i ∈ B, ∀t ∈ T;
βjt ≥ 0   stock out of peripheral depot j in time period t, ∀j ∈ P, ∀t ∈ T.
When a depot orders a positive quantity Qjt > 0, it has to pay a fixed ordering cost, and the following binary decision variables are needed:
yjt = 1 if depot j issues an order in time period t, 0 otherwise, ∀j ∈ D ∪ P, ∀t ∈ T;
and to the echelon stock level and echelon inventory position of each central depot:
Iechdt ≥ 0   echelon stock level of central depot d in time period t, ∀d ∈ D, ∀t ∈ T;
IPechdt ≥ 0  echelon inventory position of central depot d in time period t, ∀d ∈ D, ∀t ∈ T.
The proposed Integrated Inventory Management (IIM) model can now be given as follows.

min Σt∈T Σj∈D∪P coj yjt + Σt∈T Σd∈D cwd Iechdt + Σt∈T Σj∈P cwj Ijt + Σt∈T Σj∈P csj βjt + Σt∈T Σd∈D Σi∈B csd βdit + Σt∈T Σj∈P ctdj Qj,t−lj + Σt∈T Σd∈D Σi∈B ctdi (dit − βdit)   (1)

subject to

Idt + Σj∈P cdj (Ijt + Qjt) = Iechdt   ∀d ∈ D, ∀t ∈ T   (5)

Iechdt + Qdt − Σj∈P cdj βjt − Σi∈B cdi βdit = IPechdt   ∀d ∈ D, ∀t ∈ T   (6)

IPechdt ≥ odt   ∀d ∈ D, ∀t ∈ T   (8)

(djt − βjt)/djt ≥ sj   ∀j ∈ P, ∀t ∈ T   (11)

(dit − βdit)/dit ≥ sd   ∀d ∈ D, ∀i ∈ B : cdi = 1, ∀t ∈ T   (12)
Expression (1) is the objective function of the proposed model, minimizing the four main cost components of our problem, namely the ordering, warehousing, stock out and transportation costs.
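As a sketch, the four cost components of (1) can be evaluated for a candidate plan with plain Python; all depot names and figures below are illustrative, and a real implementation would leave these decisions to a MILP solver:

```python
# Sketch: the four cost components of objective (1) -- ordering,
# warehousing, stock out and transportation -- for a candidate plan.
# All names and figures are illustrative.

def logistic_cost(plan, co, cw, cs, ct):
    """plan is a list of per-period records with the depots that order,
    the inventories, the stock outs and the shipped quantities."""
    ordering = sum(co[j] for p in plan for j in p["orders"])
    warehousing = sum(cw[j] * q for p in plan for j, q in p["inventory"].items())
    stock_out = sum(cs[j] * b for p in plan for j, b in p["stock_out"].items())
    transport = sum(ct[l] * q for p in plan for l, q in p["flows"].items())
    return ordering + warehousing + stock_out + transport

plan = [
    {"orders": ["P1"], "inventory": {"D1": 100, "P1": 40},
     "stock_out": {"P1": 0}, "flows": {("D1", "P1"): 30}},
    {"orders": [], "inventory": {"D1": 70, "P1": 50},
     "stock_out": {"P1": 5}, "flows": {}},
]
cost = logistic_cost(plan,
                     co={"P1": 20.0},            # fixed ordering cost
                     cw={"D1": 0.1, "P1": 0.2},  # warehousing cost per unit
                     cs={"P1": 3.0},             # stock out cost per unit
                     ct={("D1", "P1"): 0.5})     # transport cost per unit
```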
Constraints (2) set the binary variables yjt to 1 if a positive quantity Qjt is ordered by a depot of the network. Equations (3) and (4) define, for each time period t, the stock level of the central and peripheral depots, respectively. Note that the main difference between (3) and (4) is due to the fact that central depots play a dual role; in particular, as already said, central depots have to serve both customers and peripheral depots.
Equations (5) and (6) define the echelon stock level and echelon inventory position of the central depots, while (7) relate to the inventory position of the peripheral ones.
Constraints (8) and (9) control the stock level to maintain at each stock point, and force the echelon inventory position and the inventory position to be greater than or equal to the established order point for central and peripheral depots, respectively.
Constraints (10) are the capacity constraints of the stock points of the network.
Finally, (11) and (12) are the customer service level constraints and impose that the percentage of satisfied demand of the depots and big clients must be greater than or equal to a predefined quantity expressed by the service level.
This model assumes that information sharing is performed in the network. Otherwise, if no such information is available, only the stock level can be used in the model instead of the echelon stock and echelon inventory position.
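The two inventory concepts can be contrasted in a short sketch: with information sharing, the echelon stock of a central depot, as in (5), also counts the inventories and ordered quantities of the peripheral depots assigned to it; without it, only the local stock level is visible (assignments and figures below are illustrative):

```python
# Sketch of the echelon stock of a central depot as in Eq. (5): local
# stock plus the stock and ordered quantities of the peripheral depots
# assigned to it. Without information sharing, only local_stock is known.

def echelon_stock(local_stock, assigned, stock, on_order):
    """assigned lists the peripheral depots j with c_dj = 1."""
    return local_stock + sum(stock[j] + on_order[j] for j in assigned)

local_stock = 100.0                         # I_dt at central depot D1
assigned = ["P1", "P2"]                     # illustrative assignment
stock = {"P1": 40.0, "P2": 25.0}            # I_jt
on_order = {"P1": 30.0, "P2": 0.0}          # Q_jt

ech = echelon_stock(local_stock, assigned, stock, on_order)
```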
This phase is aimed at verifying whether the solution of model (1)–(12), which defines the optimal inventory allocation in the whole distribution system, is consistent with the current stock level in the whole network and, thus, at defining the best inventory strategy according to the current global stock level, with the goal of avoiding unbalanced inventories at the different echelons. In fact, note that even a shortage in one part of the network may require modifying the optimal inventory allocation in the whole distribution system.
At the end of phase 1, the inventory manager of the network knows the global amount of goods that has to leave each central depot to serve the peripheral depots and big clients. These quantities represent the out flow of these depots and are the result of a base stock policy obtained by solving model IIM.
The following main steps describe phase 2, called the integration phase.
Step 1: identification of local stock out for each central depot.
If the existing stock level is greater than the out flow, the base stock policy
is used.
Else: the inventory policy is re-determined by solving model IIM for the whole network; i.e., in the new model the assignment of peripheral depots and big clients to the central depots is a decision to be taken (we will refer to this new model as IIM-A).
At the end of Step 1, the inventory manager of the network knows the
quantities to transfer from each central depot to the peripheral depots and
to big clients assigned to it. These quantities represent the out flow of
central depots and are the result of a base stock policy with new
assignments. The new assignments guarantee a better distribution of
goods in the network.
Step 2: identification of possible global stock out in the network.
If the global amount of inventories existing at the top level of the network
is sufficient to meet the demand of the whole network (total out flow), the
base stock policy with the new assignment is used.
Else: a modification of the base stock policy [3] is used and a notification of the existing stock level is sent to the production function.
The base stock policy modification is obtained by reducing the order point of each depot, that is, by solving model IIM-A with a "minimum" order point (ominjt) (we will refer to this new model as IIM-A(omin)). In this way, the demand of the peripheral depots decreases, while inventories are kept at the top echelon of the network.
Step 3: identification of emergency global stock out in the network.
If the global amount of inventory existing at the top level of the network is enough to satisfy the requirements of the whole network resulting from the solution of IIM-A(omin), the base stock policy modification is used.
Else: a rationing policy [11] is used; that is, each peripheral depot with a positive demand in the time period under investigation will receive a quantity defined in such a way that each depot has the same days of coverage (balanced distribution).
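The three steps above can be sketched as a small decision procedure, with the MILP solves abstracted into stock checks; the rationing rule gives every depot with positive demand the same days of coverage. Thresholds and figures are illustrative:

```python
# Sketch of the integration phase: choose the inventory strategy from
# the current stock situation (Steps 1-3), and ration scarce stock so
# that every depot gets the same days of coverage. The MILP solves of
# the chapter are abstracted into the boolean/threshold checks.

def choose_strategy(local_ok, global_stock, total_out_flow, reduced_out_flow):
    if local_ok:                               # Step 1: no local stock out
        return "base stock policy"
    if global_stock >= total_out_flow:         # Step 2: new assignments suffice
        return "base stock policy with new assignments"
    if global_stock >= reduced_out_flow:       # Step 3: reduced order points
        return "modified base stock policy"
    return "rationing policy"                  # emergency global stock out

def ration(available, daily_demand):
    """Split the available stock so that every depot with a positive
    demand receives the same number of days of coverage."""
    days = available / sum(daily_demand.values())
    return {p: days * d for p, d in daily_demand.items()}

strategy = choose_strategy(local_ok=False, global_stock=90.0,
                           total_out_flow=110.0, reduced_out_flow=100.0)
alloc = ration(90.0, {"P1": 50.0, "P2": 20.0, "P3": 40.0})
```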
We use the solution approach described above to solve distribution and inventory problems on different networks and to evaluate new distribution strategies. The experimental tests are based on a distribution network made up of 2 D, 10 P, 30 B and 100 C already assigned to P. The time horizon is three weeks, split into time periods of one day. The feasible flows of goods in the network are those described in Fig. 1. The demand of the customers of the network presents a constant trend during the considered time horizon, and the demand of the big clients B represents 20 % of the global demand of the network.
We simulate different scenarios by assuming different initial stock situations and customers' demands. In particular, referring to the initial stock situation, we consider a standard scenario (St.S.), in which the initial stock situation is coherent with the demand of the network, and a critical scenario (Cr.S.), in which the initial stock situation is not enough to satisfy the demand of the network.
For the considered scenarios, we compare the costs and inventory levels obtained by using the two above-mentioned concepts of inventory in the MILP model (i.e. the inventory I. and the echelon stock Ech.).
Figure 2 reports some graphs related to the partition of the logistic costs in the different cases analyzed; in the last row the total logistic costs, obtained by solving the MILP models, are indicated.
It can be noted that when the echelon stock concept is used, the warehousing costs are lower than when the inventory concept is used, while the ordering costs show the opposite trend.
Fig. 3 Comparison of inventory levels at central depots
Another difference concerns the stock levels maintained at the central depots during the time horizon, as reported in Fig. 3. We noted that, when referring to echelon stocks, that is, in the case of integrated inventory management obtained thanks to information sharing, costs decrease by 7 % on average.
Referring to the customers' demand, we consider two other scenarios that differ in the percentage of demand of the customers C (i.e. served by P) and of the big clients B (i.e. served directly by D). Starting from a standard initial situation (St.S.) and using the echelon stock concept (Ech.), the graphs in Fig. 4 compare the results obtained in the following cases: 100 %C–0 %B, 80 %C–20 %B and 60 %C–40 %B. A greater presence of big clients B in the network involves higher ordering and warehousing costs for the central depots D, while the total ordering and warehousing costs are lower. The total transportation costs also decrease.
Fig. 4 Comparison of ordering, inventory, transportation and stock out costs in case of different partitions of customers between B and C
References
1. Agatz, N.A.H., Fleischmann, M., van Nunen, J.A.E.E.: E-fulfillment and multi-channel
distribution—a review. Eur. J. Oper. Res. 187, 339–356 (2008)
2. Ambrosino, D., Scutellà, M.G.: Distribution network design: new problems and related
models. Eur. J. Oper. Res. 165, 610–624 (2005)
3. Chen, F.: Optimal policies for multi-echelon inventory problems with batch ordering. Oper.
Res. 48(3), 376–389 (2000)
4. Chiang, W.K., Monahan, G.E.: Managing inventories in a two-echelon dual-channel supply
chain. Eur. J. Oper. Res. 162, 325–341 (2005)
5. Choi, T.-M., Sethi, S.: Innovative quick response programs: a review. Int. J. Prod. Econ. 127
(1), 1–12 (2010)
6. Chopra, S., Meindl, P.: Supply chain management: strategy, planning & operation. Springer
(2007)
7. Chu, C.-L., Leon, V.J.: Single-vendor multi-buyer inventory coordination under private
information. Eur. J. Oper. Res. 191(2), 485–503 (2008)
8. Chu, C.-L., Leon, V.J.: Scalable methodology for supply chain inventory coordination with
private information. Eur. J. Oper. Res. 195(1), 262–279 (2009)
9. Clark, A.J., Scarf, H.: Optimal policies for a multi-echelon inventory problem. Manage. Sci. 6,
475–490 (1960)
10. Dettenbach, M., Thonemann, U.W.: The value of real time yield information in multi-stage inventory systems—exact and heuristic approaches. Eur. J. Oper. Res. (available online 30 June 2014)
11. Diks, E.B., De Kok, A.G.: Optimal control of a divergent multi-echelon inventory system.
Eur. J. Oper. Res. 111 (1998)
12. Hajji, A., Gharbi, A., Kenne, J.-P., Pellerin, R.: Production control and replenishment strategy
with multiple suppliers. Eur. J. Oper. Res. 208(1), 67–74 (2011)
13. Lee, L.H., Billington, C.: Material management in decentralized supply chains. Oper. Res. 41(5), 835–848 (1993)
14. Seo, Y., Jung, S., Hahm, J.: Optimal reorder decision utilizing centralized stock information in
a two-echelon distribution system. Comput. Oper. Res. 29, 171–193 (2002)
15. Van der Heijden, M.C.: Supply rationing in multi-echelon divergent systems. Eur. J. Oper.
Res. 101, 532–549 (1997)
16. Verrijdt, J.H.C.M., De Kok, A.G.: Distribution planning for a divergent N-echelon network
without intermediate stock under service restriction. Int. J. Prod. Econ. 38, 225–243 (1995)
17. Yao, D.-Q., Yue, X., Mukhopadhyay, S.K., Wang, Z.: Strategic inventory deployment for retail and e-tail stores. Omega 37(3), 646–658 (2009)
Unsupervised Neural Networks
for the Analysis of Business Performance
at Infra-City Level
Abstract The goal of this paper is to use Neural Networks (NN) to analyze business performance and support development policies for small territories. The contribution of the work to the existing literature may be summarized as follows: we focus on the application of an unsupervised neural network (namely, Self-Organizing Maps—SOM) to discover clusters of firms in micro-territories inside a city's boundaries, and to explore possible development policies at the local level. Although NN have been widely employed to evaluate firm performance since the early 1990s, to the best of our knowledge the use of SOM for that specific task is much less documented. Moreover, the main novelty of the paper lies in the attention to data at the "microscopic" level: data processing in an infra-city perspective has in fact been neglected until now, although recent studies demonstrate that inequalities in the economic and well-being conditions of people are higher among neighbourhoods of the same city than among different cities or regions. The performance analysis of a large set (around 7,000) of companies located in Genoa, Italy, permits us to test our research method and to design further applications to a large spectrum of territorial surveys regarding both economic and social well-being conditions.
Keywords Neural networks · Self-organizing maps · Knowledge management · Business performance · Territorial development · Inclusive growth
The recent economic crisis has seriously impacted the economic and social well-being of citizens, contributing to increasing inequalities among countries, races, genders, regions and even cities. Several OECD1 indicators (covering both economic and non-economic well-being drivers) show that people have suffered from the economic crisis differently depending on where they live [1].
An interesting point of view, not deeply investigated so far, concerns the role played by micro-territories in influencing citizens' quality of life and inequalities [2]. As a matter of fact, territories now play a growing role in defining development policies, also thanks to the regionalization of EU policies and funding: regions are the core government body considered by the EU in shaping its own policies. Furthermore, the OECD has focused its attention on a smaller scale, collecting statistical data on well-being and on social and economic development not only at the national but also at the infra-national level, hence taking regions, small regions (corresponding to provinces or similar) and metropolitan areas into account [3].
However, these analysis efforts may not be enough: several studies demonstrate that inequality is higher among neighborhoods belonging to the same metropolitan area than among regions or cities. On the other hand, micro-territories are often crucial in determining the settlement of technological districts or regional clusters. It is therefore important to further refine the survey scale, analyzing data concerning smaller areas, because the economic and social well-being determinants of a city's neighborhoods considerably influence people's daily lives [4].
Starting from this point, this work aims to develop and test a micro-territorial dashboard based on neural networks to analyze data, hence supporting knowledge of small portions of metropolitan areas and accordingly addressing development policies aimed at strengthening local opportunities and fighting inequality [5].
In order to develop a pilot, we analyzed data regarding business performance in the Municipality of Genova. Genova is an industrial and port city in Northern Italy; it has about 600,000 inhabitants and is divided into nine administrative districts. Our survey investigates the relations between firm performance and small territories, to discover, where present, the reciprocal influence of positioning economies, territorial development and citizens' well-being. In this first application, our focus is on the emergence of clusters of firms, that is, groups of firms characterized by similarities in their performance, as the presence of firms with particular performance profiles seems an important driver of either well-being or hardship.
In search of significant patterns of activity, we employed an unsupervised neural network, namely Self-Organizing Maps (SOM). The use of SOM in the budgeting and accountancy literature is attested by contributions aimed either at discovering patterns of companies with similar strategic positioning in their reference industry [6], or at controlling banks' exposure to the risk of default [7]. However, in our
1
http://www.oecd.org/.
Unsupervised Neural Networks for the Analysis … 205
2 Methodology
Artificial Neural Networks (ANN) have features that make them appealing both to connectionist researchers and to practitioners needing ways to solve complex problems, thanks to their ability to handle large amounts of data [8]. This is because each node in a neural network is essentially an autonomous entity: each performs only a small computation within the grand scheme of the problem. The aggregate of all these nodes, the entire network, is where the true capability lies.
Before an ANN can become useful for information retrieval, it must learn about the information at hand. In general, there are three flavors of learning [9]: supervised, reinforcement and unsupervised learning. In the first case the training data consist of a set of examples; each example in turn is a pair made up of an input object (typically a vector) and a desired output value (also called the supervisory signal). The available data are then used to produce an inferred function, which can be used for mapping new examples. The accuracy of the learned function is controlled by monitoring the error (the bias) between the estimated and the desired output. Typically the procedure ends when all the example pairs have been examined and the error has been iteratively reduced to values very close to zero. In reinforcement learning [10], on the other hand, the algorithmic machine interacts with the input environment by producing actions a1, a2, …. These actions affect the state of the environment, which in turn results in the machine receiving some scalar rewards (or punishments) r1, r2, …. The goal is to learn to act in a way that either maximizes the future rewards the algorithm receives, or minimizes the punishments over its lifetime. Finally, in unsupervised learning the machine simply receives inputs x1, x2, …, but obtains neither supervised target outputs nor rewards from its environment: the network is simply asked to try, on its own, to discover patterns in the input data. It may seem somewhat mysterious to imagine what the machine could possibly learn, given that it does not get any feedback from its environment. However, it is possible to develop a formal framework for unsupervised learning based on the notion that the main goal of the procedure is to find hidden structure in the data, hence summarizing and explaining their key features. Kohonen's
206 R.P. Dameri et al.
and:

$$w_{ij}(t+1) = w_{ij}(t) + \alpha(t)\, h\big(t, x(t), w_{ij}(t)\big)\, \big[x(t) - w_{ij}(t)\big] \qquad (3)$$

where $d\big(x(t), w_{ij}(t)\big)$ is the function that computes the distance between input patterns and each node in M. Although very sophisticated functions may be used, the most common choice is the Euclidean distance:

$$d_E\big(x(t), w_{ij}(t)\big) = \sqrt{\big(x(t) - w_{ij}(t)\big)'\, \big(x(t) - w_{ij}(t)\big)} \qquad (4)$$

the symbol $'$ being the standard notation for transposition. The node satisfying (2) is called the winner or Best Matching Unit (BMU). The notation $\alpha(t)$ indicates a scalar decreasing factor in the range (0,1), depending on time t, that defines the size of the correction: starting from values close to one (maximum correction), as time goes on $\alpha(t)$ decreases towards zero (no correction at all). Finally, $h$ is the neighborhood function; it models the distance between map nodes and the BMU. Function $h$ may assume various shapes, but here we refer to the simplest one:

$$h\big(p_{BMU}, p_{w_{ij}}, t\big) = e^{-t\, \lvert p_{BMU} - p_{w_{ij}} \rvert} \qquad (5)$$

where $p_{BMU}$ and $p_{w_{ij}}$ are the grid coordinates of the BMU and of the generic map node, respectively.
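The BMU search, the update rule and the exponential neighborhood function described above can be sketched in code. The following is a minimal illustrative implementation, not the authors' code; the linearly decaying learning factor and the default parameters are assumptions made for illustration:

```python
import numpy as np

def som_step(weights, x, t, alpha0=0.9, max_iter=1000):
    """One SOM training step: find the BMU, then pull every node towards
    the input x, weighted by the neighborhood function.
    `weights` has shape (rows, cols, dim); `x` has shape (dim,)."""
    rows, cols, _ = weights.shape
    # Euclidean distance between x and every node (cf. Eq. 4)
    dists = np.linalg.norm(weights - x, axis=2)
    # Best Matching Unit: the node minimizing the distance
    bmu = np.unravel_index(np.argmin(dists), dists.shape)
    # Decreasing scalar factor alpha(t) in (0, 1) -- illustrative schedule
    alpha = alpha0 * (1.0 - t / max_iter)
    # Neighborhood h: exponential decay with grid distance to the BMU (cf. Eq. 5)
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                indexing="ij"), axis=2)
    grid_dist = np.linalg.norm(grid - np.array(bmu), axis=2)
    h = np.exp(-t * grid_dist)
    # Update rule: move every node towards x (cf. Eq. 3)
    weights += alpha * h[:, :, None] * (x - weights)
    return bmu
```

Calling `som_step` repeatedly over shuffled input samples, with t increasing, reproduces the shrinking-correction behavior described in the text.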
The goodness of the SOM representation of the input space can be evaluated by several error measures [12]. Here we considered the Topographic Error (TE). TE is the simplest of the topology preservation measures and works as follows: for each data sample, the best and second-best matching units are determined; if these are not adjacent on the map lattice, this is counted as an error. The total error is then normalized to a range from 0 to 1, where 0 means perfect topology preservation. The learning procedure is therefore stopped when the TE is reasonably close to zero.
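The TE computation just described can be sketched as follows (an illustrative implementation, not the authors' code; "adjacent" is interpreted here as neighboring grid cells, including diagonals):

```python
import numpy as np

def topographic_error(weights, samples):
    """Fraction of samples whose best and second-best matching units are
    not adjacent on the map lattice (0 = perfect topology preservation)."""
    rows, cols, _ = weights.shape
    errors = 0
    for x in samples:
        # Distance from x to every node, flattened to rank all nodes
        d = np.linalg.norm(weights - x, axis=2).ravel()
        best, second = np.argsort(d)[:2]
        r1, c1 = divmod(best, cols)
        r2, c2 = divmod(second, cols)
        # Adjacent = Chebyshev distance 1 on the grid
        if max(abs(r1 - r2), abs(c1 - c2)) > 1:
            errors += 1
    return errors / len(samples)
```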
Apart from theoretical considerations, the beauty of SOM is that it offers a convenient tool to project high-dimensional input data onto a two-dimensional lattice, according to the principle that similar inputs are mapped onto neighboring nodes. Consider for instance Fig. 1, which shows a SOM map resulting from projecting 4-dimensional input samples onto the neural space.
This figure uses the following coding: hexagons represent the neurons, and the colors indicate the distances between neurons; tones of red refer to the largest distances, while blue and lighter colors represent smaller distances. According to this color division, the network has clustered the data into three main groups. The color difference indicates that data points in these regions are farther apart. The interpretation of the results may be given at various levels of detail. An example is the study of how much the input components affect the overall representation: this information can be studied visually by examining the SOM weight planes, that is, by visualizing the neuron coloring for each single input component. Figure 2 offers a representation of the four weight planes obtained from the map in Fig. 1.
In this way the analyst can study both the organization of the input space provided by the overall SOM (as in Fig. 1) and the impact of each component on the overall structure of the data (as in Fig. 2), hence deriving some important pieces of information concerning the intrinsic features of the dataset.
The subject database for this study consisted of a sample of 7719 companies (cut-off date 31/3/2014) with registered offices in the Municipality of Genova (a smart city in the Northern part of Italy). The data were extracted from the AIDA2 data bank. Starting from the original sample, we eliminated 16 firms because of a lack of relevant data, thus obtaining a final data sample of 7703 companies. All the companies have the legal form of either limited companies or cooperatives, with balance sheets regularly filed for the year 2012. This dataset was employed to run both "conventional" performance analysis [13] and neural networks, as we describe in the next subsections.
Our sample of companies was grouped according to several criteria, including: the legal form, the geographic position within Genova, and the merchandise category. The overall picture highlights the following situation:
• from the legal point of view, 95 % of the examined companies are limited companies; the remaining 5 % are cooperative companies (25 % of which are social cooperatives);
• from the geographical point of view, 61 % of the whole sample is localised in the central district (city centre); the remaining 39 % is almost evenly distributed across the other 8 districts.
2
AIDA stands for: Analisi Informatizzata delle Aziende. It is a database provided by Bureau van
Dijk s.p.a (http://www.bvdinfo.com/it-it/home), giving information (mainly) about the balance
sheet of Italian companies.
3
ATECO is the abbreviation of Attività Economiche; it is the Italian adaptation, made by ISTAT to fit the Italian situation, of the Eurostat classification of Economic Activities. See: http://www.istat.it/it/strumenti/definizioni-e-classificazioni.
Table 1 Indicators frequency distribution: an analysis based on the legal form of the Genovese
companies
F/K – >80 % 80–60 % 60–40 % 40–20 % <20 % –
Tot. 1118 685 730 1159 4013 –
Lim.co 1099 664 690 1097 3798 –
Coop 19 21 40 60 215 –
N/K – >66 % 66–50 % 50–40 % 40–25 % <25 % –
Tot. 1210 657 552 1090 4194 –
Lim.co 1181 642 532 1044 3949 –
Coop 29 15 20 46 245 –
ROE – >100 % 100–50 % 50–20 % 20–10 % <10 % –
Tot. 366 512 1058 825 4942 –
Lim.co. 323 486 1013 803 4723 –
Coop 43 26 45 22 219 –
ROA – >100 % 100–50 % 50–20 % 20–10 % <10 % –
Tot. 5 114 487 765 6332 –
Lim.co 5 110 474 738 6021 –
Coop 0 4 13 27 311 –
Employees – >100 100–20 20–5 5–1 0 n.d
Tot. 105 523 1854 2234 2807 180
Lim.co 81 470 1761 2139 2726 171
Coop 24 53 93 95 81 9
Wages/VA with VA > 0 – >50 % 50–20 % 20–10 % 10–5 % <5 % n.d.
Tot. 2810 1313 183 72 1190 6
Lim.co 2582 1291 182 67 1881 5
Coop 228 22 1 5 38 1
Amort./VA with VA > 0 – >50 % 50–20 % 20–10 % 10–5 % <5 % n.d.
Tot. 743 1005 1038 1178 2339 0
Lim.co 725 984 1000 1144 2155 0
Coop 18 21 38 34 184 0
VP/emp. with VP > 0 – >1 mln 1–0.4 0.4–0.2 0.2–0.05 0.05–0 n.d
Tot. 226 529 839 2361 758 2984
Lim.co 219 522 827 2290 590 490
Coop 7 7 12 71 168 67
In this section we illustrate how to use SOM to obtain results with high visual impact and robust significance from a technical viewpoint, overcoming the limits of the traditional performance analysis listed above.
Our dataset consists of a 7703 × 14 input matrix, where each row represents a firm based in Genova, while the columns are the indicators already introduced in Sect. 3.1 (with the exception of the number of employees and of the ratio AMM/VA), to which we added further indicators: revenues, value added, wages, amortization, EBIT, interest on debt, taxes, and net profit. In this respect, this means adding complexity to our analysis. Nevertheless, since our aim is to explore the intrinsic nature of the data, i.e. their hidden features, we think that in this way it is possible to offer a more complete picture of the situation of the companies located in the area of Genova.
Before running the SOM, the data in each column were pre-processed and rescaled according to the formula $\frac{c - \min_c}{\mathrm{range}_c}$, where c is the column data, $\min_c$ the minimum observed in the column, and $\mathrm{range}_c$ the column's range of values. We then tested
Table 3 Indicators frequency distribution: situation of companies in the ATECO sectors 10–33;
35–44; 45–99
F/K – >80 % 80–60 % 60–40 % 40–20 % <20 % –
10–33 37 58 95 150 343 –
35–44 94 49 60 154 628 –
45–99 976 573 570 852 3021 –
N/K – >66 % 66–50 % 50–40 % 40–25 % <25 % –
10–33 70 68 64 123 358 –
35–44 121 71 60 126 607 –
45–99 1009 516 427 836 3204 –
ROE – >100 % 100–50 % 50–20 % 20–10 % <10 % –
10–33 34 28 103 73 445 –
35–44 41 74 129 95 646 –
45–99 285 408 824 655 3820 –
ROA – >100 % 100–50 % 50–20 % 20–10 % <10 % –
10–33 0 8 38 68 569 –
35–44 1 10 62 96 816 –
45–99 4 96 386 600 4906 –
Employees – >100 100–20 20–5 5–1 0 n.d.
10–33 19 112 283 172 87 10
35–44 11 53 267 241 399 14
45–99 74 357 1293 1808 2304 156
Wages/VA with VA > 0 – >50 % 50–20 % 20–10 % 10–5 % <5 % n.d.
10–33 357 170 17 3 70 0
35–44 320 177 26 10 232 0
45–99 2116 961 139 59 1612 6
Amort./VA with VA > 0 – >50 % 50–20 % 20–10 % 10–5 % <5 % n.d.
10–33 39 88 147 149 194 0
35–44 66 99 85 152 363 0
45–99 628 815 801 873 1776 0
VP/emp. with VP > 0 – >1 mln 1–0.4 0.4–0.2 0.2–0.05 0.05–0 n.d
10–33 17 61 109 327 72 80
35–44 21 32 76 355 87 317
45–99 188 435 649 1671 587 2070
different map dimensions, in search of the one assuring the best topographic error (TE). In this respect, we now discuss the results obtained with a 20 × 20 SOM, which brought the TE below 0.0002 (very close to zero).
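The column-wise min-max rescaling applied before training (value minus column minimum, divided by the column range) can be sketched as follows; this is an illustrative helper, not the authors' code, and the handling of constant columns is an added assumption:

```python
import numpy as np

def rescale_columns(data):
    """Min-max rescaling per column: (c - min_c) / range_c, mapping
    every indicator into [0, 1] before feeding the SOM."""
    cmin = data.min(axis=0)
    crange = data.max(axis=0) - cmin
    # Constant columns have zero range; divide by 1 so they rescale to 0
    crange = np.where(crange == 0, 1.0, crange)
    return (data - cmin) / crange
```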
Figure 3 shows the overall SOM we obtained.
Assuming the coding conventions already described in Sect. 2, we observe that there is a pattern of neurons with very similar colors, varying from deep blue to lighter blue: this means that a very high number of companies share similar performances, positioning them in the low range of the performance scale. We can also note three smaller areas whose colors (yellow-orange-red) suggest that they account for medium/high performance values.
In order to better understand the performance determinants and the role of each performance indicator, we now display (Fig. 4) the SOM component (indicator) planes. We can observe that although, from Fig. 3, the general performance level of many companies is very similar, the same does not apply to all the performance indicators, which show different values and hence lead to a more significant clustering of the companies.
References
1 Introduction
loyalty and retention and reduces customer attrition (churn). Therefore, this is a key element for them to improve their business and profitability.
There are several types of CEM system; some are based on the Voice of the Customer (VoC). Those systems collect feedback from the operator's end-customers through direct and indirect questioning. However, such systems present two main drawbacks. The first is that they are slow: CSPs need to wait to learn the opinion of their customers. The second is that they do not consider the whole end-user population but only a subset of it.
Pre-emptive Customer Experience Management systems collect Customer Experience data through network experience survey data gathered from the operator's network and IT systems rather than directly from the end-users. This method is faster and more efficient, as it does not need to wait for the customers' feedback and it takes the totality of the end-user data into account. Because of this, operators benefit from increased customer satisfaction rates, higher customer loyalty and therefore lower churn.
2 Background
This study used qualitative research methods, studying different industry cases and developments, conducting expert interviews, and comparing those results with the conceptual framework of CEM.
220 D. Delibes Rodriguez and P. Hart
Research Design
The reality is that at the moment only a few Mobile CSPs have implemented CEM systems, and some even lack a proper strategy for implementing one. Customer Experience Management systems are in continuous evolution, and there are different design and implementation approaches. Besides, those systems need to be integrated within the existing CSP's networks and information systems, which requires a high level of customization. Due to the small number of implementations, the wide variety of approaches and the complex customizations, competence in this area is concentrated in a very small number of experts.
Research Questions
The main research questions of the study were the following:
• What are pre-emptive CEM systems? How can Mobile CSPs benefit from CEM systems?
• What are the key differences between pre-emptive CEM systems and other, non-pre-emptive CEM systems such as VoC systems? What are the advantages and disadvantages of pre-emptive CEM systems compared to other types of systems?
• What are the processes, methodologies and metrics to take into account in the design of pre-emptive Customer Experience Management systems? What are the main design challenges of these systems?
• How is CEM introduced in CSPs' organizations?
Design Components
The data used in this study were mainly collected from case studies and interviews with experts. Those case studies and expert interviews came from the ICT industry, i.e. companies providing services to CSPs, system suppliers, telecommunications vendors, etc.
Sampling:
As this is a qualitative research study focused on expert interviews, the number of interviews and case studies was very limited; therefore sampling had little significance here. Data were collected from the selected interviewees and related case studies. Those case studies and groups of people came from:
• Companies providing services to CSPs (i.e. consulting services)
• System suppliers (i.e. companies that develop and supply monitoring systems for CEM)
• Telecommunications vendors (i.e. companies that develop Mobile CSPs' telecommunications infrastructure and also monitoring systems)
The groups of people, cases, etc. were selected from the same industry area and focus (criterion sampling), in order to allow a comparable case selection. Another criterion for the selection of the groups of people and persons for
Design of Pre-emptive Customer Experience Management Systems … 221
the interviews was whether they could provide the best and richest information (intensity sampling).
Intended comparison and generalization level:
As this is a qualitative research study, the generalization level here was very low.
Data Collection
As highlighted previously, experts in this field are scarce and solutions are not standardized. That is why qualitative research was selected as the research method for this study: the goal is to investigate a phenomenon that is not quantifiable, requiring instead an in-depth understanding of human behavior and the reasons that govern it.
Because of the type of research, unstructured questions were used, since we wanted the interviewees to provide their own views on the topic being researched. There was no questionnaire; instead, a framework on the topic of research was defined, within which the interviewee was allowed considerable freedom. The predominant questions were verbal and open ones.
As experts in this field are spread all over the world, data collection within this study was done in the form of telephone interviews or video conferences (e.g. using Skype). Recordings were made in order to avoid forgetting the answers, comments, etc. Notes were also taken, as part of the observation, to capture other relevant information such as the attitude of the interviewee during the interview, confidence in the subject and when answering the questions, commitment, openness, doubts, etc.
Data coming from documents were used to enhance the data collected via interviews. However, the information contained in documents might fail to address exactly the topic of the research. Interviews, in turn, are time-consuming and, if the data are not critically analyzed, may produce inaccurate results, e.g. bias in the interviews or irrelevant questions in the questionnaire.
Analysis
In order to categorize the different items, a coding system was developed to "label", classify and compare the data and the answers from the interviews. Triangulation was used, in the sense that not only the data from the interviews were used, but also the notes taken during those interviews, the data from the case studies, observations, etc. These data were also taken apart, coded and categorized with the aim of establishing comparisons. The coding system was given a hierarchical structure based on the hierarchical schema of the interview questions. To analyze all the data collected, the answers from each interviewee were mapped to the corresponding question, and each question was mapped to one of the research questions defined at the beginning of this research study. All of this was put into a table structure in Excel, and with the help of pivot tables it was easy to compare the answers from the different interviewees to the same questions and to observe the differences (Table 1).
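The pivot-table comparison described above can be mimicked, for illustration, with pandas; the column names and coded answers below are hypothetical, not taken from the study's actual coding scheme:

```python
import pandas as pd

# Hypothetical coded answers: one row per (interviewee, question) pair
answers = pd.DataFrame({
    "research_question": ["RQ1", "RQ1", "RQ2", "RQ2"],
    "question": ["Q1.1", "Q1.1", "Q2.1", "Q2.1"],
    "interviewee": ["A", "B", "A", "B"],
    "code": ["benefit:loyalty", "benefit:churn", "metric:KQI", "metric:NPS"],
})

# Pivot: questions as rows, interviewees as columns, coded answers as cells,
# so answers to the same question can be compared side by side
pivot = answers.pivot_table(index=["research_question", "question"],
                            columns="interviewee", values="code",
                            aggfunc="first")
print(pivot)
```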
4 Discussion
Those systems have to collect the data from all the end-users and their services in real time, and they have to be able to process all these data, likewise, in real time. CSPs need to analyze not only the performance data for all the subscribers, services and terminals but also behavioral data. Another big difference of pre-emptive systems compared to other systems such as VoC is that they collect a large amount of data on each subscriber directly from the systems and network components of the CSP. The data extracted from those systems are objective, quantifiable and therefore measurable. With these data, coming from each subscriber, it is possible to categorize users, build user profiles, group them according to different criteria, and benchmark them.
Pre-emptive CEM systems generate a huge amount of data that needs to be stored and post-processed. Therefore they require powerful architectures with Big Data components that are able to collect, in real time, hundreds of gigabits per second and to store several hundred terabytes of information. Without a flexible cloud-based architecture, or a scalable blade-based one, this could pose a problem for certain CSPs adopting CEM, due to lack of space availability.
The other big challenge in the design of those systems, apart from the huge amounts of data generated and the huge amount of physical storage needed, is where and how to collect the end-user-related data. In traditional network systems the end-user-related data were not extracted, because they were not considered important: network systems traditionally extracted data for network performance purposes rather than for end-user experience purposes. The difference is significant, because traditional systems aggregated the network data, whereas now each end-user session and its content matter. Therefore, special systems are needed that extract such data directly from the end-users. There are many systems in an operator that collect user-related data, such as CRM and billing systems. However, the level of detail of such systems is not high enough. Those systems are still important and can be integrated into the whole CEM system, but the content of the end-user sessions, location-related information, etc. are still needed. There are several ways to gather and collect these data, but the most common one is network probes, which collect the end-user-related information from the network interfaces of the CSP. The closer they are to the end-user, the more accurate and detailed the information they provide. Therefore, the best source of information is the end-user terminal itself. This is done with end-user "agents": software components installed on the end-user terminal. The agents collect the end-user-related information and send it through the network to the CEM systems, which gather all this information coming from the end-users. However, end-users need to agree to have such a software component on their terminal, collecting very detailed information about them. On top of that, these software components tend to decrease the processing capacity of the terminals and can potentially end up being themselves a source of bad quality of experience for the end-user. That is why probe systems are the most popular ones among CSPs for collecting data from the end-users: they are not intrusive, and CSPs do not need the explicit consent of the end-user.
4.3 Metrics
There is no unified or standard way to define the metrics in pre-emptive CEM systems. CEM is a transversal and cross-functional concept that involves the whole organization with all its departments. Furthermore, CEM systems are not standard ones; they are designed according to the strategy of each CSP. Therefore, the metrics will depend very much on what is intended with the CEM system and on which departments within the organization will use them. The metrics can be tailored to each department and to the type of operator. A CSP could use not just one set of metrics but several at the same time: there can be metrics used for benchmarking purposes, for business purposes, for network quality purposes, etc. These end-user-oriented metrics, built from the raw data (KPIs) of other system components, are normally KQIs, and they can also be mapped or transformed into more marketing-oriented ones, such as the NPS (Table 2).
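As an illustration of how raw network KPIs might be combined into a per-user KQI, consider the following sketch. The KPI names, targets and weights are purely hypothetical assumptions made for this example, not a standard mapping; in practice, as the text notes, each CSP defines its own:

```python
# Hypothetical sketch: aggregating raw KPIs into a per-user KQI in [0, 1].
# KPI names, targets and weights below are illustrative assumptions.

def video_kqi(throughput_mbps, setup_time_s, stall_ratio):
    """Score each KPI against an assumed target, then combine with weights."""
    k_thr = min(throughput_mbps / 5.0, 1.0)        # assumed target: 5 Mbps
    k_setup = max(1.0 - setup_time_s / 10.0, 0.0)  # assumed target: < 10 s setup
    k_stall = max(1.0 - stall_ratio / 0.1, 0.0)    # assumed target: < 10 % stalling
    return 0.5 * k_thr + 0.2 * k_setup + 0.3 * k_stall

score = video_kqi(throughput_mbps=4.0, setup_time_s=2.0, stall_ratio=0.02)
```

Such per-user scores could then be averaged per service, per region or per user group for benchmarking, or fed into marketing-oriented indicators.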
CEM is a holistic and E2E concept and therefore affects the whole CSP's organization. The introduction of CEM systems has to be done in a top-down approach, and for that it is essential to get top management involved. Without the CSP's top management involvement, it is not possible to introduce CEM in the organization. After getting the involvement of top management, the involvement of the different departments in the CSP's organization will be needed. The reason is that CEM is not a system aimed at the operations department only, as traditional NMS systems were. CEM is something for the whole organization and brings huge value also to the sales, marketing and customer service departments. The design of such systems has to take into account the different requirements of the different departments that will make use of the system. The same system can be used by different people in different departments by designing different views and different metrics for them.
Table 2 Difference between NMS and CEM systems (adapted from [14])

               Network management system (NMS)       Customer experience management system (CEM)
Orientation    Network oriented                      End-user oriented
Focus          Network element/system performance    Per user & per service performance (E2E)
Metrics        KPI dominated                         QoE KQI dominated
Analysis       Bottom-up approach                    Top-down approach
CEM is not a technical monitoring and troubleshooting tool for the CSP's networks. CEM is mainly a powerful business tool that helps top management and the whole organization to gain deeper insight into its customers and to understand them better: to understand their behavior, their interests and what they experience when using the CSP's services. It helps to find areas of improvement in the network, services, types of users, terminals, etc. In order to attract the attention of top management, these advantages have to be highlighted. CEM helps retain the CSP's customers by increasing their loyalty through improving their experience of, and satisfaction with, the CSP's services.
CEM is a transversal and cross-functional concept. It breaks the traditional silos of the operators. CEM is a concept that the whole organization should be aware of and use. If an operator wants to become more customer-centric, then the whole organization has to move in that direction. However, this transformation, when introduced inside the operator, should not be presented as a revolutionary concept. This is a key issue in introducing CEM in a CSP. CEM does not replace or change the existing departments or the existing systems and tools. It should not be seen as revolutionary, in the sense that it does not change the existing organization, systems, etc., but should rather be seen as an evolution or transformation concept. The goal of CEM is not to change the existing setup, and such change is not a requirement for its introduction. CEM is a layer above, in the organization, that makes the whole organization become customer-centric without changing the existing setup. The traditional NMS monitoring systems will still be used for network performance, configuration and fault management. In addition, it is very likely that during the introduction of CEM the NMS will be integrated into the CEM system. CEM is not a standard approach, and it will not arouse the interest of all CSPs. Those CSPs that are more quality- and customer-oriented will have the highest chances of implementing it. Other CSPs that focus their strategy on low-price services might not show much interest. Before trying to introduce CEM in a CSP, it is very important to understand the business strategy and market position of the operator: where it is now and where it wants to be in the future.
By now, CSPs are becoming aware of the benefits of CEM and are slowly implementing it. However, the large implementation costs, the complexity of making its benefits visible, especially at the very beginning, and the fact that its holistic approach has to be understood by the whole company and not only by a few departments, make the introduction of CEM move very slowly. Another problem is that the large network equipment vendors have tried to introduce CEM systems using a bottom-up approach instead of the top-down approach that those systems require. However, as the author of this paper believes, together with the majority of the interviewed experts on the subject, the fact that the benefits of CEM are difficult to make visible does not mean that the benefits are not there. One way or another, CEM will finally be introduced, at least in the most customer-aware operators. The benefits of CEM are, and will be, visible in the future not only for the operators but also for their customers. The ultimate goal is to
226 D. Delibes Rodriguez and P. Hart
increase the satisfaction of the end-users and get their loyalty. The aim of the author
of this paper was to find out what the whole concept of CEM is and what the
benefits are (Fig. 1).
5 Conclusion
CEM is not only a new technical system for MBB CSPs; rather, it is a whole
business concept that MBB CSPs need to introduce into their organizations in
order to become more customer-centric. The aim of CEM is to increase loyalty and
reduce churn by increasing customer satisfaction, which in turn comes from
improving the customers' perceived experience of the CSP's services.
Pre-emptive CEM systems need to be proactive and real-time rather than
reactive and slow. The goal is to find problems even before the end-users
notice them; this is the main difference compared to VoC systems.
The success of pre-emptive CEM systems lies in gathering detailed end-user
information and the content of their sessions. This gives CSPs a deeper insight
into their customers and enables them to understand them better. The closer
those systems get to the end-user, the better and more accurate the information
they obtain. CEM systems can be integrated with other systems such as CRM,
billing, etc. However, in order to obtain end-user content, probe systems are
the most useful, because they gather a great deal of information from the
end-users without being intrusive. The main problem of pre-emptive CEM systems
is the huge amount of data storage they need; the technical architecture of the
systems must be flexible and scalable to overcome space limitations in CSPs.
Design of Pre-emptive Customer Experience Management Systems … 227
The design and introduction of such pre-emptive CEM systems have to be
top-down rather than bottom-up. To introduce such a system in a CSP, the
attention and commitment of top management are essential, because CEM systems
are transversal and cross-functional and involve many different departments in
the organization. There is no standard way to design a CEM system; it depends
very much on the business strategy of the CSP, and the metrics and views of the
system have to be customized for the different needs of the organizations and
their departments. Finally, CEM is not a revolution for CSPs but rather an
evolution: CEM systems do not replace or change the existing organization,
tools, etc., but rather create a new layer on top that helps CSPs become more
user-centric and therefore increase their revenue.
Economic Denial of Sustainability
Mitigation in Cloud Computing
Keywords: Cloud security · Service level agreement · Economic denial of sustainability · Intrusion prevention · Attack mitigation
1 Introduction
Nowadays, more and more applications are hosted by Cloud Service Providers
(CSPs) and offer their services to end-users. By exploiting cloud elasticity,
such applications are able to self-scale, increasing or reducing the amount of
resources they need depending on the end-users' requests. On the other hand, due
to their openness to the Internet, applications are prone to cyber attacks, such
as Distributed Denial of Service (DDoS), which aim at reducing the service's
availability and performance by exhausting the resources of the service's host
system (including memory, processing resources, and network bandwidth) [1].
As presented by Francois et al. [2], a resource competition approach can be
adopted for mitigating DDoS attacks against cloud applications and services:
additional resources are either acquired in advance or acquired dynamically on
demand, in order to face the load peaks caused by DDoS attacks. However, such
resources are not free, and a cyber attack can make running the service
economically prohibitive. Such a scenario is called 'Economic Denial of
Sustainability' (EDoS) [3, 4]. The inability of the cloud service infrastructure
to diagnose the cause of a service performance degradation (i.e., whether it is
due to an attack or to a legitimate overload) can be considered a security
vulnerability, which attackers can exploit in order to exhaust all the cloud
resources allocated to satisfy the negotiated Quality of Service (QoS).
The challenge in this research area is to mitigate the effects of an attack by
minimizing the number of fraudulent malicious sources. In previous papers [5],
we presented a strategy to generate patterns able to perform EDoS attacks
against cloud applications while exhibiting a stealthy behavior that cannot be
distinguished from a common user's sequence of requests. Therefore, specific
detection strategies should be defined, able to analyze each single client flow
in order to differentiate legitimate users from malicious ones that consume a
significant volume of resources in a very short time. Limiting the impact of
each individual malicious source reduces the overall effect of an EDoS attack.
However, considering the large number of clients (several thousand) used during
such assaults, and the difficulty of detecting the anomalous behavior associated
with each attack flow, a long time may elapse before the attack can be stopped.
Therefore, in this paper we propose a solution for mitigating the economic
effects of EDoS attacks against cloud applications. Although we are not able to
clearly identify the sources of an EDoS attack, the proposed approach can be
used to mitigate the effects of such an attack while all the sources are being
identified and the attack is being stopped by a more sophisticated
countermeasure. The proposed approach has the side effect of reducing the
quality of the services offered to end-users, but it guarantees the economic
sustainability of the provided services.
The rest of the paper is organized as follows. Section 2 presents the related
work in the field of detection of EDoS attacks against cloud applications.
Section 3 describes the EDoS characteristics. The proposed mitigation approach
is presented in Sects. 4 and 4.1, and Sect. 4.2 describes the implementation of
the adopted framework. Section 5 presents a short summary and future work.
2 Related Work
Several works have proposed techniques for mitigating EDoS attacks, which can
inflict significant costs on legitimate customers.
In order to reduce the number of application-level EDoS connection requests,
HinKhor and Nakao [6] have proposed a mitigation mechanism that requires a proof
of work from the end-user before providing new resources to the client.
According to this mechanism, end-users have to solve a crypto-puzzle, which is
then used by the server to encrypt the communication channel. However, such a
mechanism can be exploited by attackers: they can send a huge number of requests
for puzzles without solving them, which can lead to an exhausting DDoS attack
against the puzzle server. Moreover, additional defenses, such as sophisticated
filters, should be adopted to verify whether the incoming requests come from a
legitimate user or are generated by bots [7].
Several works have proposed overlay networks to hide the location and the
characteristics of the target application and thus prevent DoS attacks [8, 9].
However, due to the indirection, overlay routing can increase the end-to-end
latency.
Chonka et al. [10] have proposed a service-oriented traceback architecture to
protect against XML-DDoS attacks. It is used to identify the sources of the
attack and filter them out; the cloud protector is a trained back-propagation
neural network. However, attackers could evade this mechanism by launching their
attack through zombie clients with spoofed IP addresses.
The most practical detection approach consists in reviewing bills over time to
determine whether they contain fraudulent consumption outside an expected range.
To this end, Amazon [11] offers a Web service to monitor the provided cloud
resources. Using this service, customers can define thresholds to limit the
scaling-up of their cloud resources.
As previously described, in this paper we focus on a set of attacks that rely on
typical features of a cloud environment: on-demand self-service and resource
pooling. In particular, according to the notion of cloud elasticity, Cloud
Service Customers (CSCs) are enabled to dynamically vary the amount of resources
they need. A clear example of this feature is the self-scaling solutions offered
by many Infrastructure-as-a-Service (IaaS) cloud providers: the CSC defines a
simple policy, and the CSP automatically increases the amount of resources
available to the end-users, without any additional effort. A side effect of this
behavior is that, since the resources are paid for according to their usage, a
wrong policy may lead to an uncontrollable increase in costs. EDoS attacks aim
at exploiting exactly this feature, making the service economically
unsustainable.
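The cost amplification underlying EDoS can be sketched as follows. This is a hypothetical illustration: the prices, per-instance capacity, and function names are our assumptions, not figures from any specific CSP.

```python
# Hypothetical illustration of EDoS cost amplification: a naive self-scaling
# policy adds instances to match the offered load, so a sustained request
# flood translates directly into billing cost. All figures are assumed.

HOURLY_PRICE = 0.10          # assumed price per instance-hour
REQS_PER_INSTANCE = 1000     # assumed capacity of one instance (requests/hour)

def instances_needed(requests_per_hour: int) -> int:
    """Naive elasticity: always scale to match the offered load."""
    return -(-requests_per_hour // REQS_PER_INSTANCE)  # ceiling division

def daily_cost(requests_per_hour: int) -> float:
    return instances_needed(requests_per_hour) * HOURLY_PRICE * 24

legit = daily_cost(5_000)              # normal load: 5 instances
attacked = daily_cost(5_000 + 95_000)  # EDoS flood on top: 100 instances
print(legit, attacked)                 # 12.0 vs 240.0: a 20x daily cost increase
```

The service stays available under attack, which is exactly why the damage is economic rather than a visible outage.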
232 M. Ficco and M. Rak
In order to reduce the effect of EDoS attacks, we propose an approach based on
the adoption of Service Level Agreements (SLAs), combined with an Intrusion
Prevention System (IPS).
An SLA is an agreement between the CSC and the CSP [17]. It specifies the
quality of the delivered services and states the duties of both parties.
Moreover, the SLA describes both the actions a CSP has to perform when a
guarantee term is not respected and the penalties to be paid. The SLA states the
guarantees through a set of Service Level Objectives (SLOs), which
quantitatively state the level of quality to be granted. In our case study, we
focus on SLAs offering an SLO that states the availability of the service, i.e.,
the ability of a software application to be in a state to provide the required
function at a given instant in time.
Due to the adoption of SLAs, the problem can be redefined as follows: the cloud
application offers its services to end-users and grants them a fixed
availability (e.g., it guarantees that the service will be available for 23 h
every day). Moreover, the cloud application has a penalty associated with its
service: for every day during which the granted SLO is not respected, i.e., the
end-users are not able to access the service, a fixed penalty must be paid to
the CSC.
The introduction of SLAs has the effect of clarifying the trade-off the CSP has
to face: on the one hand, the penalties to be paid to the CSC; on the other
hand, the cost of acquiring the resources. Therefore, it is possible to build a
model that takes into account both the estimated cost of scaling (i.e., the cost
of the additional resources) and the cost of not scaling (i.e., the penalties),
enabling the cloud application to make an acceptable choice without penalizing
the economic capabilities of the system.
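The trade-off just described can be sketched, under assumed figures, as a simple comparison between the cost of the additional resources and the expected penalties (the function and parameter names are ours, not from the paper):

```python
# A minimal sketch of the scale-vs-penalty trade-off model: compare the
# estimated cost of scaling up with the penalty owed to the CSC when the
# availability SLO would otherwise be missed. All figures are assumed.

def should_scale(extra_resource_cost: float,
                 penalty_per_day: float,
                 days_slo_missed_without_scaling: int) -> bool:
    """Scale up only if the added resources cost less than the expected penalties."""
    expected_penalty = penalty_per_day * days_slo_missed_without_scaling
    return extra_resource_cost < expected_penalty

# If scaling costs 50 but skipping it would mean 3 days of missed SLO at a
# penalty of 30/day, scaling is the economically acceptable choice.
print(should_scale(50.0, 30.0, 3))   # True  (50 < 90)
print(should_scale(120.0, 30.0, 3))  # False (120 >= 90)
```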
The mitigation approach consists in splitting the end-users into classes on the
basis of their IP addresses and of the penalty cost defined by the SLA. Then,
access to the service is denied to the class of IP addresses that most likely
contains the largest number of malicious sources. This approach allows the
costs of the
greater than the cost necessary to add new resources. In particular, assuming
that CPUbase is the base price charged for the smallest amount of CPU time
allocation, and RCPU is the CPU capacity added by the new allocation, CPUadded
is the fee charged if the system scales up. If the class of IP addresses to be
denied cannot be identified, then the system is scaled up.
We activate the algorithm only if the cost of the resources is higher than half
of (cost_paid_by_user) × (num_users). Moreover, we adopt the following scaling
policy: we acquire new resources only when the CPU_usage and MEM_usage of all
nodes have been higher than 95 % over the last 10 min and
Timeout_Services_Minute[10, 0] is higher than Timeout_Services_Minute[20, 10].
In other words, we scale only if all the resources are busy and the services are
about to become unavailable.
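A minimal sketch of this scaling policy follows. Only the metric names are taken from the text; the data structures and the surrounding code are hypothetical assumptions.

```python
# Sketch of the scaling trigger described above: acquire new resources only
# when every node is saturated (CPU and memory above 95 %) AND service
# timeouts in the last 10 minutes exceed those of the previous 10 minutes
# (i.e., Timeout_Services_Minute[10, 0] > Timeout_Services_Minute[20, 10]).

def should_acquire_resources(nodes, timeouts_last_10, timeouts_prev_10):
    """Scale only if all resources are busy and timeouts are trending up."""
    all_saturated = all(n["cpu"] > 0.95 and n["mem"] > 0.95 for n in nodes)
    timeouts_rising = timeouts_last_10 > timeouts_prev_10
    return all_saturated and timeouts_rising

nodes = [{"cpu": 0.97, "mem": 0.96}, {"cpu": 0.98, "mem": 0.99}]
print(should_acquire_resources(nodes, timeouts_last_10=12, timeouts_prev_10=7))  # True
print(should_acquire_resources(nodes, timeouts_last_10=5, timeouts_prev_10=7))   # False
```

Requiring both conditions keeps an EDoS flood from triggering scale-up through CPU load alone.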
5 Conclusions
References
1. Ficco, M., Tasquier, L., Di Martino, B.: Interconnection of federated clouds. In: Intelligent
Distributed Computing VII, Studies in Computational Intelligence, 2014, vol. 511, pp. 243–248
2. Francois, J., Aib, I., Boutaba, R.: Firecol, a collaborative protection network for the detection
of flooding DDoS attacks. IEEE/ACM Trans. Networking 20(6), 1828–1841 (2012)
3. Baig, Z.A., Binbeshr, F.: Controlled virtual resource access to mitigate economic denial of
sustainability (EDoS) attacks against cloud infrastructures. In: Proceedings of the International
Conference on Cloud Computing and Big Data, Dec 2013, pp. 346–353
4. Kumar, M.N., Sujatha, P., Kalva, V., Nagori, R., Katukojwala, A.K., Kumar, M.: Mitigating
economic denial of sustainability (EDoS) in cloud computing using in-cloud scrubber service.
In: Proceedings of the 4th International Conference on Computational Intelligence and
Communication Networks, 2012, pp. 535–539
5. Ficco, M., Rak, M.: Stealthy denial of service strategy in cloud computing. IEEE Trans. Cloud
Comput. 13(4), 737–751 (2014)
6. HinKhor, S., Nakao, A.: sPoW: On-demand cloud-based eDDoS mitigation mechanism. In:
Proceedings of the 5th Workshop on Hot Topics in System Dependability, 2009, pp. 1–6
7. Sqalli, M.H., Al-Haidari, F., Salah, K.: EDoS-shield—a two-steps mitigation technique against
EDoS attacks in cloud computing. In: Proceedings of the 4th IEEE International Conference
on Utility and Cloud Computing, 2011, pp. 49–56
8. Beitollahi, H., Deconinck, G.: Fosel: Filtering by helping an overlay secure layer to mitigate
dos attacks. In: Proceedings of the 7th IEEE International Symposium on Network Computing
and Applications (NCA), July 2008, pp. 19–28
9. Ping, D., Nakao, A.: DDoS defense as a network service. In: Proceedings of the IEEE Network
Operations and Management Symposium (NOMS), Apr 2010, pp. 894–897
10. Chonka, A., Xiang, Y., Zhou, W., Bonti, A.: Cloud security defence to protect cloud
computing against HTTP-DoS and XML-DoS attacks. Int. J. Netw. Comput. Appl. 34, 1097–
1107 (2011)
11. Amazon CloudWatch. http://aws.amazon.com/cloudwatch/. Accessed May 2014
12. Yu, S., Tian, Y., Guo, S., Oliver Wu, D.: Can we beat DDoS attacks in clouds? IEEE Trans.
Parallel Distrib. Syst. 25(9), 2245–2254 (2014)
13. Ficco, M., Rak, M.: Intrusion tolerant approach for denial of service attacks to web services.
In: Proceedings of the 1st International Conference on Data Compression, Communications
and Processing (CCP), June 2011, pp. 285–292
14. Ficco, M., Rak, M.: Intrusion tolerance as a service: a SLA-based solution. In: Proceedings of
the 2nd International Conference on Cloud Computing and Services Science (CLOSER), Apr
2012, pp. 375–384
15. Ficco, M., Rak, M.: Intrusion tolerance of stealth DoS attacks to web services. In: Information
Security and Privacy, LNCS, vol. 376, pp. 579–584, 2012
16. AlEroud, A., Karabatis, G.: Toward zero-day attack identification using linear data
transformation techniques. In: Proceedings of the IEEE 7th International Conference on
Software Security and Reliability (SERE), 2013, pp. 159–168
17. Amato, A., Venticinque, S.: Multi-objective decision support for brokering of cloud SLA. In:
Proceedings of the 27th International Conference on Advanced Information Networking and
Applications Workshops, 2013, pp. 1241–1246
18. Ficco, M., Rak, M., Di Martino, B.: An intrusion detection framework for supporting SLA
assessment in cloud computing. In: 4th International Conference on Computational Aspects of
Social Networks (CASoN 2012), Sao Carlos, Brazil, Nov 2012, pp. 244–249
19. Ficco, M.: Security event correlation approach for cloud computing. J. High Perform. Comput.
Networking 7(3), 173–185 (2013)
20. Joshi, B., Vijayan, A.S., Joshi, B.K.: Securing cloud computing environment against DDoS
attacks. In: Proceedings of the International Conference on Computer Communication and
Informatics (ICCCI), 2012, pp. 1–5
21. Coppolino, L., D’Antonio, S., Formicola, V., Romano, L.: Enhancing SIEM technology to
protect critical infrastructures. In: Critical Information Infrastructures Security, LNCS, vol.
7722, pp. 10–21 (2013)
Brokering of Cloud Infrastructures Driven
by Simulation of Scientific Workloads
1 Introduction
Until 20 years ago, users of computing environments could count on only a
limited number of resources, which did not allow the resolution of problems on a
large scale. As a result, also due to the high costs of acquiring and managing
large computing systems, the idea spread of using heterogeneous resources,
located in different sites, which are aggregated to form large distributed
computing centers. Computation has therefore become increasingly linked to the
concepts of collaboration and resource sharing, and we have seen the emergence
of new computing paradigms and protocols that allow interaction between
distributed resources. Generally speaking, a Grid is a hardware and software
infrastructure that makes it possible to take advantage of a large amount of
aggregated resources, providing high computing power and storage. These
resources are typically heterogeneous and geographically distributed, and they
are accessed through abstract, unitary interfaces that hide the complexity of
the multi-level infrastructure. Nevertheless, the main Grid model is static, so
users cannot add or modify computational resources according to their needs, nor
is it possible to dynamically modify the resources on the basis of the real
system workload. Another, more recent paradigm of distributed computing is Cloud
Computing, which first spread in areas other than the strictly scientific (such
as Amazon and e-commerce). From the point of view of access to the computing
infrastructure, Cloud Computing can be seen as an evolution of the Grid, since
it uses web-based technologies and hardware virtualization as the basis of a
distributed computing infrastructure. Cloud Computing also provides various
levels of abstraction to identify resources, exposed through service models such
as Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), and
Infrastructure-as-a-Service (IaaS). Cloud Computing is attracting new
applications, such as scientific ones, that have previously benefited from
distributed environments like Grids. For these reasons, in this paper we propose
an approach that provides the flexibility of Cloud Computing while sparing users
the need to learn new resource access and usage models, by combining the Grid
and Cloud paradigms.
2 Related Work
Both Grid and Cloud are technologies that have been conceived to provide users
with handy computing resources according to their specific requirements. The
Grid was designed with a bottom-up approach [16]. Its goal is to share hardware
or software among different organizations by means of common protocols and
policies. The idea is to deploy interoperable services in order to allow access
to physical resources (CPU, memory, mass storage, …) and to the available
software utilities. Users get access to a real machine. Grid resources are
administered by their owners, and authorized users can invoke Grid services on
remote machines without paying and without service-level guarantees. A Grid
middleware provides a set of APIs (actually services) to program a
heterogeneous, geographically distributed system. Cloud technology, on the other
hand, was designed using a top-down approach. It aims at providing its users
with a specific high-level functionality: a storage, a computing platform, a
specialized service. Users get virtual resources from the Cloud; the underlying
hardware/software infrastructure is not exposed. The only information the user
needs to know is the QoS of the services he is paying for. Bandwidth, computing
power, and storage are the parameters used for specifying the QoS and for
billing. Cloud users ask for a high-level functionality (Service, Platform,
Infrastructure), pay for it, and become owners of a virtual machine. From a
technological point of view, virtualization is exploited to build an insulated
environment, which is configured to meet the users' requirements and is
exploited for easy reconfiguration and backup. A single enterprise owns the
Cloud platform (software and underlying hardware), whereas customers become
owners of the virtual resources they pay for. Cloud supporters claim that the
Cloud is easy to use [16], is scalable [13], and always gives users exactly what
they want.
On the other hand, the Grid is difficult to use, does not give performance
guarantees, is used by narrow communities of scientists to solve specific
problems, and does not actually support interoperability [16]. Grid fans answer
[8] that Grid users do not need a credit card; that around the world there are
many examples of successful projects; and that a great number of computing
nodes, connected across the net, execute large-scale scientific applications,
addressing problems that could not be solved otherwise. Grid users can use a
reduced set of functionalities and develop simple applications, or they can
obtain a theoretically infinite amount of resources. As always, the truth lies
somewhere in the middle. Some users prefer to pay because they need a specific
service with strict requirements and a guaranteed QoS; the Cloud can provide
this. Many users in the scientific community look for some sort of
super-computing architecture to run intensive computations that process huge
amounts of data, and do not care about getting a guaranteed performance level;
the Grid can provide that. But even on this last point there are divergent
opinions. To understand why Grids and Clouds should be integrated, we have to
start by considering what users want and what these two technologies can
provide. Then we can try to understand how Cloud and Grid can complement each
other, and why their integration is the goal of intensive research activities
[15]. We know that a supercomputer runs faster than a virtualized resource: for
example, an LU benchmark on EC2 (the Cloud platform provided by Amazon) runs
slower, and some overhead is added to start the VMs [9]. On the other hand, the
probability of executing an application within a fixed time on a Grid resource
depends on many parameters and cannot be guaranteed. As experimented in [9], if
400 ms is the time EC2 requires to execute an LU benchmark, the probability of
obtaining a Grid resource in less than 400 ms is very low (34 %), even though
the same benchmark can take less than 100 ms to complete. If you want to get
your results as soon as possible, you are adopting the Cloud end-user
perspective; if you want to look for the optimum resources to solve the problem,
overcoming the boundaries of a single enterprise, you are using the Grid
perspective, which aims at optimizing resource sharing and system utilization.
The integration of Cloud and Grid, or at least their integrated utilization, has
been proposed in [14], since there is a trade-off between application turnaround
and system utilization, and it is sometimes useful to choose the right
compromise between them. Some issues to be investigated have been pointed out:
the integration of virtualization into existing e-infrastructures; the
deployment of Grid services on top of virtual infrastructures; the integration
of Cloud-based services in
242 A. Amato et al.
3 Problem Statement
The first technique we are going to integrate simulates the execution of a
scientific workload over a distributed cluster of heterogeneous computing
resources. The HyperSim simulator was developed for Grid environments [21]. The
chosen simulator is highly customizable, which allows us to specify quite
precisely the details of the Grid environment we want to simulate. Another
benefit of using this simulator is the statistical information it provides with
the results of each execution, which is useful for comparing the various
job-scheduling solutions proposed in this project; this is possible because the
repeatability of the output for the same values of the initial parameters is
guaranteed [22]. The aim is to generate Grids of different sizes and
characteristics using the simulator [20], which is very useful in practice for
studying the efficiency of different types of algorithms. For the sake of
exemplification, we used the simulator to obtain different grid-size scenarios
for testing the performance of heuristics and meta-heuristics, such as genetic
algorithms, for scheduling and resource allocation in Grid systems. Four
scenarios are considered, according to the grid size (small: 32 hosts/512 tasks;
average: 64 hosts/1024 tasks; large: 128 hosts/2048 tasks; very large: 256
hosts/4096 tasks). The web interface, available at
http://weboptserv.lsi.upc.edu/WEBGRID/, offers a simple and friendly way to
introduce step by step the parameters used for solving a problem instance; with
this application it is possible to remotely execute several programs that solve
the specified problem. The scheduler is an important functional component of any
distributed system; in particular, schedulers are central to large-scale
distributed systems such as Grid systems. The purpose of the schedulers is to
efficiently and
The scheduling problem type that can be solved using this web application is
defined as an Independent Job Scheduling problem, in which tasks are processed
in batch mode. The main characteristic of this kind of scheduling in distributed
systems is the requirement that tasks, arranged in batches, be executed
independently on the resources [19]. Independent scheduling is very suitable to
address in Grid systems, especially when the security assurance condition has to
be verified. The absence of dependencies among tasks makes it easier to pre-empt
or re-schedule them, and the resource characteristics can be better exploited
due to the variation in the computation grain of the independent tasks. The
problem formulation in this approach is based on the Expected Time to Compute
(ETC) matrix model, in which an instance is defined by the following input data:
• The workload vector, which defines the computational loads of the tasks in the
batch (usually in millions of instructions);
• The computing capacity of each machine (usually expressed in millions of
instructions per second, MIPS);
• The estimation of the prior load of each available machine (expressed in
terms of the ready times of the machines);
• The ETC matrix, which defines the estimations of the times needed for the
completion of the tasks on the machines (each ETC entry is defined for a given
task–machine pair). The size of the ETC matrix is (number of tasks) × (number
of machines).
There are two basic methods of solution (schedule) representation in Grids,
namely direct representation and permutation-based representation.
In the direct representation, each schedule is defined as a schedule vector x
whose coordinates are the numbers of the machines to which the particular tasks
are assigned, i.e., x = [x1, …, x_number of tasks], where xi denotes the number
of the machine to which task i is assigned. An example of a schedule for 4
machines and 9 tasks is x = [2, 3, 1, 1, 3, 4, 2, 2, 1]. In the
permutation-based representation, for each machine there is a sequence of the
tasks assigned to that machine; the tasks in each sequence are sorted in
increasing order of their completion times. All the task sequences are then
concatenated into one global vector, which is in fact a permutation of the tasks
over the machines. In this representation, some additional information about the
number of tasks assigned to each machine is required (an additional vector must
be kept).
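As an illustration, the direct representation can be evaluated against a small, invented ETC instance (all the numbers below are ours, not taken from the paper):

```python
# A small worked instance of the ETC model with the direct representation.
# ETC[t][m] is the expected time to compute task t on machine m; x[t] is the
# machine (1-based, as in the example above) to which task t is assigned.

ETC = [
    [4.0, 2.0, 6.0],   # task 0
    [3.0, 5.0, 1.0],   # task 1
    [2.0, 2.0, 2.0],   # task 2
    [8.0, 4.0, 3.0],   # task 3
]
ready_times = [1.0, 0.0, 2.0]   # estimated prior load of each machine
x = [2, 3, 1, 2]                # direct representation: 3 machines, 4 tasks

def completion_times(etc, ready, schedule):
    """Per-machine completion time: ready time plus the ETC of assigned tasks."""
    finish = list(ready)
    for task, machine in enumerate(schedule):
        finish[machine - 1] += etc[task][machine - 1]
    return finish

finish = completion_times(ETC, ready_times, x)
print(finish)        # [3.0, 6.0, 3.0]
print(max(finish))   # makespan = 6.0
```

A scheduling heuristic would search over vectors x to minimize an objective such as this makespan.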
Using this web application, it is possible to solve the scheduling problem
expressed as a problem of optimal resource utilization from the Grid users'
perspective, under the additional scheduling criteria of security and task
abortion. The Grid scheduling problem is formalized as a non-zero-sum game of
the Grid users, who try to find the best assignment of their batches of tasks to
resources. The users' cost functions are interpreted as the joint costs of the
secure execution of their tasks, the costs of possible task abortions (as a
result of machine unreliability and Grid dynamics), and the costs of the
utilization of resources. The game cost function, which is defined as the
objective of the scheduling, is then minimized at the global and local (user)
levels. To define the Grid users' game, the following setting has to be
specified:
• The number of Grid users, which is the number of players in the game;
• The users' task sets (pools), which are the players' decision variables in
the game; the total number of tasks of all users is the total number of tasks
in a given batch;
• The users' cost functions.
The cost function defined for each Grid user is the sum of the four following
components:
• The task execution cost, calculated as the average completion time of the
player's tasks on the machines to which they are allocated;
• The resource utilization cost, calculated for each Grid user as the average
idle time of the machines on which his tasks are executed;
• The security cost, defined as the average time wasted as a result of task
failures caused by high security requirements (i.e., the security assurance
condition is not satisfied);
• The task abortion cost, defined as the average time wasted as a result of
task abortions on the machines, caused by Grid dynamics or by special policies
of the resource owners.
Each component of the players' cost functions can be activated or deactivated by
the web application user. This means that it is possible to compose several
versions of the players' cost functions, using only the components necessary to
solve the specified problem.
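The composition of the cost function described above can be sketched as follows; the component values and dictionary-based structure are illustrative assumptions, not the web application's actual implementation.

```python
# Sketch of a composable player cost function: each Grid user's cost is the
# sum of only those components the web-application user has activated.

def user_cost(components: dict, active: set) -> float:
    """Sum only the activated components of a player's cost function."""
    return sum(value for name, value in components.items() if name in active)

player = {
    "execution": 120.0,    # avg completion time of the player's tasks
    "utilization": 35.0,   # avg idle time of the machines used
    "security": 10.0,      # avg time wasted on security-related failures
    "abortion": 5.0,       # avg time wasted on aborted tasks
}

print(user_cost(player, {"execution", "utilization"}))           # 155.0
print(user_cost(player, {"execution", "security", "abortion"}))  # 135.0
```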
The brokering problem consists of choosing the best proposal among a number of
offers received from different providers who answer the same call [2]. To reach
a decision about the best proposal, it is necessary to define the user's
requirements and goals; these allow the creation of evaluation criteria that
contain mandatory requirements and that check and evaluate multiple alternatives
with relative values, so building complex weighted-sum functions depending on
criteria derived from rules stated by the user [4].
The broker collects a number of proposals described in a vendor-agnostic way
and chooses the best one(s) according to the brokering rules. The Call For
Proposal (CFP) is the document prepared by the customer to specify his
requirements in terms of the list of resources to be acquired and the
rules/policies to be used for defining the resource brokering strategy.
As shown in Fig. 1, the CFP is composed of two sections. The first is the
SLA Template, described according to the XML SLA@SOI schema presented in [17].
The second is the Broker Policy, containing a set of rules, to be enforced by the
brokering algorithm, in order to choose among the different proposals offered by
the Cloud market [2]. In particular, the SLA template, described in [3], is
composed of Service Properties, which define the technical requirements of the
user’s applications; the corresponding desired Service Levels, such as
availability, reliability and performance; and the Terms of Service, which include
the contract duration, data location, billing frequency, etc.

Brokering of Cloud Infrastructures Driven by Simulation … 247

Fig. 1 Broker
The Broker Policy sets constraints and objectives on multiple parameters, such as
the best price per time unit, the greatest number of cores, the best-accredited
provider or the minimum accepted availability [5]. As different proposals will come
from Cloud vendors, the broker’s main task is to choose the best proposal according
to the policies specified by the customer, such as best price per time unit,
maximum amount of memory, service availability and so on. In order to consistently
develop a Cloud service broker, we propose a model to formulate the application
requirements into constraints, which can be architectural constraints or
service-level constraints, and which can be divided into hard constraints and soft
constraints. The user selects the properties that characterize the specific class
of chosen service; the service levels in terms of performance, availability, etc.;
the cost he intends to pay; and the accreditation of the provider, which represents
its reputation as measured by the feedback of other users or by some rating agency.
For each parameter the user optionally chooses some constraints, defines whether
they are hard or soft, and specifies zero or more objective functions to be
optimized. The rules are chosen by selecting the SLA parameters and setting the
required options using a user-friendly graphical interface.
Simple constraint rules are in Table 1.
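The hard/soft distinction above can be illustrated with a short sketch: hard constraints filter proposals out entirely, while soft constraints only contribute to a proposal's score. All names, predicates and proposal values here are hypothetical, not the broker's actual rule syntax:

```python
# Illustrative hard/soft constraint handling for broker proposals.
# hard/soft are lists of (parameter, predicate) pairs; values are assumptions.

def broker_choice(proposals, hard, soft):
    """Filter by hard constraints, then rank feasible proposals by the
    number of soft constraints they satisfy."""
    feasible = [p for p in proposals
                if all(pred(p[param]) for param, pred in hard)]
    def score(p):
        return sum(1 for param, pred in soft if pred(p[param]))
    return max(feasible, key=score) if feasible else None

proposals = [
    {"price": 0.10, "cores": 8,  "availability": 0.99},
    {"price": 0.05, "cores": 4,  "availability": 0.95},
    {"price": 0.20, "cores": 16, "availability": 0.999},
]
hard = [("availability", lambda v: v >= 0.99)]   # mandatory requirement
soft = [("price", lambda v: v <= 0.15),          # preferences
        ("cores", lambda v: v >= 8)]
best = broker_choice(proposals, hard, soft)
```

Here the second proposal is discarded by the availability hard constraint, and the first wins because it satisfies both soft constraints.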
248 A. Amato et al.

Of course, not every constraint can be applied to every SLA parameter. Given a
set of constraints, there may be several contrasting objectives (e.g. minimization
of the cost and maximization of the resources), so a multi-objective approach is
necessary to find the Pareto front (the set of all solutions that are considered
optimal in multi-criteria optimization). After that, an a-posteriori approach is
used that delivers to the user the set of Pareto-optimal solutions, among which the
user will choose the preferred one [1]. Nevertheless, in order to simplify the
usage of the brokering service, we allow multiple objectives to be grouped
according to the kind of SLA parameter: Service Properties, Terms of Service or
Service Levels. We also define the Provider Reputation as an additional brokering
parameter, which is outside the SLA Template but is known to the broker. To compute
the overall score, we map the domain of each SLA parameter and allow a percentage
relevance to be assigned to each category.
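A minimal a-posteriori sketch of the Pareto-front computation just described, for two contrasting objectives (minimize price, maximize cores); the proposal values are illustrative:

```python
# Compute the Pareto front of proposals for two contrasting objectives:
# minimize price and maximize cores. Values are illustrative only.

def dominates(a, b):
    """a dominates b if it is no worse on both objectives and strictly
    better on at least one."""
    return (a["price"] <= b["price"] and a["cores"] >= b["cores"]
            and (a["price"] < b["price"] or a["cores"] > b["cores"]))

def pareto_front(proposals):
    """Keep only proposals not dominated by any other proposal."""
    return [p for p in proposals
            if not any(dominates(q, p) for q in proposals if q is not p)]

proposals = [{"price": 0.05, "cores": 4},
             {"price": 0.10, "cores": 8},
             {"price": 0.12, "cores": 8},   # dominated: same cores, higher price
             {"price": 0.20, "cores": 16}]
front = pareto_front(proposals)
# The user then chooses the preferred trade-off from the front (a posteriori).
```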
6 Integrated Approach
Our methodology works in two steps: Simulation and Brokering. In the
Simulation step the user needs to describe the application workload in terms of a
statistical characterization of job inter-arrival times and numbers of
instructions. The user also needs to set the final objective in terms of job
completion time or queue time. In the Brokering step the user needs to map the
abstract resources configured by the simulation tool to Cloud virtual resources.
The broker will take as input the number and type of such resources, and possibly
other constraints and objectives. The output will be a set of alternatives for
Cloud provisioning by heterogeneous providers.
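A possible sketch of the statistical workload characterization used as simulation input, assuming exponentially distributed inter-arrival times and normally distributed instruction counts (both the distributions and the parameters are illustrative assumptions, not those of the actual simulation tool):

```python
# Generate a synthetic workload: job arrival times and instruction counts,
# drawn from assumed distributions (exponential inter-arrivals, Gaussian
# instruction counts). Parameters are illustrative.

import random

def generate_workload(n_jobs, mean_interarrival, mean_mi, sd_mi, seed=42):
    rng = random.Random(seed)
    t = 0.0
    jobs = []
    for _ in range(n_jobs):
        t += rng.expovariate(1.0 / mean_interarrival)   # arrival time (s)
        mi = max(1.0, rng.gauss(mean_mi, sd_mi))        # millions of instructions
        jobs.append({"arrival": t, "instructions": mi})
    return jobs

workload = generate_workload(n_jobs=100, mean_interarrival=5.0,
                             mean_mi=2000.0, sd_mi=400.0)
```

A workload generated this way can be fed to a Grid simulator to estimate job completion or queue times under a candidate resource configuration.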
We considered two different options:
• Brokering after Simulation. In this case the user executes many simulation
runs, changing the scheduling strategy and the number and kind of computing
resources, until the resulting performance satisfies the requirements. The best
configuration of the computing infrastructure is then used as input to the
brokering step. The simulation is used to refine the brokering constraints and
to reduce the complexity of brokering. This solution is preferred for Grid
users: their skills allow them to define the optimal computing infrastructure
in terms of Grid resources, and the brokering result consists of the best
virtualization of that Grid infrastructure using the available public Cloud
services.
• Simulation after Brokering. In this case the user refines the brokering
constraints and objectives until there are Cloud proposals that satisfy his
requirements. In the second step the user maps the brokered services to abstract
Grid resources in order to simulate the application workload over the available
proposals. The simulation result can be used to resolve the uncertainty about
the brokering results, which may all belong to the Pareto front of equivalent
optimal solutions. This alternative is specially conceived for Cloud users, who
know the Cloud market better and are more able to define the brokering
requirements than to configure a Grid infrastructure.
In both alternatives the first step is always the more critical one: its results
are affected by the user’s expertise, and its solution space is greater than that
of the second step.
7 Conclusion
Computational grids are the de facto computing paradigm for large-scale scientific
distributed computation. However, the availability of Cloud services delivered
through a pay-per-use business model provides the opportunity to replace physical
resources with virtual ones. Scientific workloads can be run according to a
Grid-on-Cloud approach that complements the Grid’s strengths with the elasticity
of the Cloud. We proposed a methodology that supports the user during the
configuration and provisioning of the computing infrastructure through the
integrated use of two different techniques and tools. A Grid simulation tool is
used to configure the number and kind of resources that optimize the execution of
the scientific workload. A brokering tool supports resource provisioning through
the selection of the providers that best satisfy the user’s requirements. Future
work will include experiments and simulations to validate the integrated use of
the two techniques and tools.
References
1. Amato, A., Di Martino, B., Venticinque, S.: Agents based multi-criteria decision aid.
J. Ambient Intell. Humaniz. Comput. 5(5), 747–758 (2014)
2. Amato, A., Liccardo, L., Rak, M., Venticinque, S.: SLA negotiation and brokering for sky
computing. In: CLOSER, pp. 611–620 (2012)
3. Amato, A., Venticinque, S.: Multi-objective decision support for brokering of cloud SLA. In:
The 27th IEEE International Conference on Advanced Information Networking and
Applications (AINA-2013). IEEE Computer Society, Barcelona, Spain, 25–28 Mar 2013
4. Amato, A., Venticinque, S.: Modeling, design and evaluation of multi-objective cloud brokering.
Int. J. Web Grid Serv. 11(1), 21–38 (2015). http://dx.doi.org/10.1504/IJWGS.2015.067163
5. Amato, A., Venticinque, S., Di Martino, B.: Evaluation and brokering of service level
agreements for negotiation of cloud infrastructures. In: ICITST, pp. 144–149 (2012)
6. Carrera, D., Steinder, M., Whalley, I., Torres, J., Ayguadè, E.: Enabling resource sharing
between transactional and batch workloads using dynamic application placement. In:
Abstract A huge amount of data is nowadays created in digital form. Due to the
frequent technological changes and developments taking place, organisations need
to constantly match market changes. They therefore need to develop dynamic
capabilities based on digital data in order to produce valuable outputs.
Specifically, this study examines whether the development of the Digital Data
Genesis dynamic capability in firms leads to valuable outputs: data quality and
data accessibility. We empirically test our model using a questionnaire-based
survey answered by 125 sales managers. The results suggest that firms able to
develop dynamic capabilities based on digital data obtain higher outputs in terms
of data quality and accessibility. Managerial implications of our results are
finally offered.
The authors acknowledge the support of the European Community through a Marie Curie
Intra-European Fellowship for providing funds to one author of the paper; the authors also
acknowledge the support of France’s Rhône Alpes region (http://www.rhonealpes.fr/).
1 Introduction
A huge amount of data is created in digital form every day [1]. By analyzing
digital data, managers have the opportunity to measure, and hence know, radically
more about their businesses and their customers’ tastes and needs. Explaining
whether and how leveraging the capability to exploit digital data can be a way for
firms to achieve success and higher outputs is becoming an evergreen issue in the
management and Information Systems (IS) fields.
Previous studies have conceptualized various types of capabilities, categorizing
them as generic, organizational, ordinary, dynamic, heterogeneous, and
homogeneous [2]. However, since market changes nowadays occur very quickly,
focusing on the development of dynamic capabilities at the firm level, based on
the exploitation of digital data, is becoming ever more important [3]. Therefore,
in this article we seek to contribute to the emerging literature on Information
Technology (IT) dynamic capabilities by investigating their linkage with possible
outputs, such as data quality [4] and data accessibility [5]. In so doing, we
innovate in the choice of the
dynamic capability object of our study: Digital Data Genesis (DDG). We define
DDG as the coming into being of digital data. Specifically, DDG represents the
naissance of digital data: it is a phenomenon (an observable fact or event) that
involves the direct generation of new data in digital form, and takes place when
information representative of a physical action, event or condition is created digi-
tally concurrently with the event taking place. DDG thus enables real time digital
representations of objects and events—so that these objects and events can exist as
symbolic representations that can interact and be manipulated in the information
space. For example, when a waiter takes an order using a palm device, an
informational representation of the customer’s wishes is created in real time in
digital form.
Thus, since dynamic capabilities allow organizations to reconfigure
organizational capabilities in response to changes in the business environment,
and since data is a precursor to many organizational processes, we decided to
study the DDG dynamic capability and its outputs at the firm level.
Now more than ever, organisations need to constantly match market changes by
developing dynamic capabilities, defined as “the firm’s processes that use
resources—specifically the processes to integrate, reconfigure, gain and release
resources—to match and even create market change” [6]. Thus, dynamic capabilities
have the potential to create, evolve and recombine existing internal resources to
allow the firm to adapt continuously to changes [7]. This adaptability has been
argued to offer improved customer value [8], and is especially required in
fast-paced technological environments [9].
Investigating the Impact of Digital Data … 253
Information quality is also important because, when sources are equally
accessible, individuals will consistently choose and use the sources that are
perceived to be of higher quality [12, 15]. Information accuracy, completeness and
currency are dimensions of the quality of the information retrieved from an
information system [4, 16]. Accuracy refers to the degree to which information is
correct, unambiguous, meaningful, believable, and consistent. Completeness is the
degree to which all possible states relevant to the user population are
represented in the stored information. Currency concerns the degree to which
information is up-to-date and precisely reflects the current state of the world
that it represents.
Harrah’s corporation appreciates the quality and accessibility of the data
collected on customers at its slot machines. For example, based on the
accessibility, accuracy, completeness and currency of the transactional data
accumulated from past guests, Harrah’s can estimate a customer’s future value
within minutes of the player joining the program. This enables the casino to
start treating the customer according to his or her future value, rather than
having to wait for observed play before starting to provide rewards [17].
Based on these considerations, the hypotheses we propose are listed below:
H1: The development of a high DDG dynamic capability will positively influence
data accessibility.
H2: The development of a high DDG dynamic capability will positively influence
data quality (Fig. 1).
Fig. 1 Research model: the DDG dynamic capability (composed of Choosing IT,
Integrating IT, Managing Digital Data, and Reconfiguring) drives the DDG outputs,
namely data accessibility (H1) and data quality (H2, comprising data accuracy,
data completeness and data currency). Control variables: firm size, firm age,
firm industry
3 Methodology
previously done so. We also said that we would provide the results of the study to
those who completed the questionnaire. 125 questionnaires from different
organisations (an overall response rate of 21 %) were analysed. Such a response
rate is a valuable result, since rates this high are uncommon in survey
research [20].
3.2 Measurement
All the research variables that constitute the DDG dynamic capability were
measured using multi-item Likert scales from 1 (not at all) to 7 (to a large
extent), based on prior empirical research (Table 1), with the exception of the
“Choosing IT” construct, which we empirically tested directly through our pilot
study.
Table 1 Survey items for testing the model

• Choosing IT (CIT), source [19]:
  CIT1 – Our sales personnel have effective methods for digital data generation choices
  CIT2 – Digital data generation choices make their case for our sales process
• Integrating IT (IIT), source [21]:
  IIT1 – The integration of digital data into the enterprise processes makes our sales personnel more effective
  IIT2 – Digital data generation is successfully integrated into our sales processes
• Managing digital data (MDD), source [22]:
  MDD1 – Our sales personnel effectively handle the digital data that they obtain
  MDD2 – Our sales personnel effectively process the data obtained in digital form
  MDD3 – Our sales personnel have effective methods for managing the digital data that they obtain
• Reconfiguring (REC), source [23]:
  REC1 – When our digital data generation must evolve, our sales personnel successfully steer its evolution
  REC2 – When our digital data generation must evolve, our sales personnel effectively lead its reorganisation
• Data accuracy (AC), source [4]:
  AC1 (reversed) – Our digital data are incorrect
  AC2 – Our digital data contain very few errors
  AC3 – Our digital data are accurate
• Data completeness (CO), source [4]:
  CO1 (reversed) – Our digital data are incomplete
  CO2 – Our digital data are comprehensive
  CO3 – Our digital data cover all our data needs
• Data currency (CU), source [4]:
  CU1 – Our digital data are recent
  CU2 – Our digital data are up-to-date
  CU3 (reversed) – Our digital data are obsolete
• Data accessibility (AE), source [5]:
  AE1 – Our digital data are rapidly available to our sales personnel
  AE2 – Our digital data are easily obtainable for our sales personnel

Items marked “reversed” were reversed while computing the final factor
“Integrating IT” adapts the ability to integrate IT solutions into business
processes [21]. “Managing digital data” adapts the information-management
dimension of the information capability measurement scale [22] to measure the
ability to manage digital data. “Reconfiguring” adapts the reconfigurability
measurement scale [23] to estimate the potential to reconfigure the DDG dynamic
capability. The final construct, DDG dynamic capability, was measured as a
second-order construct based on the four components of DDG dynamic capability,
each of which was compounded as the mean of the related items.
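The compounding of the second-order construct can be sketched as follows; the item scores below are illustrative Likert responses from a single hypothetical respondent, not the study's data:

```python
# Compound the second-order DDG construct: each first-order component is the
# mean of its items; the second-order score is the mean of the component means.
# Responses are illustrative 1-7 Likert scores, not the study's data.

def mean(xs):
    return sum(xs) / len(xs)

responses = {
    "CIT": [6, 5],        # Choosing IT items
    "IIT": [5, 6],        # Integrating IT items
    "MDD": [4, 5, 6],     # Managing digital data items
    "REC": [5, 5],        # Reconfiguring items
}
component_means = {k: mean(v) for k, v in responses.items()}
ddg_score = mean(list(component_means.values()))   # second-order construct
```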
Looking at the outputs, data quality was measured through three variables: data
accuracy, data completeness, and data currency [4]. Instead, data accessibility was
based on the measure proposed by Zimmer et al. [5].
We also introduced control variables into the models: firm size (number of
employees), firm age (number of years since the company was founded), and firm
industry (four dummies for the following four industries: traditional
manufacturing, high-tech manufacturing, material services, and information
services).
We employed SmartPLS and SPSS software for our data analysis. We chose PLS in
SmartPLS as “the most accepted variance-based structural equation modelling
technique because it can accommodate models that combine formative and
reflective constructs” [24, p. 1342]. The PLS path modelling technique with
reflective indicators in SmartPLS was used to assess the validity and reliability
of the data [25], complemented with SPSS calculations. This approach is better
equipped to handle formative measures [26, 27]. Modelling moderating
relationships in PLS requires adding moderating variables as direct relationships
to outcome variables and then calculating interaction variables based on the
predictor variables.
The sample of our study was balanced. Specifically, the companies surveyed cover
four industry groups [25] and were almost homogeneously distributed. The
majority of the surveyed companies are between 11 and 20 years of age, with the
oldest at 77 years old. Also in terms of the countries in which firms operate, the
sample is balanced. Finally, the sales-manager respondents are primarily
sales-department directors.
258 E. Raguseo et al.
levels for reliability (measured by composite reliability and Cronbach’s alpha) and
average variance extracted (AVE). Nunnally [28] suggests a value of 0.70 as a
benchmark for modest composite reliability. Churchill [29] suggests that a
Cronbach’s alpha value of 0.6 is acceptable. Bagozzi and Yi [30] suggest that AVE
must be higher than 0.50.
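These thresholds can be checked from standardized factor loadings using the standard formulas for composite reliability and AVE; the loadings below are illustrative, not the study's values:

```python
# Check reliability thresholds (CR >= 0.70, AVE >= 0.50) from standardized
# factor loadings. Loadings are illustrative, not the study's values.

def composite_reliability(loadings):
    s = sum(loadings)
    e = sum(1 - l ** 2 for l in loadings)   # indicator error variances
    return s ** 2 / (s ** 2 + e)

def ave(loadings):
    """Average variance extracted: mean of squared loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

loadings = [0.85, 0.80, 0.78]
cr = composite_reliability(loadings)
v = ave(loadings)
ok = cr >= 0.70 and v >= 0.50   # both thresholds satisfied here
```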
In this study, factor loadings, composite reliability and AVE values were
generated as part of the SmartPLS output. The Cronbach’s alpha scores were
computed using SPSS 18. The composite reliability (CR) of all constructs ranged
from 0.824 to 0.946, Cronbach’s alphas ranged from 0.604 to 0.909, and AVE ranged
from 0.615 to 0.898, all acceptable results because they exceed the accepted
thresholds. These results demonstrate convergent validity in the measurement
model.
The square root of the average variance extracted for each construct was compared
with the correlations between it and the other constructs [31]. Each construct
shared greater variance with its own measurement items than with constructs
having different measurement items. Therefore, discriminant validity was also
supported.
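This comparison, often called the Fornell-Larcker criterion [31], can be sketched as follows; the AVEs and inter-construct correlations are illustrative values, not the study's:

```python
# Fornell-Larcker discriminant-validity check: the square root of each
# construct's AVE must exceed its correlations with all other constructs.
# AVEs and correlations are illustrative.

import math

def fornell_larcker(aves, correlations):
    """aves: {construct: AVE}; correlations: {(c1, c2): r}."""
    for (c1, c2), r in correlations.items():
        if abs(r) >= min(math.sqrt(aves[c1]), math.sqrt(aves[c2])):
            return False
    return True

aves = {"DDG": 0.62, "AC": 0.70, "AE": 0.75}
correlations = {("DDG", "AC"): 0.45, ("DDG", "AE"): 0.51, ("AC", "AE"): 0.40}
supported = fornell_larcker(aves, correlations)
```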
5 Discussions
The study results highlight that the theorization of the DDG dynamic capability as
a fourfold organizational process is supported by the empirical data analysis.
More importantly, the DDG dynamic capability aims at outputting accessible,
accurate, complete and current digital data. The data analysis confirms that the
DDG dynamic capability releases information resources of higher quality and of
higher accessibility. Hence, this better output could be leveraged to match or
create market changes, as expected of dynamic capabilities [6]. Thus, the DDG
dynamic capability could potentially create, as a follow-up, significant data
output.
The DDG dynamic capability makes information more accessible, hence easily
available for use. Since information accessibility is the most important driver of
information-source selection, the quality of the digital data coming out of the
DDG is at stake: low-quality but easily accessible digital data would make the
worst combination [5, 10, 12]. Notwithstanding this, the DDG dynamic capability
also increases the accuracy, completeness and currency of the digital data.
Hence, in synthesis, the DDG dynamic capability delivers higher-quality data.
6 Conclusions
The IS literature provides scant empirical studies investigating the relationship
between the development of dynamic capabilities based on digital data and their
outputs. By analysing a sample of 125 companies, our findings add empirical
evidence to the claim that the DDG capability is associated with several outputs,
such as data quality and data accessibility. DDG capabilities make digital data
more accessible and of higher quality for the organisation’s personnel.
Specifically, the development of a DDG dynamic capability enables companies to
dispose of more accurate data (with fewer errors), more complete data (more
comprehensive and consistent), and more current data: thanks to the continuous
ability to generate digital data, the data are recent, up-to-date and not
obsolete. The data are also more accessible, since digital data are promptly and
easily available to sales personnel. In this way, companies dispose of updated
data about their customers and can take advantage of timely and qualified
information about them.
Understanding the effect of DDG dynamic capabilities on data quality and data
accessibility has important managerial implications. First, managers could
successfully exploit their DDG dynamic capability to develop data-based strategic
initiatives, reassured by their high data quality and data accessibility. Second,
managers should become more aware of the potential that the use of digital data
holds for their business activities, and should invest more in the capability to
use digital data.
Our study also has some limitations that will be considered in future studies.
First, the effects investigated may differ between information-intensive and
non-information-intensive industries because of differences in the importance of
information to their business. Second, we could not consider the longitudinal
aspect of the development of the DDG dynamic capability on data quality and data
accessibility due to insufficient data; the lagged effect could be larger than
the immediate one.
Future studies are needed to deepen our understanding of how the DDG dynamic
capability may have different impacts depending on the industry considered.
Furthermore, longitudinal studies should be conducted to understand how the time
span influences the causal relationships investigated.
References
1. McAfee, A., Brynjolfsson, E., Davenport, T.H., Patil, D.J., Barton, D.: Big data. The
management revolution. Harv. Bus. Rev. 90(10), 61–67 (2012)
2. Drnevich, P.L., Kriauciunas, A.P.: Clarifying the conditions and limits of the contributions of
ordinary and dynamic capabilities to relative firm performance. Strateg. Manag. J. 32(3), 254–
279 (2011)
3. Raguseo, E., Vitari, C.: The development of the DDG-capability: an evaluation of its impact
on firm financial performance. Smart organizations need smart artifacts: fostering interactions
between people, technologies, and processes. Springer series, Lecture Notes in Information
Systems and Organisation (LNISO), vol. 7, pp. 97–104 (2014)
4. Nelson, R.R., Todd, P.A., Wixom, B.H.: Antecedents of information and system quality: an
empirical examination within the context of data warehousing. J. Manag. Inf. Syst. 21(4), 199–
235 (2005)
5. Zimmer, J.C., Henry, R.M., Butler, B.S.: Determinants of the use of relational and
non-relational information sources. J. Manag. Inf. Syst. 24(3), 297–331 (2007)
6. Dale Stoel, M., Muhanna, W.A.: IT capabilities and firm performance: a contingency analysis
of the role of industry and IT capability type. Inf. Manag. 46(3), 181–189 (2009)
7. Li, M., Ye, L.R.: Information technology and firm performance: linking with environmental,
strategic and managerial contexts. Inf. Manag. 35(1), 43–51 (1999)
8. Zhou, K.Z., Wu, F.: Technological capability, strategic flexibility, and product innovation.
Strateg. Manag. J. 31(5), 547–561 (2010)
9. Wang, E.T., Hu, H.F., Hu, P.J.H.: Examining the role of information technology in cultivating
firms’ dynamic marketing capabilities. Inf. Manag. 50(6), 336–343 (2013)
10. Galy, E., Sauceda, M.J.: Post-implementation practices of ERP systems and their relationship
to financial performance. Inf. Manag. 51(3), 310–319 (2014)
11. Culnan, M.J.: Environmental scanning: the effects of task complexity and source accessibility
on information gathering behavior. Decis. Sci. 14(2), 194–206 (1983)
12. O’Reilly, C.A.: Variations in decision makers’ use of information sources: the impact of
quality and accessibility of information. Acad. Manag. J. 25(4), 756–771 (1982)
13. Hirsh, S., Dinkelacker, J.: Seeking information in order to produce information: an empirical
study at Hewlett Packard Labs. J. Am. Soc. Inform. Sci. Technol. 55(9), 807–817 (2004)
14. Piccinini, G., Scarantino, A.: Computation vs. information processing: why their difference
matters to cognitive science. Stud. Hist. Philos. Sci. Part A 41(3), 237–246 (2010)
15. Davenport, T.H., Harris, J.G.: Competing on Analytics: The New Science of Winning.
Harvard Business Press, Boston (2007)
16. DeLone, W.H., McLean, E.R.: Information systems success: the quest for the dependent
variable. Inf. Syst. Res. 3(1), 60–95 (1992)
17. Piccoli, G., Watson, R.T.: Profit from customer data by identifying strategic opportunities and
adopting the ‘Born digital’ approach. MIS Q. Executive 7, 113–122 (2008)
18. Li, T., van Heck, E., Vervest, P.: Information capability and value creation strategy: advancing
revenue management through mobile ticketing technologies. Eur. J. Inf. Syst. 18, 38–51
(2009)
19. Williams, M.L.: Identifying the Organizational Routines in NEBIC Theory’s Choosing
Capability. HICSS, Hawaii (2003)
20. Cycyota, C.S., Harrison, D.A.: What (not) to expect when surveying executives a
meta-analysis of top manager response rates and techniques over time. Organ. Res.
Methods 9(2), 133–160 (2006)
21. Bharadwaj, A., Sambamurthy, V., Zmud, R.: IT Capabilities: Theoretical Perspectives and
Empirical Operationalization. ICIS (1999)
22. Marchand, D.A., Kettinger, W.J., Rollins, J.D.: Information Orientation: The Link to Business
Performance. Oxford University Press, New York (2002)
23. Pavlou, P.A., El Sawy, O.A.: From IT leveraging competence to competitive advantage in
turbulent environments: the case of new product development. Inf. Syst. Res. 17(3), 198–227
(2006)
24. Gruber, M., Heinemann, G., Brettel, M., Hungeling, S.: Configurations of resources and
capabilities and their performance implications: an exploratory study on technology ventures.
Strateg. Manag. J. 31(12), 1337–1356 (2010)
25. Ringle, C.M., Wende, S., Will, A.: SmartPLS release: 2.0 (beta). SmartPLS, Hamburg,
Germany (2005)
26. Chin, W.W., Marcolin, B.L., Newsted, P.R.: A partial least squares latent variable modeling
approach for measuring interaction effects: results from a Monte Carlo simulation study and
electronic-mail emotion/adoption study. Inf. Syst. Res. 14(2), 189–217 (2003)
27. Diamantopoulos, A., Riefler, P., Roth, K.P.: Advancing formative measurement models.
J. Bus. Res. 61(12), 1203–1218 (2008)
28. Nunnally, J.C.: Psychometric Theory, 2nd edn. McGraw-Hill, New York (1978)
29. Churchill, G.A.: A paradigm for developing better measures of marketing constructs. J. Mark.
Res. 16(February), 64–73 (1979)
30. Bagozzi, R.P., Yi, Y.: On the use of structural equation models in experimental designs.
J. Mark. Res. 271–284 (1988)
31. Fornell, C., Larcker, D.F.: Structural equation models with unobservable variables and
measurement error: algebra and statistics. J. Mark. Res. 382–388 (1981)
An Ecological Model for Digital Platforms
Maintenance and Evolution
Keywords Software maintenance · Wakeby · Digital platform · Complex systems
This paper has been awarded the “Special Award Sandro D’Atri” at the XI Conference of the
Italian Chapter of AIS held in Genova (IT) on November 21st–22nd 2014.
1 Introduction
2 Related Works
3 Research Strategy
Empirical data were collected through direct contact with the head of the
maintenance team, who kindly provided us with archival data on software bugs and
fixes, information on the maintenance process, technical documentation, and
commercial information. A dataset with more than 2,200 defect reports over a
four-year period is the main source of data on which the following analysis is
based.
Our study investigates changes to the four releases (or versions) of XYZ: B.1,
B.2, B.2.1 and B.2.2. Release B.1 derives from a product developed by a company
acquired by our focal software vendor, and was delivered without further changes.
Later the focal software vendor made significant economic investments: release
B.2 derives from an effort to optimize and improve XYZ, and B.2.2 represents a
second major enhancement, based partially on customer feedback.
Users of XYZ who find unexpected behavior, such as adverse incidents and bugs,
write requests for change (RFC); the acronym RFC will therefore be used
interchangeably with “defect” or “error” in this paper. An RFC does not demand
functional changes; to vary or add a function, users raise another type of
request, which we shall call a “suggestion” (SUG). A single SUG proposes new or
modified functionalities.
Defects and suggestions are recorded in a special database. The data is captured
and grouped according to releases. Each release is maintained as an independent
entity; thus some failures recur across releases while others are unique to a
release.
Age and severity are the attributes of RFCs adopted in our statistical analysis.
The severity of an RFC denotes the impact of the corresponding error, which falls
into one of the following categories:
• Severity 1: Critical Impact—A software component which is critical for busi-
ness does not operate; or an absolutely necessary interface has failed; or an
operator is unable to use XYZ resulting in a critical impact on operations. This
condition requires an immediate solution.
• Severity 2: Significant Impact—A software component is severely restricted in
its use, causing significant business impact. This indicates that XYZ is usable
but is strongly limited.
• Severity 3: Moderate Impact—A non-critical software component is malfunc-
tioning, causing moderate business impact. This indicates the program is usable
with less significant features.
• Severity 4: Minimal Impact—A non-critical software component is malfunc-
tioning, causing minimal impact, or a non-technical request is made.
Age provides a concise and precise account of the effort expended to implement a
change. ‘Age’ is usually called ‘time to repair’ (TTR) in the current literature
and is surveyed in a variety of technical fields.
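The computation of age (TTR) from archival defect records, grouped by severity, can be sketched as follows; the records and dates below are illustrative, not the actual dataset:

```python
# Compute the 'age' (time to repair, TTR) of RFCs from open/close dates and
# average it by severity. Records are illustrative, not the study's data.

from datetime import date

rfcs = [
    {"severity": 1, "opened": date(2009, 3, 1),  "closed": date(2009, 3, 1)},  # zero age
    {"severity": 1, "opened": date(2009, 5, 10), "closed": date(2009, 6, 9)},
    {"severity": 3, "opened": date(2010, 1, 1),  "closed": date(2010, 3, 2)},
]

def age_days(rfc):
    """Number of days between opening and closing the RFC."""
    return (rfc["closed"] - rfc["opened"]).days

def mean_age_by_severity(rfcs):
    grouped = {}
    for r in rfcs:
        grouped.setdefault(r["severity"], []).append(age_days(r))
    return {sev: sum(ages) / len(ages) for sev, ages in grouped.items()}

ages = mean_age_by_severity(rfcs)
```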
The historical data from four different releases of XYZ are used to illustrate
how each release has evolved over time. Furthermore, we use time-series analysis
techniques to identify patterns in these data. Time-series models assume that
events are correlated over time and that the impact of other factors is
progressively captured in the historical archives.
2008 and contributed to the enhanced version B.2.2 as noted earlier. Most of the
SUGs (127) were closed during the time window for submitting recommendations; in
this way the suggestions contributed to improving and adding new functions to
releases B.2.1 and B.2.2.
Table 1 presents the start date of the maintenance process for each release,
taken to be the date on which the first defect was raised. The final date is taken
to be 30th September 2011, when data collection for this paper was closed. The
parameter A in Table 1 indicates the temporal range between the first opened
RFC and 30th September; the parameter B is the distance between the first
and the last opened RFC. Thus our study of the various releases covers different
periods of time: the examination of release B.1 exceeds 4 years, while the study of
B.2.2 covers about two and a half years. We decided to close our survey on 30th
September, and by coincidence the last RFC of B.2.2 was opened on that very day,
which is why A and B coincide for this release. For consistency, we considered the
number of RFCs submitted over the first 730 days (2 years) after each version was
released (Table 2); moreover, we report the number of defects that required more
than 1 year for resolution, the number of severity-1 defects, and the percentage of
these defects that were closed after 30 days. All the releases have some RFCs with
zero age (age being the number of days spent fixing an RFC). This may indicate
one of the following situations:
• A false problem was reported;
• The problem was trivial and immediately closed;
• The problem had already been addressed at the time the RFC was raised.
We notice that release B.1 has the highest number of submitted defects in the
first couple of years, the highest number of defects with age exceeding 1 year and
the highest number of severity-1 defects (Table 2).
Table 3 illustrates the increase in the size of XYZ (executable version) after
each upgrade. Generally speaking, the size of a release in megabytes can be taken to
mirror the complexity of the release’s functionality. Releases B.2 and B.2.2 are
much larger than their predecessors.
We note that B.2, B.2.1 and B.2.2 have the lowest numbers of defects (Table 2).
These measures indicate the higher quality of the later releases with respect to
B.1 and match the brief history of XYZ outlined in Sect. 2. Indeed, B.2, B.2.1 and
B.2.2 were driven by more organized and focused development efforts, whereas B.1
was developed in a cursory manner. In the present context, one can reasonably
conclude that the defect profile of B.1 reflects unsatisfactory development, which
resulted in increased maintenance effort.
An Ecological Model for Digital Platforms … 269
Defects are managed by a complex structure that basically includes four teams, as
follows:
• First Level Team—This group analyzes the issues and, when possible, addresses
problems related to user errors or basic configurations; otherwise it involves
the Second Level Team. The responses of this level are fast but do not go deep
into the problems.
• Second Level Team—This team works face to face with customers to resolve
RFCs. Level 2 provides the customer with a solution; if, and only if, the
customer is satisfied with the solution can Level 2 close the fixing procedure.
• Third Level Team—This level is responsible for resolving severe errors, cre-
ating fixes and making them available to users. The team assists customers in
diagnosing reported problems that may be product defects and makes changes
to released products in response to an RFC. This process governs support of
the product releases starting from an assistance request by Level 2; it resolves
a valid problem through code changes and testing, and then delivers a fix for
the detected error.
• Development Team—This level is responsible for new features to be included
in the next product releases. In some cases it helps Level 3 with hot customer
issues or evaluates possible enhancement requests.
The overall organization of the teams is summarized in Fig. 1. Two areas can be
identified in the chain of operations: the ‘front-end’ comprises the support
teams (Level 1 and Level 2), which have direct contact with the client; the
‘back-end’ comprises the teams working on problem resolution, which do not
interact directly with users.
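The front-end/back-end split can be sketched as a simple escalation path; this is our own illustrative model of the flow described above, not code used by the organization:

```python
def route_rfc(user_error: bool, severe: bool) -> list:
    """Hypothetical escalation path of an RFC through the support chain."""
    path = ["Level 1"]            # front-end: fast first analysis
    if user_error:
        return path               # user error or basic configuration: closed here
    path.append("Level 2")        # front-end: works face to face with the customer
    if severe:
        path.append("Level 3")    # back-end: code changes, testing, fix delivery
    return path
```

For example, a severe product defect traverses all three support levels, while a basic configuration mistake stops at Level 1.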
The author of an RFC is required to describe the malfunction he or she experienced
and to summarize the symptoms according to the list in Table 4. However,
over 80 % of records mention the generic symptom ‘program defect’. Users appear
to provide the most generic description of the problem. The frequency of symptom
#19 decreases when defects are serious: users make a certain effort to scrutinize
severity-1 failures. However, the non-trivial share of symptom #19 among
severity-1 records (77.3 %) indicates that this effort is limited. The lack of
precision in describing a problem thus has less to do with the effort required,
and more to do with users’ attitude towards reporting less-than-critical errors.
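The share of the generic symptom can be derived directly from the records; the sketch below uses invented (severity, symptom) pairs rather than the real database schema:

```python
from collections import Counter

def symptom_share(records, symptom_id):
    """Fraction of RFC records reporting a given symptom code."""
    counts = Counter(symptom for _severity, symptom in records)
    return counts[symptom_id] / len(records)

# Toy records as (severity, symptom) pairs; 19 is the generic 'program defect'.
records = [(1, 19), (2, 19), (3, 19), (2, 16), (4, 19)]
share = symptom_share(records, 19)  # 4 of 5 records report symptom #19
```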
We have selected four distributions of age by severity and calculated nine
statistical parameters for each distribution (Table 5). The kurtosis indicates
that the severity-2 and severity-4 ages have distributions with a lower, wider
peak around the mean; on the other hand, the severity-1 and severity-3 ages show
rather leptokurtic shapes. The age mean, called mean time to repair (MTTR) in the
current literature, diminishes through groups 2, 3 and 4. The 50th percentile,
i.e. the median, also decreases from severity 2 to 3. Note that the mean and the
median have been computed over the entire populations, not over samples; their
trends indicate that the effort to handle a change decreases as the severity of
the defect lessens. The age mirrors the progressively reduced complexity of
defects from severity 2 to 4, but the severity-1 problems have the lowest age
mean, the lowest median and even the lowest standard deviation. This surprising
result can be explained as follows.
An expert usually handles an RFC of severity 2, 3 or 4, but service level agree-
ments warrant that a severity-1 problem must be resolved within 1 month (30 days).
Thus management needs to allocate more skilled personnel to close the most
severe errors within this deadline. As we learnt, two, three or more experts work
on errors of this kind, and the age mean is therefore the lowest in the leftmost
column of Table 5. However, 80 % of the ages in group 1 largely exceed 30 days
(Table 2), which means that the teams handling severity-1 problems usually miss
their deadlines.
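The per-severity parameters (mean, median, standard deviation, excess kurtosis) and the share of severity-1 RFCs missing the 30-day deadline can be computed as below; all ages are invented for illustration and do not reproduce the values of Table 5:

```python
from statistics import mean, median, pstdev

def age_stats(ages):
    """Population mean, median, std and excess kurtosis of an age distribution."""
    m, s = mean(ages), pstdev(ages)
    kurt = sum(((a - m) / s) ** 4 for a in ages) / len(ages) - 3
    return {"mean": m, "median": median(ages), "std": s, "kurtosis": kurt}

def sla_miss_rate(ages, deadline_days=30):
    """Share of RFCs closed after the service-level deadline."""
    return sum(1 for a in ages if a > deadline_days) / len(ages)

# Hypothetical ages (days) by severity; illustrative values only.
by_severity = {
    1: [12, 45, 60, 90, 120],   # severity-1: handled by several experts
    2: [3, 30, 90, 200, 400],   # severity-2: long right tail
}
stats_by_sev = {sev: age_stats(a) for sev, a in by_severity.items()}
sev1_miss = sla_miss_rate(by_severity[1])  # 4 of 5 exceed the 30-day deadline
```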
It is generally observed that users detect several defects soon after a product
is released, and that over time the number of opened RFCs declines. We posited
that studying the distribution of defects over time could reveal patterns and
regularities. The discussion in this section outlines our quest for a statistical
law of defect emergence.
We examined the temporal series of defects discovered for the B.1, B.2, B.2.1 and
B.2.2 releases to find the best description of these series. We performed the
Kolmogorov–Smirnov test and observed that all four series of data fit the
Wakeby (WAK) distribution, with the Kolmogorov–Smirnov test accepted at the
99 % significance level. Table 7 shows the fitness parameters of the tests: D
(statistic), P (probability value) and R (rank). On its right side, Table 7
exhibits the most suitable parameter values of the Wakeby distributions. Since R
equals 1, the Wakeby model represents the best fit with respect to the other 39
candidate distributions, although the temporal series exhibit very different
profiles. Figures 2, 3, 4 and 5 plot the probability density functions for
releases B.1 to B.2.2. Each PDF covers the dates on which the defects occurred
during the range B (see Table 1). The dates were grouped in order to execute the
Kolmogorov–Smirnov test; at the far right of Table 7 one can find the size,
expressed in days, of the bars plotted in Figs. 2, 3, 4 and 5.
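Since the Wakeby distribution is not available in common Python statistics libraries, a sketch must implement its quantile function directly; the parameter values below are invented, not those of Table 7. A two-sample Kolmogorov–Smirnov statistic then measures the maximum gap between two samples' empirical CDFs:

```python
import random

def wakeby_quantile(F, xi, alpha, beta, gamma, delta):
    """Wakeby inverse CDF:
    x(F) = xi + (alpha/beta)*(1-(1-F)**beta) - (gamma/delta)*(1-(1-F)**(-delta))."""
    return (xi + (alpha / beta) * (1 - (1 - F) ** beta)
               - (gamma / delta) * (1 - (1 - F) ** (-delta)))

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov D: maximum gap between empirical CDFs."""
    def ecdf(sorted_s, x):
        return sum(1 for v in sorted_s if v <= x) / len(sorted_s)
    a, b = sorted(sample_a), sorted(sample_b)
    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in a + b)

random.seed(1)  # deterministic toy run
params = (0.0, 5.0, 0.6, 1.0, 0.2)  # hypothetical (xi, alpha, beta, gamma, delta)
sample1 = [wakeby_quantile(random.random(), *params) for _ in range(300)]
sample2 = [wakeby_quantile(random.random(), *params) for _ in range(300)]
d = ks_statistic(sample1, sample2)  # small D, as both come from the same Wakeby
```

Because both samples are drawn from the same Wakeby model via inverse-transform sampling, D stays well below the critical value, mirroring the acceptance of the fit reported above.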
5 Discussion
The analysis of the defects’ time series conducted on the four versions of the
middleware product offers insights that have implications for both research and
practice. The first result is a confirmation of the contingent relationship between
272 P. Rocchi et al.
Fig. 2 PDF of 838 RFCs opened over a span of 1471 days (Release B.1)
Fig. 3 PDF of 322 RFCs opened over a span of 1074 days (Release B.2)
Fig. 4 PDF of 495 RFCs opened over a span of 975 days (Release B.2.1)
Fig. 5 PDF of 593 RFCs opened over a span of 939 days (Release B.2.2)
6 Conclusion
This research contributes to the design of new managerial practices for coping with
the evolution of digital platforms. These practices, grounded in the continuous
maintenance paradigm, can be informed by new explanatory and predictive theories
derived from the analysis of empirical data.
Further empirical studies along these lines are necessary to strengthen the
external validity of our results. For instance, the same statistical analysis
could be repeated on defect data taken from public sources (e.g. open-source
projects) or from other proprietary software packages.
Appendices

Table 4 (continued)

#  | Symptom         | Description                                                           | Sev. 1 | Sev. 2 | Sev. 3 | Sev. 4 | Total
16 | Function needed | It seems necessary to perfect XYZ or to add a new operation to it     | 5      | 36     | 17     | 1      | 59 (2 %)
17 | Build failed    | An error occurs during the compilation of XYZ and/or the linker phase | –      | 49     | 14     | 2      | 65 (3 %)
18 | Incorrect I/O   | Malfunctions occur during an I/O operation, e.g. XYZ displays a panel | 9      | 78     | 46     | 1      | 134 (6 %)
19 | Program defect  | A program error occurs                                                | 89     | 912    | 762    | 78     | 1841 (81 %)