Dependability and Complexity: Exploring Ideas For Studying Open Systems
Date: 15th December 2000
Distribution: Unlimited
The role of the Joint Research Centre (JRC) of the EC is to provide scientific
support to the EU policy-making process by acting as a reference centre of
science and technology for the EU. This report has been prepared by the Joint
Research Centre in the frame of its institutional support programme to the EC
DG Information Society in the area of Dependability of IT systems. The
opinions and views expressed in this report do not represent the official
opinions and policies of the European Commission.
1 Department of Electrical and Computer Engineering, The George Washington University, Washington, DC – USA (Kyriak@seas.gwu.edu). The work was performed while the author was a Visiting Scientist at the Institute for Systems, Informatics and Safety of the JRC.
TABLE OF CONTENTS
1 INTRODUCTION
1.1 Study objectives
1.2 Study approach
2 NATURE OF THE PROBLEM
2.1 Scenarios
2.2 Emerging issues
3 THE CHALLENGE
3.1 Definition of concepts
3.2 The challenge addressed in this study
4 DEPENDABILITY CHARACTERISATION
4.1 Infrastructure model
4.2 Data Transport Model
4.3 Dependability and Quality of Service
4.4 The “cloud” is not a cloud
4.5 Issues with respect to the dependability of applications
4.6 Issues with respect to the dependability of the communications infrastructure
4.7 Observations regarding dependability characterisation
5 CHARACTERISATION OF THREATS AND VULNERABILITIES
5.1 Threats, vulnerabilities and dependability
5.2 Security and Complex Open Systems
5.3 Threats to the application
5.4 Threats to the communications infrastructure
5.5 Observations related to threats and vulnerabilities
6 CONCLUSIONS
6.1 Findings on method
6.2 Open issues
6.3 Follow on
7 ACKNOWLEDGMENTS
8 REFERENCES
1 Introduction
1.1 Study objectives
The Internet has created a new environment for conducting human activities and
has given rise to the term Information Society. In this new environment, existing
economic and social activities are re-defined and new ones are born. At the same
time, concerns about the impact of the Internet on these activities cover a wide
spectrum of topics ranging from the ethical to the technical and scientific. Because activities such as commerce, finance, energy, health and personal information exchange all increasingly depend on the Internet, one area of interest is the ability of this new medium to deliver services in a manner that inspires confidence in the users of those services. The Internet is now a key component of a global information
infrastructure that consists of interconnected communications networks and the data
services provided by them. The global span of this infrastructure and the activities
which use or depend on it form a complex system that needs to be better
understood for confidence to be established. The need for confidence is pervasive
not only in a business context but also in all social interactions.
The Joint Research Centre (JRC) has undertaken this exploratory study upon initial
request of the EC DG Infso (Information Society). It aimed at fostering a better
understanding of the issues related to dependability and vulnerabilities of
information infrastructures, in particular complexity issues emanating from
interdependencies between infrastructures. The aims of the study include: a) to
establish a methodology for addressing issues arising from the complexity of the
problem, b) to identify technical issues, or problems that affect the establishment of
confidence and c) to provide a basis for new ideas for studying those systems and
evaluating their performance.
This study addresses some important dependability challenges with high societal impact within the realm of European Commission policy-making². It is anticipated that a better characterisation of the challenges will stimulate and shape apposite follow-up activities within appropriate fora. These activities may take the form of knowledge exchange on international initiatives, risks and concepts, or of R&D projects that could be supported in the frame of EU research programmes.
1.2 Study approach
According to the initial plan [2], the study would be performed by the JRC in
collaboration with a number of additional organisations which would provide case
studies. Participating organisations would contribute to the study by providing sector
case studies covering innovative architectures combined with real or prospective
scenarios of large-scale unbounded systems. On the basis of preliminary contacts,
the following case studies were identified:
• Investigation of vulnerabilities in the electric power system
• The provision of public IP services
• A generic global monitoring system (e.g. early warning for emergency
management, radiation monitoring, non-proliferation of nuclear material) based
on a combination of private channels and open communications infrastructures
² See also the e-Europe initiative and action plan 2002, approved in June 2000, with a specific action on dependability of information infrastructures.
instance Internet banking: The strategies of most large banks foresee Internet and
Web technologies as key to future retail service delivery in order to provide global
access to online banking services. This service will critically depend on the
availability and security of global IP based networks. The global public Internet
Protocol (IP) network or Internet is composed of a large number of interconnected
public IP networks. In order to support seamless end-to-end connectivity across the Internet, each individual IP network is a complex system which contains several tens or hundreds of routers, serving many millions of end-users through hundreds of thousands of kilometres of optical fibre, multiplexed circuits and copper pairs. IP network operation is
controlled by a complex, distributed naming and routing system, usually provided by
large telecommunications companies and Internet Service Providers (ISP).
Management of Internet traffic is usually co-ordinated between the major ISPs and
telecommunications operators so as to balance traffic routing in order to match
available network capacity.
Large scale IP networks are vulnerable to a wide range of physical threats to the
transmission and routing media which include power failures, accidental damage,
sabotage and other forms of intentional disruption. In addition to threats on the
physical network infrastructure, there are threats concerning the network routing
system itself, including software bugs, hardware malfunctions, deliberate hacking,
etc. A local routing malfunction or deliberate attack (e.g. the denial-of-service attacks on ISPs experienced during 2000) can often rapidly impact the whole
network, due to the high level of interconnection. Through cascading effects, such a malfunction may trigger a sequence of events leading to failures in other systems and to the isolation of parts of the network through the interaction of routing systems; this could even result in a brown-out of large areas of the network and the consequent unavailability of the online banking services.
2.2 Emerging issues
What kind of findings can be drawn from the above scenarios? A number of IT
applications have assumed dominant positions and have become essential for the
conduct of human affairs. Also, as the example of the electric power grid illustrates,
some systems have become so large that they have a significant impact on large
segments of society. They have been characterised as “critical infrastructures”,
because they have been deemed indispensable for human welfare. The term
was first introduced in the United States of America to identify specific national
infrastructures that are deemed critical for the national wellbeing [5]. These are
banking and finance, energy, information and communications, transportation, and
vital human services. In view of the preceding discussion, for most of these
infrastructures the term global rather than national would be more appropriate. The
term “critical” is assigned on the basis of a value judgement and has no direct
impact on the problem analysed in this report. The public will accept and use the new applications only if it becomes convinced that it can rely upon them.
Similarly, the political authorities will be able to assure the public that they are
protecting the public interest only by ensuring the soundness of the critical
infrastructures. In both cases, the objective is to ensure the integrity of the
infrastructures, the applications or services in order to maintain confidence in these
systems.
For critical application domains such as the ones illustrated before, confidence may
be eroded either by the faulty delivery of desired and promised services, or by the
manifestation of unexpected and/or undesirable side effects. Applications relying on
the communications infrastructure for the transport of data may deliver faulty
services either due to faults within the application or due to faults in the data transport service.
3 The challenge
3.1 Definition of concepts
In discussions involving complex issues or qualitative characteristics, one of the
problems that need to be addressed is that of understanding concepts. In the effort
to develop a methodology for addressing issues ranging from performance
specifications for physical communications networks to managing applications
based on the Internet, a consistent terminology becomes essential. For the
purposes of this report, the following concepts and terminology will be used.
User or customer: An entity, either a system or one or more human beings acting
as individuals or in some formal capacity, that requires and receives specified
services.
Voice communications and electronic mail are two examples of services received by
or provided to individuals. Voice communications, traditionally offered through the
Public Switched Telephone Network (PSTN), have now begun to be offered also
through the Internet.
Layered service model: Driven by open standards and deregulation,
communications services are increasingly moving towards a layered model, Figure
1, consisting of interconnected communications networks with data services and
applications operating on top. New entrants and established industries compete for
the provision of data services and added value services (e.g. e-payments). The
layering of services implies that lower-layer details are hidden from higher layers and that the dependability (availability, security) of the upper-layer applications relies in large part on the performance of lower-layer services and networks [1].
Figure 1: Layered service model (communications networks at the base, data services above them, applications on top).
qualitative. Conversely, the micro perspective views a complex system as the result
of synthesis based on interconnections of elementary components. The synthesis
approach imposes constraints on the interfaces and interactions among the
components. The term quality of service specifies quantitative performance
requirements. In this report, the terminology will be a combination of dependability
and quality of service.
Closed system: A system in which the number of components or nodes, their characteristics (both physical and as data sources or sinks), their locations and their interconnections are known.
Open system: A system in which the number of nodes or their characteristics, both physical and as data sources or sinks, are unknown or only partially known. Connectivity is generally unknown or partially known.
In the discipline of computer communications, the term open system currently refers
to a set of interconnected computers from different vendors that communicate
among themselves using a standard communications protocol stack based on the
TCP/IP protocols. In this paper, the broader meaning of the term is used, unless
there is a need to refer specifically to the Open Systems Interconnections (OSI)
standard.
Dependability: Property of a system that indicates its ability to deliver specified
services to the user [7].
Dependability can be specified in terms of attributes which can be measurable or
qualitative. Attributes may be generic as well as application-specific. Traditionally,
the set of dependability attributes includes reliability, availability, safety and security.
Security, in turn, is usually subdivided into confidentiality, integrity, authentication,
non-repudiation. The set could also be amended with attributes that better
characterise performance requirements such as timeliness and quantity of data
transported per time unit.
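As an illustration only, the attribute set described above can be sketched as a simple data structure; the attribute names, types and default values below are our own assumptions, not drawn from [7] or any standard:

```python
from dataclasses import dataclass, field

@dataclass
class SecurityAttributes:
    # Traditional subdivision of security (section 3.1).
    confidentiality: bool = False
    integrity: bool = False
    authentication: bool = False
    non_repudiation: bool = False

@dataclass
class DependabilityRequirements:
    # Quantifiable attributes: availability as a fraction of time,
    # timeliness as a latency bound, and quantity of data per time unit.
    availability: float = 0.999
    reliability_mtbf_hours: float = 1000.0
    max_latency_ms: float = 500.0       # hypothetical timeliness bound
    min_throughput_kbps: float = 64.0   # hypothetical quantity bound
    safety_critical: bool = False
    security: SecurityAttributes = field(default_factory=SecurityAttributes)

# Example: a hypothetical Internet-banking application profile.
banking = DependabilityRequirements(
    availability=0.9999,
    max_latency_ms=2000.0,
    security=SecurityAttributes(True, True, True, True),
)
```

Such a structure makes explicit which attributes are measurable and which remain qualitative, which is the distinction the text draws.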
Vulnerability: For the purposes of this paper, vulnerability of a system to a threat
can be understood as a weakness or flaw in the system that eliminates or reduces
its ability to deliver the specified services. A new type of vulnerability to be studied in
the context of critical infrastructures is related to interdependencies between
systems due to massive interconnections in systems-of-systems.
3.2 The challenge addressed in this study
From the perspective of the user, the desirable outcome is to receive services that
meet a set of expected performance requirements. From the perspective of a
service provider, or, equivalently, the designer of an application, the desirable
outcome is to offer services that meet specified performance requirements. From
either perspective, the application should be capable of offering a service in
accordance with desirable, expected, perceived, or specified performance
requirements. These services are to be delivered in a dependable manner by a
complex system that is beyond the capabilities of the user to comprehend and to
control. By formulating the problem in such general terms, one can ask the following
questions:
1. Are the expected services within the capabilities of an existing system?
2. If an existing system cannot deliver the expected services, how could a system
be designed to deliver these services in a cost-effective manner?
To answer these questions we will adopt the well-known procedures used in the
analysis and design of conventional systems to analyse the performance of complex
open systems. These are a) definition of the expected services, b) specification of
4 Dependability characterisation
It was stated earlier that the main characteristics of this complex system that
consists of applications relying on a global communications infrastructure are its
global coverage, high connectivity, and data volume and speed. As a result, the
“Internet” has become a high-level abstraction that provides little, if any, help in
analysing the performance of existing systems or designing new ones. Therefore,
we will start our analysis by defining a framework for decomposing the problem into
simpler problems some of which either have been solved, or are solvable by known
methods. For some problems new approaches or paradigms might be necessary.
4.1 Infrastructure model
In the introductory remarks, we used the examples of automation systems for
monitoring and control of interconnected electric power networks, distributed
information systems for the provision of remote health care services, monitoring
systems for verifying multilateral treaties and public IP services. These are a few
examples of diverse applications that rely on the communications infrastructure for
the provision of the specialised services. The list can be expanded to cover the
multitude of applications that are not confined by geography. The diagram in Figure
2 shows an abstract framework of the dependence of the applications on the
communications infrastructure. Each application imposes its own performance
requirements on the communications infrastructure. The collection of these
requirements becomes the input for determining design parameters for the
infrastructure.
Figure 2: Diverse applications (monitoring systems, the electric power network, financial services, voice over IP, health care, and others) impose communications requirements specifications on the communications infrastructure.
attributes are quantifiable and can be translated into the quality of service attributes
of communications systems. From the perspective of the designer of an application,
the dependability requirements for data transport are an output derived from the
dependability requirements of the application. In turn, this output becomes input for
specifying the dependability requirements of the communications infrastructure. The
open question for the Internet is whether the dependability requirements imposed
on it by a given application are feasible and at what cost.
In addition to the dependence upon the communications infrastructure, each
application depends upon additional systems in order to provide the required
services. Practically all applications rely on electric power systems for their energy needs and on transportation systems for the delivery of products, to mention two other important infrastructures. Of course, the communications infrastructure itself
relies on the electric power infrastructure. Furthermore, the transportation
infrastructure relies on other infrastructures such as communications, power, etc.
This interdependence among infrastructures raises the question whether the
hierarchical decomposition is sufficient to describe the relationship between one or
more applications and the communications infrastructure, or some form of feedback
is necessary as shown in Figure 3.
Figure 3: Exchange of dependability requirements (a) between an application, the communications infrastructure and the electric power infrastructure, and (b) between the electric power infrastructure's automation system and the communications infrastructure, illustrating the feedback arising from interdependence.
peer ISP level. The end-to-end performance requirements can be decomposed into
the following three levels:
• Level 1 – User to application
• Level 2 – Application to Internet Service Provider
• Level 3 – Among peer ISPs
If the Level 3 performance requirements could be specified, the problem of deriving
the design parameters of the physical network would be tractable. The preceding
discussion is illustrated in Figure 4.
For the model shown in Figure 4, the four dependability attributes of an application
can easily be translated into dependability attributes of the end-to-end data
transport service. In turn, these can be translated into dependability attributes of the
communications infrastructure.
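As a hedged numerical sketch of such a translation (all figures invented), an availability attribute composes multiplicatively across the serial segments of the end-to-end path, so an end-to-end target constrains each of the three levels:

```python
def end_to_end_availability(segment_availabilities):
    """Availability of a serial path of independent segments is
    the product of the segment availabilities."""
    result = 1.0
    for a in segment_availabilities:
        result *= a
    return result

# Hypothetical per-level availabilities for the three-level model:
levels = {
    "user-to-application": 0.9999,
    "application-to-ISP": 0.999,
    "among-peer-ISPs": 0.998,
}
e2e = end_to_end_availability(levels.values())
# The end-to-end figure is necessarily lower than any single level,
# so an application-level target implies stricter per-level targets.
```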
Figure 4: The end-to-end data transport model: Level 1 requirements between users and the application, Level 2 requirements between the application and internetworking, and Level 3 requirements between internetworking and the communications networks, from which the network design specifications are derived.
have been developed in order to deliver quality of service to the IP layer. Among
them are RSVP signalling, differentiated-services per-hop queuing behaviour, and the Real-time Transport Protocol (RTP) [8], [9], which provides timing and buffering for real-time IP
services such as voice and video. Although the intent of these protocols is to deliver
quality of service to the IP layer, they also dilute the simplicity of the original IP
protocol that led to the rapid expansion of the Internet. It would be useful to assess
their impact on and their implications for the future development of the Internet.
4.4 The “cloud” is not a cloud
It has become convenient to refer to the environment created by internetworking as
the “cloud”. Although this designation provides an easy visualisation of what
happens to the data of a user after they enter the server of the ISP, it is not very
helpful for the evaluation of the dependability of the data transport service. Each
ISP has control over and is responsible for the operation of a communications
network with known characteristics including topology and bandwidth. At the IP
level, these networks are connected by internetwork routers, or gateways, under the
control of peer ISPs.
Figure 5: User ⇔ ISP ⇔ “Cloud” of interconnected peer ISPs ⇔ ISP ⇔ User
Thus the topology of the internetwork is known, with the gateways as the nodes and the links between peer ISPs as the edges. The concept of the “cloud” has arisen from the
fact that it is impossible to specify or determine the path of a packet through the
Internet, because the IP is a best-effort service, although the physical connectivity of
the Internet is deterministic as illustrated in Figure 5. The deterministic nature of the
physical topology provides a starting point for translating the dependability
requirements of an application into those of the major subsystems that form the
data transport path.
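The known, deterministic topology can be represented directly; a minimal sketch, with ISP and gateway names invented for illustration:

```python
# Inter-ISP links: each edge connects gateways of two peer ISPs.
peerings = [
    ("ISP-A/gw1", "ISP-B/gw1"),
    ("ISP-B/gw2", "ISP-C/gw1"),
    ("ISP-A/gw2", "ISP-C/gw2"),
]

def build_topology(links):
    """Build an adjacency map of the internetwork graph. The physical
    topology is deterministic and known, even though the path of any
    individual packet through it is not."""
    adj = {}
    for a, b in links:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    return adj

topology = build_topology(peerings)
```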
The interconnection between any two peer ISPs is through a relatively small number
of gateways. For any two peer ISPs, the performance requirements of the corresponding interconnected gateways are specified in service level agreements
(SLA). Thus, a mechanism exists for specifying quality of service requirements
among peer ISPs. In view of the preceding discussion about the nature of the IP,
the outstanding question is whether dependability requirements for data transport
can be specified and met at the interconnections among peer ISPs.
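A sketch of how SLA terms between peer ISPs might be checked against measured performance; the term names and thresholds below are hypothetical, not taken from any actual SLA:

```python
def sla_violations(sla, measured):
    """Return the SLA terms that the measured values fail to meet.
    'availability' and 'throughput_mbps' are floors; 'latency_ms' and
    'packet_loss_pct' are ceilings. (Term names are hypothetical.)"""
    floors = {"availability", "throughput_mbps"}
    violations = []
    for term, threshold in sla.items():
        value = measured.get(term)
        if value is None:
            continue  # term not monitored in this period
        if term in floors:
            if value < threshold:
                violations.append(term)
        elif value > threshold:
            violations.append(term)
    return violations

sla = {"availability": 0.999, "latency_ms": 100.0, "packet_loss_pct": 0.5}
measured = {"availability": 0.9995, "latency_ms": 140.0, "packet_loss_pct": 0.2}
```

A check of this kind presupposes exactly what the text identifies as the open question: that dependability requirements can be stated, and monitored, at the inter-ISP boundary.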
• What are the feasible and cost-effective solutions for translating the end-to-end data transport performance requirements into criteria for interconnectivity among peer ISPs?
• What are the technical issues affecting the monitoring of performance between peer ISPs with respect to the end-to-end data transport dependability requirements?
The preceding analysis leads to the conclusion that the dependability requirements
of an application can be translated into dependability requirements for data
transport through the Internet. However, the attribute of timeliness could not be
satisfied with the basic IP. The new protocols attempt to solve this problem, but they raise the question of the trade-off between the overall performance of the original Internet, based on the simplicity of the IP interface, and that of a new Internet imposing complexity at that interface.
from the nominal design value, when the channel noise power increases beyond
that predicted by the noise model used for designing the channel. In computer
systems, the concept of fault-tolerance is used to describe techniques for ensuring
that the effect of the faults on the system performance will be within specified
tolerances and leading to the design of fault-tolerant systems.
In designing systems to operate in the presence of disturbances, the sole concern
regarding the disturbances is how to describe their behaviour so as to eliminate their
impact on the operation of the system. Generally, the approach is
phenomenological. Models are developed from measurements. The models are
probabilistic, because the observations are statistical in nature. An underlying and
unspoken assumption is that, as long as a disturbance is present and measurable, it
can be modelled. The validity of the models depends on the completeness and
sufficiency of the measurements. Another perspective has been brought in when
man-made disturbances are generated intentionally to affect the operation of the
system. In some instances, the intentional disturbances have been labelled as attacks. To conform with the terminology adopted for this paper, the term threat is
used to describe both unintentional and intentional disturbances. Intentional threats
are posed by humans, but they may affect the system through another intermediary
system agent. In other words, the threat may appear to originate in a system agent
instead of one or more human beings. Sometimes, the term malicious threat is used
to indicate intentional disturbances, the implication being that the intent is to do
harm. This designation is based on an unquantifiable value judgement regarding the
meaning of the word harm. On the other hand, the term intentional designates
disturbances introduced knowingly by humans regardless of motives. For this
reason, it is a more appropriate term for classifying man-made disturbances
generated intentionally.
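The phenomenological approach described above can be sketched for unintentional disturbances (all measurements invented); a simple Poisson model is assumed here, and, as the discussion implies, such a model may not carry over to intentional threats:

```python
import math

def estimate_rate(event_counts, window_hours):
    """Estimate the mean disturbance arrival rate (events/hour)
    from repeated observation windows of equal length."""
    return sum(event_counts) / (len(event_counts) * window_hours)

def prob_at_least_one(rate_per_hour, hours):
    """Under a Poisson arrival model, the probability of one or
    more disturbances occurring in the given interval."""
    return 1.0 - math.exp(-rate_per_hour * hours)

# Hypothetical measurements: disturbances observed per 24-hour window.
counts = [2, 0, 1, 3, 1, 1]
rate = estimate_rate(counts, 24.0)
p = prob_at_least_one(rate, 1.0)
```

The validity of such a model depends, as the text notes, on the completeness and sufficiency of the measurements.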
5.2 Security and Complex Open Systems
The designation of intentional disturbances as threats has given rise to the rapidly
expanding field of security. Historically, the purpose of security has been to
eliminate the intentional threat to the application by either isolating the application from the external fault or, if possible, eliminating the fault. To accomplish this
objective, traditional security approaches entail the construction of barriers that aim
to isolate the application from the intentional threats. The objects of security have
been distinct entities that are spatially bounded. Security has been provided by the
construction of barriers that separate the object of security from the intentional
threat. Moats have isolated castles, armour has protected humans, walls have
surrounded cities, and fences have enclosed territories. In the cases where the
object of security is not distinct and spatially bounded, e.g., transportation networks,
the meaning and effectiveness of providing security through barriers is not very
clear. It is less so, for complex spatially distributed systems such as the information
infrastructure. Therefore, a new frame of reference is needed for addressing the
issue of “security” in complex open systems.
An important reason for undertaking this study has been to investigate issues
related to electronic commerce. An underlying assumption is that the electronic
market place is open and accessible to those who desire to enter. Thus, the idea of
creating barriers to isolate open systems from their environment is contrary to the
concept of openness and accessibility. As a result, it becomes meaningless to talk
about providing security, in the classical meaning of the word, for the entire
information infrastructure by creating a barrier around it. The absence of barriers
implies that the threats, unintentional and intentional, could affect any component
of the infrastructure and that the infrastructure should be able to deliver the
desirable services in a dependable manner in the presence of those threats.
If we consider the intentional threats as a subset of the external threats, we could
apply the traditional techniques of treating disturbances in the analysis and design
of complex systems. The problem of complexity is handled through system
decomposition and the impact of disturbances through modelling of the sources of
the disturbances. In this paper, we have decomposed the information infrastructure
into two subsystems: communications and applications. One could then talk about
the effects of disturbances on communications and on the applications. For the
intentional disturbances, the question of security has two parts, security of the
communications systems and security of the applications. Security for the
communications systems implies that dependability requirements for data transport
are satisfied in the presence of intentional external threats. Similarly, security of the
applications implies that the dependability requirements for the applications are
satisfied in the presence of external threats.
One category of intentional threats that would pose difficulties in modelling and
quantifying is that caused by persons who have legitimate access to a system,
either communications or applications. Intuitively, such threats may be considered
internal, because the human element is also a system component. One could then
treat such threats in terms of the probability of failure and proceed with the standard
techniques of systems analysis. It would be an understatement to say that modelling human behaviour could be a very controversial subject. It has been mentioned in this
paper only for the purpose of identifying it as another factor affecting the dependability
of complex systems.
External disturbances may affect, independently, the communications system and
the applications. In addition, external disturbances may affect the applications
through the data transport services provided by the communications system. When
the topic of security for the Internet is discussed, it is, generally, implied that the
concern is about the security of the applications from threats propagated through
the data transport service. For the class of intentional external threats the security of
the application becomes a function of the security of the communications system.
The question, then, is how to allocate the security effort between the
communications systems and the applications. At one extreme, the dependability
requirements of the applications could be based on the assumption that there is a
very high probability that threats would be transmitted to the application via the data
transport service. For such a scenario, the application would not rely on any
protection offered by the communications system and would have to take into
account the existence of these threats. At the other extreme, the dependability
requirements of the application could be based on the assumption that the
probability of external intentional disturbances being transmitted to the applications
through the communications system would be negligible. In that scenario, there
would not be any external intentional threats to the application and the dependability
requirements would produce a different design. The latter scenario would,
obviously, impose more severe security requirements on the communications
systems in order to satisfy the dependability requirements for data transport. In
either case, the probability of a threat to the application propagated through the
communications system would be one of the attributes of dependability of the data
transport service and a parameter of the quality of service requirements imposed on
the communications system. An optimal solution would require a trade off analysis
based on the statistical nature of these threats.
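The two extremes can be caricatured numerically (probabilities invented, and independent filtering by the two layers assumed):

```python
def residual_threat_probability(p_arrival, comms_block, app_block):
    """Probability that an external intentional threat propagates
    through both layers to the application's services, assuming the
    communications system and the application filter independently.
    comms_block / app_block: fraction of threats each layer stops."""
    return p_arrival * (1.0 - comms_block) * (1.0 - app_block)

p = 0.10  # hypothetical probability a threat arrives per transaction

# Extreme 1: the application assumes no protection from the network.
r1 = residual_threat_probability(p, comms_block=0.0, app_block=0.99)
# Extreme 2: the communications system carries the security burden.
r2 = residual_threat_probability(p, comms_block=0.99, app_block=0.0)
# In this simple model both extremes leave the same residual risk;
# the trade-off lies in the cost of achieving the blocking fractions
# at each layer, which is what the trade-off analysis must weigh.
```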
To provide security and/or implement a fault-tolerant design requires some
knowledge of the nature of the external threats. One may develop a catalog of all
possible external threats to a system, but such a listing would not be very useful
unless it contained specific information about the likelihood of such threats
materializing. This observation also holds for internal faults. To provide some
quantitative measures of security and fault-tolerance for a given application and
data transport service, the primary issues that need to be addressed are:
• How to identify the external threats for a given application.
• How to develop models describing the statistical properties of these threats.
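One simple candidate for the statistical models asked for above is a Poisson arrival process for threat occurrences. This is a hypothetical sketch, not a model proposed by the study, and the arrival rate used is an assumption chosen purely for illustration.

```python
# Hypothetical sketch: modelling external threat arrivals as a Poisson
# process, one elementary way to describe the "statistical properties"
# of threats. The rate below is an assumed value, not an empirical figure.
import math

def prob_at_least_one_threat(rate_per_day: float, days: float) -> float:
    """P(N >= 1) for a Poisson process with the given arrival rate."""
    return 1.0 - math.exp(-rate_per_day * days)

# Assumed rate: 0.2 attack attempts per day against this application.
print(prob_at_least_one_threat(0.2, 1))   # exposure over one day
print(prob_at_least_one_threat(0.2, 30))  # exposure over one month
```

A quantitative security or fault-tolerance requirement could then be stated against such a model, e.g. as a bound on the probability of at least one undetected threat over a service period.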
As in the case of data integrity, the timeliness requirement for data transport can be
specified by the quality of service agreement.
Unavailability of the data transport service is another threat to the application, and
it can have benign or malicious origins. It is easy to monitor for the absence of
data channels connected to the application but, as in the case of timeliness, the
dependency on the data transport service is nearly absolute unless provisions
have been made for alternate services. Use of redundant data transmission paths
is, of course, a standard technique for reducing the probability of loss of the path.
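The benefit of redundant paths can be quantified with the standard independence argument: if each of n paths is unavailable with probability p, the service is lost only when all n fail. The sketch below assumes independent path failures, which real deployments (shared ducts, common carriers) may violate.

```python
# Minimal sketch of the standard redundancy argument. Independence of
# path failures is an assumption; correlated failures would weaken
# the result considerably.

def p_loss_of_service(p_path_down: float, n_paths: int) -> float:
    """Probability that all n independent paths are down simultaneously."""
    return p_path_down ** n_paths

# Assumed per-path unavailability of 1%.
for n in (1, 2, 3):
    print(f"{n} path(s): P(loss) = {p_loss_of_service(0.01, n):.6f}")
```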
The quantity of transported data could pose a threat to the application by falling
outside specified upper and lower bounds. The causes could be either malicious or
benign. Regardless of the cause, the threat can be countered by isolating the
application from the data that violate the quantity attribute or, as in the cases of
timeliness and availability, merely detected without being countered.
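The bounds check behind such isolation is straightforward to express. The guard below is an invented illustration (names and bounds are assumptions); a real implementation would sit in the network stack or at a gateway, whereas this only shows the admission decision itself.

```python
# Illustrative guard (names and bounds are invented assumptions):
# isolating the application from traffic whose volume violates agreed
# upper/lower bounds, whatever the cause - flood or starvation.

LOWER_BYTES_PER_S = 1_000      # assumed contractual lower bound
UPPER_BYTES_PER_S = 5_000_000  # assumed contractual upper bound

def admit(observed_bytes_per_s: float) -> bool:
    """Return True if traffic is within bounds and may reach the application."""
    return LOWER_BYTES_PER_S <= observed_bytes_per_s <= UPPER_BYTES_PER_S

print(admit(250_000))     # normal load: admitted
print(admit(50_000_000))  # flood (e.g. denial of service): isolated
print(admit(10))          # starvation: isolated
```

Note that this counters the quantity violation only from the application's point of view; detecting and removing its cause remains a task for the communications system.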
The major issues concerning the threats to the dependability of the applications are
centered on technologies and procedures for monitoring the application for the
presence of threats originating in the data transport service. From a security
perspective, complete isolation of the application from the data transport service
implies the building of a barrier, such as end-to-end encryption, between the two.
would be an operational fault that could generate congestion and pose a threat to
the requirement of timeliness.
In this section, we have given some examples of the various types of threats that
could affect the four attributes of dependability as they apply to the transport of data
through the information infrastructure. These threats arise either at the level of the
individual communications networks or at the internetworking level, and they can be
addressed within each network management structure or through agreements among peer
ISPs.
6 Conclusions
6.1 Findings on method
In this paper we have presented a method for studying complex systems and have
applied it to the characterisation of dependability for large-scale information
infrastructures that employ open systems, such as the Internet, as the
communications medium. The methodology is based on the decomposition of
infrastructures into two
major components, applications and communications infrastructure. The study
focused on two main issues at the interfaces between these components: a)
Characterisation of dependability in terms of performance attributes and
requirements and b) characterisation of threats and vulnerabilities.
The decomposition approach allows for the establishment of measurable attributes
of dependability for each of the two major components and for translating the
dependability attributes of the applications into those of the communications
infrastructure. The attributes of availability, integrity, quantity and timeliness have
been used in this study for demonstrating the approach. These attributes are
relevant for specifying the performance requirements of the data transport services
offered by the internetwork and for translating those requirements into quality of
service attributes for the communications systems forming the Internet.
The major conclusions and challenges derived from this approach are:
a) Quantifiable attributes and metrics can be specified for end-to-end dependability
requirements of applications relying on the Internet for data transport services.
b) For the component network, the end-to-end dependability requirements can be
mapped into network quality of service attributes that can be used to design the
network.
c) At the internetworking level, the end-to-end dependability attributes cannot be
satisfied, because IP is inherently a best-effort protocol. It is an open
question whether the current efforts to offer quality of service over IP would
diminish the primary attraction of IP as an open system that offers easy access
to new entrants.
d) Given that it would be desirable to maintain IP as an open system, the end-to-
end dependability could be improved through new approaches to
internetworking architectures and protocols.
e) There is also a need to develop statistical models for better characterising data
traffic at the internetworking level.
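Conclusion (b), the mapping of end-to-end requirements into per-network quality of service attributes, can be sketched for the availability attribute. Assuming the path crosses n component networks that fail independently (an assumption, as is the target value used), end-to-end availability is the product of the per-network availabilities, so each network must meet at least the n-th root of the end-to-end target.

```python
# Hedged sketch of mapping an end-to-end availability requirement onto
# per-network quality-of-service targets. Assumes n serially traversed
# networks with independent failures, so A_end_to_end = A_network ** n.

def per_network_target(end_to_end_availability: float, n_networks: int) -> float:
    """Minimum availability each network must offer to meet the target."""
    return end_to_end_availability ** (1.0 / n_networks)

target = 0.999  # assumed end-to-end requirement ("three nines")
for n in (2, 3, 5):
    print(f"{n} networks: each must offer >= {per_network_target(target, n):.6f}")
```

The per-network targets become more stringent as the number of traversed networks grows, which is one reason the internetworking level, where the path composition is not under any single operator's control, is singled out as the open problem.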
The vulnerabilities of the applications to the threats propagated through the data
transport service offered by internetworking have been examined by partitioning the
threats into intentional and unintentional. For unintentional threats, the conventional
design philosophy has been to model the threats as external (independent)
variables and design a system to satisfy stated performance requirements in the
presence of such threats. This philosophy has been extended to the class of
intentional threats, so that, from a design perspective all threats are treated as
independent variables.
The major conclusions and challenges drawn from treating all threats as a single
category of independent variables are:
a) There is a need to develop models for identifying and quantifying intentional
threats. Within a wider risk management frame for addressing the problem, this
should also include models of vulnerabilities of the different layers of the
information infrastructure.
[Figure: a virtual enterprise comprising Application 1 and Application 2, each
subject to disruptions 1-3.]
6.3 Follow on
a) Presentation of the study at a workshop in February 2001 on interdependencies
between the energy and telecoms sectors, in order to validate the results of the
study with these industry sectors and subsequently refine some of the identified
issues.
b) Further elaborate within other industry forums and project consortia on open
issues related to infrastructure interdependencies. In particular, further elaborate
on the scope and nature of the problem by developing application scenarios that
exemplify the vulnerabilities related to these interdependencies.
c) Stimulate pan-EU and global research and discussions on dependability of
critical infrastructures. Apart from technology issues, this could also include
socio-economic issues related to risk acceptance.
d) Establish an electronic forum for ongoing discussions on the topic and
information exchange at international level. The dependability forum
(http://deppy.jrc.it) will be used for that purpose.
7 Acknowledgments
We would like to thank the representatives of the organisations that contributed to
this study. In particular:
Alberto Stefanini: CESI
Angelo Invernizzi: CESI
Oliver Botti: CESI
Marco Invernizzi: University of Genova, Department of Electrical Engineering
John Regnault: BT Applied Research and Technology
Alberto Sanna: Hospital San Raffaele - HSR
Yves Deswarte: LAAS-CNRS
Henri Lagneau: European Rail Research Institute
Marcelo Masera, Ioannis Drossinos, Denis Sarigiannis: JRC-ISIS
8 References
[1] EC IST programme consultation meeting on “Infrastructure Adaptability and
Survivability for Dependable and Reliable Services”. Report of the workshop held in
Brussels on 23 May 2000. Report also available at
http://www.cordis.lu/ist/fpd/wpconsult.htm
[2] “Dependability and Complexity: Exploring Ideas for studying large unbounded
systems”, Terms of Reference for an exploratory study, 20 December 1999.
[3] “Dependability and Complexity: Exploring ideas for studying large unbounded
systems”, Study Method, 18 January 2000.
[4] Nicholas Kyriakopoulos, Marc Wilikens, Dependability of complex open systems:
A unifying concept for understanding Internet-related issues. Third Information
Survivability Workshop, Boston, Massachusetts, USA, 24-26 October 2000. Workshop
sponsored by the IEEE Computer Society and the US State Department and
organized by the CERT Coordination Center, Software Engineering Institute.
http://www.cert.org/research/isw.html
[5] Critical Infrastructure Protection PDD-63, Presidential Directive signed on May
22, 1998, on protecting the Nation's critical infrastructures from both physical and
cyber attack.
[6] Fred Halsall, Data Communications, Computer Networks and Open Systems,
Third Edition, Addison-Wesley, 1993
[7] Jean-Claude Laprie (ed.), Dependability: Basic Concepts and Terminology,
Springer-Verlag, Vienna, 1993
[8] M. Laubach, J. Halpern, “Classical IP and ARP over ATM”, Network Working
Group, Request for Comments: 2225, April 1998
[9] D. Grossman, J. Heinanen, “Multiprotocol Encapsulation over ATM Adaptation
Layer 5”, Network Working Group, Request for Comments: 2684, September 1999
[10] Robert H. Anderson et al., “Securing the US Defense Information
Infrastructure: A Proposed Approach”, RAND, 1999. ISBN 0-8330-2713-1.
[11] Y. Tu, “How robust is the Internet?”, Nature 406, 353-354, 2000.