
Chapter 1
Applying Software-defined Networks to Cloud
Computing

Bruno Medeiros de Barros (USP), Marcos Antonio Simplicio Jr. (USP),


Tereza Cristina Melo de Brito Carvalho (USP), Marco Antonio Torrez Rojas
(USP), Fernando Frota Redígolo (USP), Ewerton Rodrigues Andrade (USP),
Dino Raffael Cristofoleti Magri (USP)

Abstract

Network virtualization and network management for cloud computing systems have become
quite active research areas in recent years. More recently, the advent of
Software-Defined Networks (SDNs) introduced new concepts for tackling these issues,
fomenting new research initiatives oriented to the development and application of SDNs
in the cloud. The goal of this course is to analyze these opportunities, showing how
the SDN technology can be employed to develop, organize and virtualize cloud network-
ing. Besides discussing the theoretical aspects related to this integration, as well as the
ensuing benefits, we present a practical a case study based on the integration between
OpenDaylight SDN controller and OpenStack cloud operating system.

1.1. Introduction
The present section introduces the main topics of the course, providing an evolutionary
view of network virtualization in cloud computing and distributed systems. We present the
main changes that have occurred in the field in recent years, focusing on the advent of
Software-Defined Networks (SDNs) and their implications for the current research scenario.

1.1.1. The Role of Networking in Cloud Computing


Cloud computing has ushered the information technology (IT) field and service providers
into a new era, redefining how computational resources and services are delivered and
consumed. With cloud computing, distinct and distributed physical resources such as
computing power and storage space can be acquired and used on an on-demand ba-
sis, empowering applications with scalability and elasticity at low cost. This allows
the creation of different service models, generally classified as [Mell and Grance 2011]:
Infrastructure-as-a-Service (IaaS), which consists of providing only fundamental comput-
ing resources such as processing, storage and networks; Platform-as-a-Service (PaaS), in
which a development platform with the required tools (languages, libraries, etc.) is pro-
vided to tenants; and Software-as-a-Service (SaaS), in which the consumer simply uses
the applications running on the cloud infrastructure.
To actually provide cost reductions, the cloud needs to take advantage of
economies of scale, and one key technology for doing so is resource virtualization. After
all, virtualization allows the creation of a logical abstraction layer above the pool of physi-
cal resources, thereby enabling a programmatic approach to allocate resources wherever
needed while hiding the complexities involved in their management. The result is poten-
tially very efficient resource utilization, better manageability, on-demand and program-
matic resource instantiation, and resource isolation for better control, accounting and
availability.
In any cloud environment, the network is a critical resource that connects various
distributed and virtualized components, such as servers, storage elements, appliances and
applications. For example, it is the network that allows aggregation of physical servers,
efficient virtual machine (VM) migration, and remote connection to storage systems, ef-
fectively creating the perception of a large, monolithic resource pool. Furthermore, it is
also the network that enables the delivery of cloud-based applications to end users. Yet, while
every component in a cloud is getting virtualized, the physical network connecting these
components is not. Without virtualization, the network is a single physical network shared
by all cloud end-users and cloud components, and it is likely to become a highly complex
system as the cloud evolves to provide new services with diverse requirements while
trying to sustain its scale.

1.1.2. The Advent of Software-Defined Networks (SDNs)


The term SDN originally appeared in [Greene 2009], referring to the ability of Open-
Flow [McKeown et al. 2008] to support the configuration of table flows in routers and
switches using software. However, the ideas behind SDNs come from the goal of hav-
ing a programmable network, whose research started shortly after the emergence of the
Internet, led mainly by the telecom industry. Today, the networking industry has shown
enormous interest in the SDN paradigm, given the expectation of reducing both capital
and operational costs of service providers and enterprise data centers through pro-
grammable, virtualizable and easily partitionable networks. Actually, programmability
is also becoming a strategic feature for network hardware vendors, since it allows
many devices to be programmed and orchestrated in large network deployments (e.g.,
data centers). In addition, discussions related to the future Internet have led to the stan-
dardization of SDN-related application programming interfaces (APIs), with new com-
munication protocols being successfully deployed in experimental and real-world scenarios
[Kim et al. 2013, Pan et al. 2011].
These features of SDNs make them highly valuable for cloud computing systems,
where the network infrastructure is shared by a number of independent entities and, thus,
network management becomes a challenge. Indeed, while the first wave of innovation
in the cloud focused on server virtualization technologies and on how to abstract com-
putational resources such as processor, memory and storage, SDNs are today promot-
ing a second wave with network virtualization [Lin et al. 2014]. The emergence of large
SDN controllers focused on ensuring availability and scalability of virtual networking for
cloud computing systems (e.g., OpenDayLight [Medved et al. 2014] and OpenContrail
[OpenContrail 2014]) is a clear indication of this synergy between both technologies.
Besides the cloud, SDNs have also been adopted in other computing scenarios,
with device vendors following the SDN path and implementing most of the control logic
in software over standard processors. This has led to the emergence of software-defined
base stations, software-defined optical switches, software-defined radios, software-defined
routers, among others.

1.2. Cloud Computing and Network Virtualization


This section aims to introduce the concepts and technologies related to network virtual-
ization in cloud computing systems. We start by describing the main virtualization tech-
nologies used to implement multitenant networks. Then, we present an architectural view
of virtual networks in the cloud, discussing the main components involved, their respon-
sibilities and existing interfaces. Finally, we focus on security, scalability and availability
aspects of the presented solutions.

1.2.1. Cloud Computing and Resource Virtualization


Virtualization is not a new concept in computing, having in fact appeared in the 1970s
[Jain and Paul 2013a, Menascé 2005]. The concept of virtualization has evolved with
time, however, going from virtual memory to processor virtualization (e.g., Hyper-V,
AMD-V, Hyper-threading) up to the virtualization of network resources (e.g., SDN, Open
vSwitch, etc.).
With the advent of cloud computing and the demand for virtualizing entire
computing environments, new virtualization techniques were developed, among them
[Amazon 2014]:
• Full Virtualization or Hardware VM: all hardware resources are simulated via
software. The hardware itself is not directly accessed by VMs, as the hypervisor translates
all interrupts and calls between the virtual and physical appliances. Obviously, this
technique incurs performance penalties due to I/O limitations, so it is less efficient than
its counterparts. However, it offers high flexibility, as systems running on VMs do not
need to be altered if there is a change on the underlying physical hardware;
• Para-Virtualization: the hardware is not simulated, but divided into different do-
mains so they can be accessed by VMs. Systems running on VMs need to be adapted
so that they can directly access the physical machine’s hardware resources. Performance
here is close to the performance on the physical machine (bare-metal), with the drawback
of limited flexibility, as hardware upgrades may demand changes on VMs.
• Para-virtualized drivers (Para+Full Virtualization): a combination of the pre-
vious techniques. As para-virtualized storage and networking devices have much better
performance than their fully-virtualized counterparts [Amazon 2014], this is the technique
applied to these devices, while full virtualization (and the consequent flexibility brought
by it) is applied to devices whose performance is not critically affected (the sketch below
shows how one can check which guest devices use para-virtualized drivers). This approach
allows minimal changes when physical hardware upgrades are needed.
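As a concrete illustration of the para-virtualized drivers approach, the sketch below uses
the libvirt Python bindings to inspect a guest's definition and report which devices rely on
para-virtualized (virtio) drivers and which remain fully emulated. The hypervisor URI and
the guest name "vm1" are assumptions made for this example, not part of any deployment
described in this chapter.

```python
# Minimal sketch, assuming the libvirt Python bindings, a local QEMU/KVM
# hypervisor and a guest domain named "vm1" (both hypothetical): it inspects
# the guest definition to show which devices use para-virtualized (virtio)
# drivers and which remain fully emulated.
import xml.etree.ElementTree as ET

import libvirt

conn = libvirt.open("qemu:///system")      # connect to the local hypervisor
dom = conn.lookupByName("vm1")             # hypothetical guest name
devices = ET.fromstring(dom.XMLDesc(0)).find("devices")

for disk in devices.findall("disk"):
    # A 'virtio' bus indicates a para-virtualized disk; 'ide'/'sata' are emulated.
    target = disk.find("target")
    print("disk", target.get("dev"), "-> bus:", target.get("bus"))

for iface in devices.findall("interface"):
    # A model of type 'virtio' indicates a para-virtualized vNIC; 'e1000' etc. are emulated.
    model = iface.find("model")
    print("nic ->", model.get("type") if model is not None else "default model")

conn.close()
```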
Several studies highlight the benefits of virtualization in a computing environ-
ment. Among them, the following can be cited [Menascé 2005, Kotsovinos 2010]:

• Resource sharing: when a device has more resources than what can be con-
sumed by a single entity, those resources can be shared among different users or processes
for better usage efficiency. For example, the different user applications or VMs running
on a server can share its multiple processors, storage disks or network links. If properly
executed, the savings achieved by consolidating small servers onto VMs, for example,
can range from 29% to 64% [Menascé 2005];
• Resource aggregation: devices with a low availability of resources can be com-
bined to create a larger-capacity virtual resource. For example, with an adequate file
management system, small-size magnetic disks can be combined to create the impression
of a large virtual disk.
• Ease of management: one of the main advantages of virtualization is that it
facilitates maintenance of virtual hardware resources. One reason is that virtualization
usually provides standard software interfaces that abstract the underlying hardware (ex-
cept for para-virtualization). In addition, legacy applications placed in virtualized en-
vironments can keep running even after being migrated to a new infrastructure, as the
hypervisor becomes responsible for translating old instructions into those comprehensible
by the underlying physical hardware.
• Dynamics: with the constant changes to application requirements and work-
loads, rapid resource reallocation or new resource provisioning becomes essential for ful-
filling these new demands. Virtualization is a powerful tool for this task, since virtual
resources can be easily expanded, reallocated, moved or removed without concerns about
which physical resources will support the new demands. As an example, when a user
provisions a dynamic virtual disk, the underlying physical disk does not need to have that
capacity available at provisioning time: it just needs to be available when the user actually
needs to use it.
• Isolation: multi-user environments may contain users that do not trust each
other. Therefore, it is essential that all users have their resources isolated from other
users, even if this is done logically (i.e., in software). When this happens, malicious users
are unable to monitor and/or interfere with other users’ activities, preventing a vulnera-
bility in or attack on a given machine from affecting other users.

Despite their benefits, there are also disadvantages of virtualized environments,


such as [Kotsovinos 2010]:

• Performance: even though there is no single method for measuring perfor-


mance, it is intuitive that the extra software layer of the hypervisor leads to higher pro-
cessing costs than in a comparable system with no virtualization.
• Management: virtual environments abstract physical resources in software and
files, so they need to be instantiated, monitored, configured and saved in an efficient and
auditable manner, which is not always an easy task.
• Security: whereas isolation is a mandatory requirement for VMs in many real-
world scenarios, completely isolating a virtualized resource from another, or applications
running on the physical hardware from virtualized ones, are involved (if not impossible)
tasks. Therefore, it is hard to say whether or not a physical server hosting several virtual-
ized applications is monitoring them with the goal of obtaining confidential information,
or even whether a VM is somehow attacking or monitoring another VM.

1.2.2. Mechanisms for Network Virtualization


To understand the mechanisms that can implement network virtualization, first we need
to understand which resources can be virtualized in a network. In terms of resources,
networks are basically composed of network interface cards (NICs) connected to a layer
2 network through a switch. These layer 2 networks can be connected through routers to
form a layer 3 network, which in turn can be connected via routers to compose the Inter-
net. Each of these network components — NIC, L2 network, L2 switch, L3 network, and
L3 router — can be virtualized [Jain and Paul 2013b]. However, there are multiple, often
competing, mechanisms for virtualizing these resources, as discussed in what follows:

• Virtualization of NICs: Every networked computer system is equipped with at


least one NIC. In virtualized environments with multiple VMs, it thus becomes necessary
to provide every VM with its own virtual NIC (vNIC). This need is currently satisfied by
the hypervisor software, which is able to provide as many vNICs as the number of VMs
under its responsibility. The vNICs are connected to the physical NIC (pNIC) through
a virtual switch (vSwitch), just like physical NICs can be connected through a physical
switch to compose layer 2 networks. This NIC virtualization strategy has benefits such as
transparency and simplicity and, thus, is generally proposed by software vendors. Never-
theless, there is an alternative design proposed by pNIC (chip) vendors, which is to vir-
tualize NIC ports using single-root I/O virtualization (SR-IOV) [PCI-SIG 2010] on the
peripheral-component interconnect (PCI) bus. This approach directly connects the VMs
to the pNICs, potentially providing better performance (as it eliminates intermediary soft-
ware) and resource isolation (as the traffic does not go through a shared vSwitch). A third
design approach, promoted by physical switch vendors, is to provide virtual channels for
inter-VM communication using a virtual Ethernet port aggregator (VEPA) [IEEE 2012b],
which in turn passes VM frames to an external switch that implements inter-VM com-
munication. This approach not only frees up server resources, but also provides better
visibility and control over the traffic between any pair of VMs.
• Virtualization of L2 Switches: The number of ports in a typical switch is lim-
ited, and possibly lower than the number of physical machines that need to be connected
on an L2 network. Therefore, several layers of L2 switches need to be connected to ad-
dress network scalability requirements. To solve this issue, the IEEE 802.1BR Bridge Port
Extension standard [IEEE 2012a] proposes a virtual bridge with a large number of ports,
using physical or virtual port extenders (like a vSwitch).
• Virtualization of L2 Networks: In a multitenant data center, VMs in a sin-
gle physical machine may belong to different clients and, thus, need to be in different
virtual LANs (VLANs) [IEEE 2014]. VLANs implement frame tagging, allowing
L2 devices to isolate clients’ traffic in different logical L2 networks, so different virtual
networks can use the same addressing space for different clients (see the sketch after this list).
• Virtualization of L3 Networks: When the multitenant environment is extended
to a layer 3 network, there are a number of competing proposals to solve the problem. Ex-
amples include: virtual extensible LANs (VXLANs) [Mahalingam et al. 2014b]; network
virtualization using generic routing encapsulation (NVGRE) [Sridharan et al. 2011]; and
the Stateless Transport Tunneling (STT) protocol [Pan and Wu 2009].
• Virtualization of L3 Router: Multicore processors allow the design of network-
ing devices using software modules that run on standard processors. By combining
many different software-based functional modules, any networking device (L2 switch, L3
router, etc.) can be implemented in a cost-effective manner while providing acceptable
performance. Network Function Virtualization (NFV) [Carapinha and Jiménez 2009]
provides the conceptual framework for developing and deploying virtual L3 routers and
other layer 3 network resources.
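To make these mechanisms more concrete, the sketch below shows how a vSwitch, a
VLAN-tagged tenant port and a VXLAN tunnel could be set up on a single Linux host. It
is a minimal illustration that assumes Open vSwitch is installed; the bridge and port names,
VLAN ID, VXLAN identifier and remote IP address are placeholders chosen for the example.

```python
# Minimal sketch, assuming a Linux host with Open vSwitch installed; bridge
# and port names, VLAN ID, VXLAN key and remote IP are placeholders.
import subprocess

def ovs(*args):
    # Run an ovs-vsctl command, raising an exception if it fails.
    subprocess.run(["ovs-vsctl", *args], check=True)

# NIC virtualization: a virtual switch (vSwitch) to which vNICs are attached.
ovs("add-br", "br-int")

# L2 virtualization: attach a tenant vNIC as a VLAN access port (tag 100),
# isolating its traffic from other tenants sharing the same bridge.
ovs("add-port", "br-int", "vnet0", "tag=100")

# L3 overlay: a VXLAN tunnel port encapsulating tenant L2 frames towards a
# remote hypervisor, so the tenant network can span different L3 segments.
ovs("add-port", "br-int", "vxlan0", "--",
    "set", "interface", "vxlan0", "type=vxlan",
    "options:remote_ip=192.0.2.10", "options:key=42")
```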

1.2.3. Virtual Network Applications in Cloud Computing


As discussed in Section 1.1.1, the interest surrounding network virtualization has been fu-
eled by cloud computing and its isolation and scalability requirements. All the network
virtualization mechanisms presented in Section 1.2.2 can be applied to solve specific net-
work issues in cloud computing, especially for the implementation of multitenant data
centers. Specifically, as depicted in Figure 1.1, a data center consists mainly of servers
in racks interconnected via a top-of-rack Ethernet switch, which in turn connects to an
aggregation switch, also known as an end-of-rack switch. The aggregation switches then
connect to each other, as well as to the other servers in the data center. A core switch con-
nects the various aggregation switches and provides connectivity to the outside world,
typically through layer 3 networks. In multitenant data centers, client VMs are com-
monly placed on different servers, connected through the L2 network composed of this
switch-enabled infrastructure. The virtualization of L2 switches via mechanisms such as
VLAN enables the abstraction of tenant L2 networks over the distributed cloud data cen-
ter, allowing traffic isolation of tenant networks with different logical addressing spaces.
Similarly, the virtualization of L3 routers using technologies such as VXLAN and GRE
tunneling enables the abstraction of layer 3 networks, connecting multiple data centers and
allowing tenant networks to be distributed across different sites.
Another inherent characteristic of multitenant data centers is the virtualization of
servers, enabling the instantiation of multiple VMs. VMs deployed in the same cloud
server commonly belong to different tenants and share the same computing resources,
including the network interface (NIC). Mechanisms to virtualize server NICs such as vir-
tual switches (i.e., in software) and SR-IOV (i.e., in hardware) are necessary to address
multi-tenancy. Besides virtual switches, other software-based virtualization mechanisms
are enabled by the NFV approach. NFV consists of the virtualization of network func-
tional classes, such as routers, firewalls, load balancers and WAN accelerators. These
appliances take the form of software-based modules that can be deployed in one or more
VMs running on top of servers.
Figure 1.1. Cloud data center network.

1.2.4. Security, Scalability and Availability aspects


The widespread adoption of cloud computing has raised important security concerns
inherent to multi-tenant environments. The need for isolating the server and network
resources consumed by different tenants is an example of the security requirements intro-
duced by cloud computing. Virtualization technologies such as those presented in Section
1.2.2 play a crucial role in this scenario as mechanisms to enforce resource isolation.
Considering the cloud networking scenario, virtualization technologies should provide se-
cure network domains for the cloud tenants, enabling secure connectivity for the services
running inside the cloud. Understanding the nature of security threats is a fundamental
component of managing risks and implementing secure network services for cloud tenants.
According to [Barros et al. 2015], there are three threat scenarios in cloud computing net-
working, as explained below.

• Tenant-to-Tenant: Threats related to attacks promoted by a legitimate tenant tar-


geting another legitimate tenant by exploiting security vulnerabilities in the cloud networking
infrastructure.
• Tenant-to-Provider: Threats related to cloud vulnerabilities that allow a legit-
imate tenant to disrupt the operation of the cloud infrastructure, preventing the cloud
provider from delivering the service in accordance with the service level agreements
(SLAs) established with other legitimate tenants.
• Provider-to-Tenant: Threats related to vulnerabilities in the cloud provider in-
frastructure, which allow malicious insider attacks from employees and other agents with
direct access to the cloud infrastructure.

Different scenarios can also give rise to different groups of security threats in cloud
networking. Consequently, different groups of security solutions built upon net-
work virtualization mechanisms should be applied to ensure the secure provision of cloud
services. Also according to [Barros et al. 2015], the sources of security threats in the
cloud networking scenario are described as follows.

• Physical isolation: Security threats originating from shared physical resources


in the underlying network infrastructure, such as server NICs, switches and routers. At-
tacks are commonly related to hijacking and analyzing tenant data from shared resources
or even causing resource exhaustion on shared physical network elements.
• Logical isolation: Security threats originating from shared virtual resources such
as virtual switches, Linux bridges and virtual routers. Security attacks commonly exploit
vulnerabilities in software-based virtualization mechanisms to access unauthorized data
and to reduce the quality of cloud network services.
• Authentication: Security threats originating from vulnerabilities related to
inadequate authentication, which allow attackers to mask their real identities.
This can be accomplished by exploiting authentication protocols, acquiring credentials
and/or key materials by capturing data traffic, or via password recovery attacks (e.g.,
brute force or dictionary attacks).
• Authorization: Security threats originating from vulnerabilities related to autho-
rization problems, allowing the granting or escalation of rights, permissions or credentials
to or from an unauthorized user. For example, the attacker can exploit a vulnerability in the
cloud platform’s authorization modules, or even in the victim’s computer, to create or change
its credentials in order to obtain privileged rights.
• Insecure API: Security threats related to failures, malfunctions and vulnerabil-
ities in the APIs that compose the cloud system. Attacks of this class try to exploit insecure
interfaces to access or tamper with services belonging to other tenants or with cloud ad-
ministrative tools.

Following the principles of cloud computing, cloud networking should be


highly scalable. The scalability of cloud networks is directly related to the features pro-
vided by the network virtualization mechanisms. Technologies such as VLAN
and SR-IOV have intrinsic scalability limitations related to the number of VMs hosted
in the same node. The capacity to replicate and migrate virtual domains in cloud com-
puting is fundamental to ensure the availability of cloud services. Redundant links
in the underlying infrastructure, as well as eliminating single points of failure in physical
and virtual network resources, are good practices for network availability in multi-tenant
environments.

1.3. Software-defined Networks (SDNs)


The current section introduces the concept of SDNs, as well as its importance in the net-
work virtualization scenario. Introducing the conceptual and practical division between
control plane and data plane, we explore the opportunities to apply SDN technologies
in different network architectures, focusing on the role of the SDN control layer in network
virtualization deployments. We also present a reference architecture to implement virtual
networks in real scenarios. This section also presents an evolutionary view of the SDN
controllers currently available in the market, aiming to support network professionals and
decision makers in adopting the right SDN approach for their deployments. We finish by focusing
on security, scalability and availability aspects of the presented solutions.

1.3.1. Creating Programmable Networks: a Historical Perspective


Recently, there has been considerable excitement surrounding the SDN concept, which
is explained by the emergence of new application areas such as network virtualization
and cloud networking. However, the basic ideas behind the SDN technology are actually
a result of more than 20 years of advances in the network field, in particular the interest
in turning computer networks into programmable systems. Aiming to give an overview
of this evolution, we can divide the historical advancements that culminated in the SDN
concept into three different phases [Feamster et al. 2013], as follows:

1. Active Networks (from the mid-1990s to the early 2000s): This phase follows
the historical advent of the Internet, a period in which the demands for innovation in the
computer networks area were met mainly by the development and tests of new proto-
cols in laboratories with limited infrastructure and simulation tools. In this context, the
so-called “active networks” appeared as a first initiative aiming to turn network devices
(e.g., switches and routers) into programmable elements and, thus, allow further inno-
vation in the area. This programmability could then allow a separation between the two
main functionalities of networking elements: the control plane, which refers to the de-
vice’s ability to decide how each packet should be dealt with; and the data plane, which
is responsible for forwarding packets at high speed following the decisions made by the
control plane. Specifically, active networks introduced a new paradigm for dealing with
the network’s control plane, in which the resources (e.g., processing, storage, and packet
queues) provided by the network elements could be accessed through application pro-
gramming interfaces (APIs). As a result, anyone could develop new functionalities for
customizing the treatment given to the packets passing by each node composing the net-
work, promoting innovation in the networking area. However, the criticism received due
to the potential complexity it would add to the Internet itself, allied to the fact that the
distributed nature of the Internet’s control plane was seen as a way to avoid single points
of failure, reduced the interest in and diffusion of the active network concept in the industry.
2. Control- and data-plane separation (from around 2001 to 2007): After the In-
ternet became a much more mature technology in the late 1990s, the continuous growth in
the volume of traffic turned the attention of the industry and academic communities to re-
quirements such as reliability, predictability and performance of computer networks. The
increasing complexity of network topologies, together with concerns regarding the perfor-
mance of backbone networks, led different hardware manufacturers to develop embedded
protocols for packet forwarding, promoting the high integration between the control and
data planes seen in today’s Internet. Nevertheless, network operators and Internet Service
Providers (ISPs) would still seek new management models to meet the needs of ever
larger and more complex network topologies. The importance of a centralized control
model became more evident, as well as the need for a separation between the control
and data planes. Among the technological innovations arising from this phase, we can cite
the creation of open interfaces for communications between the control and data planes
such as ForCES (Forwarding and Control Element Separation)[Yang et al. 2004], whose
goal was to enable a locally centralized control over the hardware elements distributed
along the network topology [Caesar et al. 2005, Lakshman et al. 2004]. To ensure the ef-
ficiency of centralized control mechanisms, the consistent replication of the control logic
among the data plane elements would play a key role. The development of such distributed
state management techniques is also among the main technological contributions from
this phase. There was, however, considerable resistance from equipment suppliers to
implement open communication interfaces, which were seen as a factor that would facil-
itate the entry of new competitors in the network market. This ended up hindering the
widespread adoption of the separation of data and control planes, limiting the number and variety
of applications developed for the control plane in spite of the possibility of doing so.
3. OpenFlow and Network Operating System (from 2007 to 2010): The
ever growing demand for open interfaces in the data plane led researchers to ex-
plore different clean slate architectures for logically centralized network control
[Casado et al. 2007, Greenberg et al. 2005, Chun et al. 2003]. In particular, the Ethane
project [Casado et al. 2007] created a centralized control solution for enterprise networks,
reducing switch control units to programmable flow-tables. The operational deployment
of Ethane in the Stanford computer science department, focusing on network experi-
mentation inside the campus, was indeed a huge success, and resulted in the creation of
the OpenFlow protocol [McKeown et al. 2008]. OpenFlow enables fully programmable net-
works by providing a standard data plane API for existing packet switching hardware.
The creation of the OpenFlow API, in turn, allowed the emergence of SDN control
platforms such as NOX [Gude et al. 2008], thus enabling the creation of a wide range of
network applications. OpenFlow provided a unified abstraction of network devices and
their functions, defining forwarding behavior through traffic flows based on 13 different in-
structions. OpenFlow also led to the vision of a network operating system that, differently
from the node-oriented system advocated by active networks, organizes the network’s op-
eration into three layers: (1) a data plane with an open interface; (2) a state management
layer that is responsible for maintaining a consistent view of the overall network state; and
(3) control logic that performs various operations depending on its view of network state
[Koponen et al. 2010]. The need for integrating and orchestrating multiple controllers for
scalability, reliability and performance purposes also led to significant enhancements on
distributed state management techniques. Following these advances, solutions such as
Onix [Koponen et al. 2010] and its open-source counterpart, ONOS (Open Network Op-
erating System) [Berde et al. 2014], introduced the idea of a network information base
that consists of a representation of the network topology and other control state shared by
all controller replicas, while incorporating past work in distributed systems to satisfy state
consistency and durability requirements.

Analyzing this historical perspective and the needs recognized in each phase, it
becomes easier to see that the SDN concept emerged as a tool for allowing further net-
work innovation, helping researchers and network operators to solve longstanding prob-
lems in network management and also to provide new network services. SDN has been
successfully explored in many different research fields, including areas such as network
virtualization and cloud networking.

1.3.2. SDNs and the Future Internet


Today’s Internet was designed more than 30 years ago, with specific requirements to
connect, in a general and minimalist fashion, the (few) existing networks at the time.
After it was proven to be very successful at this task, the TCP/IP model became widely
adopted, especially due to the possibility of running many distinct applications over its
infrastructure while keeping the core of the network as simple as possible. However,
the increase in the number of applications, users and devices making intense use of the
network resources would bring many (usually conflicting) requirements with each new
technology developed, turning the Internet into a completely different environment filled
with disputes regarding its evolution [Moreira et al. 2009].
While in the early days of the Internet the simplicity of the TCP/IP model was
considered one of its main strengths, enabling the rapid development of applications and
the growth of the network itself, it later became a weakness, as it implied an unintel-
ligent network.

Figure 1.2. Ossification of the Internet

That is the main reason why TCP/IP’s simplicity is sometimes accused of being
responsible for the “ossification of the Internet” (See Figure 1.2): without the ability of
adding intelligence to the core of the network itself, many applications had to take cor-
rective actions on other layers; many patches would be sub-optimal, imposing certain
restrictions on the applications that could be deployed with the required levels of secu-
rity, performance, scalability, mobility, maintainability, etc. Therefore, even though the
TCP/IP model displays a reasonably good level of efficiency and is able to meet many of
the original requirements of the Internet, many believe it may not be the best solution for
the future [Alkmim et al. 2011].
Many of the factors pointed out as the cause of the Internet’s ossification are re-
lated to the strong coupling between the control and data planes, so the decision on how
to treat the data flow and the execution of this decision are both handled by the same
device. In such an environment, new network applications or features have to be deployed
directly into the network infrastructure, a cumbersome task given the lack of standard in-
terfaces for doing so in a market dominated by proprietary solutions. Actually, even when
a vendor does provide interfaces for setting and implementing policies into the network
infrastructure, the presence of heterogeneous devices with incompatible interfaces ends
up hindering such seemingly trivial tasks.
This ossification issue has led to the creation of dedicated appliances for tasks
seen as essential for the network’s correct operation, such as firewalls, intrusion detection
systems (IDS), network address translators (NAT), among others [Moreira et al. 2009].
Since such solutions are many times seen as palliative, studies aimed at changing this
ossification state became more prominent, focusing especially in two approaches. The
first, more radical, involved the proposal of a completely new architecture that could
replace the current Internet model, based on past experiences and identified limitations.
This “clean slate” strategy has not received much support, however, not only due to the high
costs involved in its deployment, but also because it is quite possible that, after years of
effort to build such specification, it might become outdated after a few decades due to the
appearance of new applications with unanticipated requirements. The second approach
suggests evolving the current architecture without losing compatibility with current and
future devices, thus involving lower costs. By separating the data and control planes, thus
adding flexibility to how the network is operated, the SDN paradigm gives support to this
second strategy [Feamster et al. 2014].
According to [Open Networking Foundation 2012], the formal definition of an
SDN is: “an emerging architecture that is dynamic, manageable, cost-effective, and adapt-
able, making it ideal for the high-bandwidth, dynamic nature of today’s applications. This
architecture decouples the network control and forwarding functions enabling the net-
work control to become directly programmable and the underlying infrastructure to be
abstracted for applications and network services.” This definition is quite comprehensive,
making it clear that the main advantage of the SDN paradigm is to allow different policies
to be dynamically applied to the network by means of a logically centralized controller,
which has a global view of the network and, thus, can quickly adapt the network con-
figuration in response to changes [Kim and Feamster 2013]. At the same time, it enables
independent innovations in the now decoupled control and data planes, besides facilitat-
ing the network state visualization and the consolidation of several dedicated network
appliances into a single software implementation [Kreutz et al. 2014]. This flexibility is
probably among the main reasons why companies from different segments (e.g., device
manufacturers, cloud computing providers, among others) are increasingly adopting the
SDN paradigm as the main tool for managing their resources in an efficient and cost-
effective manner [Kreutz et al. 2014].

1.3.3. Data and Control Planes


Given that the separation between data and control planes is at the core of the SDN tech-
nology, it is important to discuss them in some detail. Figure 1.3 shows a simplified
SDN architecture and its main components, showing that the data and control planes are
connected via a well-defined programming interface between the switches and the SDN
controller.
Figure 1.3. SDN architecture overview

The data plane corresponds to the switching circuitry that interconnects all de-
vices composing the network infrastructure, together with a set of rules that define which
actions should be taken as soon as a packet arrives at one of the device’s ports. Examples
of common actions are to forward the packet to another port, rewrite (part of) its header,
or even to discard the packet.
The control plane, in turn, is responsible for programming and managing
the data plane, controlling how the routing logic should work. This is done by one or
more software controllers, whose main task is to set the routing rules to be followed
by each forwarding device through standardized interfaces, called the southbound inter-
faces. These interfaces can be implemented using protocols such as OpenFlow 1.0 and
1.3 [OpenFlow 2009, OpenFlow 2012], OVSDB [Pfaff and Davie 2013] and NETCONF
[Enns et al. 2011]. The control plane thus concentrates the intelligence of the network,
using information provided by the forwarding elements (e.g., traffic statistics and packet
headers) to decide which actions should be taken by them [Kreutz et al. 2014].
Finally, developers can take advantage of the protocols provided by the control
plane through the northbound interfaces, which abstract the low-level operations for con-
trolling the hardware devices, similarly to what is done by operating systems in computing
devices such as desktops. These interfaces can be provided by remote procedure calls
(RPC), RESTful services and other cross-application interface models. This greatly facili-
tates the construction of different network applications that, by interacting with the control
plane, can control and monitor the underlying network. This allows them to customize the
behavior of the forwarding elements, defining policies for implementing functions such
as firewalls, load balancers, intrusion detection, among others.
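As a simple illustration of a northbound interaction, the sketch below queries a controller's
REST API for its view of the connected switches. It assumes a Floodlight-style controller
listening on localhost:8080; the endpoint path follows Floodlight's documented switch-listing
resource, but the address and the response fields should be treated as illustrative.

```python
# Minimal sketch of a northbound REST call (assumes a Floodlight-style
# controller reachable at localhost:8080; address, endpoint and response
# fields are illustrative).
import requests

CONTROLLER = "http://localhost:8080"

# Ask the control plane for its global view of the connected switches.
resp = requests.get(f"{CONTROLLER}/wm/core/controller/switches/json", timeout=5)
resp.raise_for_status()

for switch in resp.json():
    # Each entry describes one forwarding element managed by the controller.
    print(switch.get("switchDPID", switch))
```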
1.3.4. The OpenFlow Protocol
The OpenFlow protocol is one of the most commonly used southbound interfaces, being
widely supported both in software and hardware, and standardized by the Open Network-
ing Foundation (ONF). It works with the concept of flows, defined as groups of packets
matching a specific (albeit non-standard) header [McKeown et al. 2008], which may be
treated differently depending on how the network is programmed. OpenFlow’s sim-
plicity and flexibility, allied to its high performance at low cost, its ability to isolate experi-
mental traffic from production traffic, and its ability to cope with vendors’ need for closed
platforms [McKeown et al. 2008], are probably among the main reasons for this success.
Whereas other SDN approaches take into account other network elements, such
as routers, OpenFlow focuses mainly on switches [Braun and Menth 2014]. Its architecture
comprises, then, three main concepts [Braun and Menth 2014]: (1) the network’s data
plane is composed of OpenFlow-compliant switches; (2) the control plane consists of
one or more controllers using the OpenFlow protocol; (3) the connection between the
switches and the control plane is made through a secure channel.
An OpenFlow switch is basically a forwarding device endowed with a Flow Table,
whose entries define the packet forwarding rules to be enforced by the device. To accom-
plish this goal, each entry of the table comprises three elements [McKeown et al. 2008]:
match fields, counters, and actions. The match fields refer to pieces of information that
identify the input packets, such as fields of its header or its ingress port. The counters, in
turn, are reserved for collecting statistics about the corresponding flow. They can,
for example, be used for keeping track of the number of packets/bytes matching that flow,
or of the time since the last packet belonging to that flow was seen (so inactive flows can
be easily identified) [Braun and Menth 2014]. Finally, the actions specify how the pack-
ets from the flow must be processed, the most basic options being: (1) forward the packet
to a given port, so it can be routed through the network; (2) encapsulate the packet and
deliver it to a controller so the latter can decide how it should be dealt with (in this case,
the communication is done through the secure channel); or (3) drop the packet (e.g., for
security reasons).
There are two models for the implementation of an OpenFlow switch
[McKeown et al. 2008]. The first consists of a dedicated OpenFlow switch, which is
basically a “dumb” device that only forwards packets according to the rules defined by a
remote controller.
In this case (See Figure 1.4), the flows can be broadly defined by the applications,
so the network capabilities are only limited by how the Flow Table is implemented and
which actions are available. The second, which may be preferable for legacy reasons, is a
classic switch that supports OpenFlow but also keeps its ability to make its own forward-
ing decisions. In such a hybrid scenario, it is more complicated to provide a clear isolation
between OpenFlow and “classical” traffic. To be able to do so, there are basically two
alternatives: (1) to add one extra action to the OpenFlow Table, which forwards
packets to the switch’s normal processing pipeline, or (2) to define different VLANs for
each type of traffic.
Figure 1.4. OpenFlow switch proposed by [McKeown et al. 2008].

Whichever the case, the behavior of the switch’s OpenFlow-enabled portion may
be either reactive or proactive. In the reactive mode, whenever a packet arrives at the
switch, it tries to find an entry in its Flow Table matching that packet. If such an entry
is found, the corresponding action is executed; otherwise, the flow is redirected to the
controller, which will insert a new entry into the switch’s Flow Table for handling the
flow and only then the packet is forwarded according to this new rule. In the proactive
mode, on the other hand, the switch’s Flow Table is pre-configured and, if an arriving flow
does not match any of the existing rules, the corresponding packets are simply discarded
[Hu et al. 2014a].
Although operating in the proactive mode may require installing a large number
of rules on the switches beforehand, one advantage over the reactive mode is that in
this case flows are not delayed by the controller’s flow configuration process. Another
relevant aspect is that, if the switch is unable to communicate with the controller in the
reactive mode, then the switch’s operation will remain limited to the existing rules, which
may not be enough for dealing with all flows. In comparison, if the network is designed
to work in the proactive mode from the beginning, it is more likely that all flows will be
handled by the rules already installed on the switches.
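The sketch below illustrates both modes using the Ryu framework (discussed in Section
1.3.5): a table-miss rule is installed proactively when the switch connects, and further rules
are installed reactively as packets reach the controller. The flooding behavior and the rule
priorities are simplifications chosen for the example, not a recommended production design.

```python
# Minimal sketch of proactive and reactive flow installation with Ryu and
# OpenFlow 1.3 (the flooding logic and priorities are illustrative only).
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class ReactiveHub(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_features(self, ev):
        # Proactive part: pre-install a table-miss entry (priority 0 matches
        # everything) that redirects unmatched flows to the controller.
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        actions = [parser.OFPActionOutput(ofp.OFPP_CONTROLLER, ofp.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0,
                                      match=parser.OFPMatch(), instructions=inst))

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def on_packet_in(self, ev):
        # Reactive part: for each unmatched packet, install a rule that floods
        # traffic arriving at the same ingress port, then flood this packet.
        msg = ev.msg
        dp = msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        in_port = msg.match['in_port']
        actions = [parser.OFPActionOutput(ofp.OFPP_FLOOD)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=1,
                                      match=parser.OFPMatch(in_port=in_port),
                                      instructions=inst))
        data = msg.data if msg.buffer_id == ofp.OFP_NO_BUFFER else None
        dp.send_msg(parser.OFPPacketOut(datapath=dp, buffer_id=msg.buffer_id,
                                        in_port=in_port, actions=actions, data=data))
```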
As a last remark, it is interesting to notice that implementing the controller as a
centralized entity can provide a global and unique view of the network to all applications,
potentially simplifying the management of rules and policies inside the network. How-
ever, as any physically centralized server, it also becomes a single point of failure, po-
tentially impairing the network’s availability and scalability. This issue can be solved by
implementing a physically distributed controller, so if one controller is compromised,
only the switches under its responsibility are affected. In this case, however, it would
be necessary to implement synchronization protocols for allowing a unique view of the
whole network and avoiding inconsistencies. Therefore, to take full advantage of the benefits
from a distributed architecture, such protocols must be efficient enough not to impact the
overall network’s performance.
1.3.5. SDN Controllers
An SDN controller, also called a network operating system, is a software platform where
all the network control applications are deployed. SDN controllers commonly contain
a set of modules that provide different network services for the deployed applications,
including routing, multicasting, security, access control, bandwidth management, traffic
engineering, quality of service, processor and storage optimization, energy usage, and
all forms of policy management, tailored to meet business objectives. The network ser-
vices provided by the SDN controller consist of network applications running upon the
controller platform, and can be classified as follows:

• Basic Network Service: Basic network applications that implement essential


protocol, topology and device functions. Examples of basic network services are topology
management, ARP handling, host tracking, status management and device monitoring.
Basic network services are commonly used by other network services deployed in the
controller platform to implement more complex control functionalities.
• Management Services: Management network applications that make use of ba-
sic functions to implement business-centric management functionalities. Examples of
management services are authentication and authorization services, virtual tenant network
coordination, network bandwidth slicing and network policy management.
• Core Services: Core network applications oriented to manage and orchestrate
the operation of the control platform, including managing communication between other
network services and shared data resources. Examples of core services are messaging,
control database managing and service registering.
• Custom Application Services: Custom network applications consist of any ap-
plication developed by the platform users. The applications commonly use other network
services deployed in the same SDN control platform to implement different network so-
lutions. Examples of custom application services oriented toward security are DDoS
prevention, load balancing and firewalling. Custom application services can also target
areas such as QoS implementation, enforcement of policies and integration with cloud
computing orchestration systems.

Open source controllers have been an important vector of innovation in the SDN
field. The dynamics of the open source community has led to the development of many SDN
projects, including software-based switches and SDN controllers [Casado 2015]. To eval-
uate and compare different open-source controller solutions and their suitability to each
deployment scenario, one can employ the following metrics:

• Programming Language: The programming language used to build the con-


troller platform. The controller language will also dictate the programming language
used to develop the network services, and can directly influence other metrics such as
performance and learning curve. Moreover, some operating systems may not provide full
support for all programming languages.
• Performance: The performance of the controller can be a determining factor when
choosing the correct platform for production purposes. The performance of an SDN con-
troller can be influenced by many factors, including the programming language, design
patterns adopted and hardware compatibility.
• Learning Curve: The learning curve of the control platform is a fundamental
metric to consider when starting a project. It measures the experience necessary to learn
the SDN controller platform and build the necessary skills. The learning curve directly
influences the time to develop a project and also the availability of skilled developers.
• Features: The set of network functions provided by the SDN controller. In ad-
dition to basic network services, control platforms can also provide specialized services
related to controlling and managing network infrastructures. Two important groups of
features are the set of protocols supported in the southbound API of the controller (e.g.,
OpenFlow, OVSDB, NETCONF), which will determine the supported devices in the un-
derlying network infrastructure, and the support for integration with cloud computing
systems.
• Community Support: The support provided by the open source community is
essential to measure how easy it would be to solve development and operating questions,
as well as the frequency with which new features are released. Some open source SDN
projects are also supported or maintained by private companies, which is likely to accel-
erate releases and lead to better support for specific business demands.

To give a concrete example of the usefulness of these metrics, we can


apply them to some of the most popular open source SDN controller projects,
namely: NOX [Gude et al. 2008, NOXRepo.org 2015], POX [NOXRepo.org 2015],
Ryu [Ryu 2015], Floodlight [Floodlight 2015] and OpenDaylight [Medved et al. 2014,
Linux Foundation 2015].
NOX Controller: The NOX controller is part of the first generation of OpenFlow
controllers, being developed by Nicira Networks side-by-side with the OpenFlow proto-
col. As the oldest OpenFlow controller, it is considered very stable by the industry and the
open source community, and is largely deployed in production and educational environ-
ments. The NOX controller has two versions. The first, NOX-Classic, was implemented
in C++ and Python, and supports the development of network control applications using
both languages. This cross-language design was later proved to be less efficient than de-
signs based on a single language, since it ended up leading to some inconsistency in terms
of features and interfaces. Possibly due to these issues, NOX-Classic is no longer sup-
ported, being superseded by the second version, called simply NOX or “new NOX”. This
second version of NOX was implemented using the C++ programming language, sup-
porting network application services developed with the same language using an event-
oriented programming model. The NOX code was also reorganized to provide better
performance and programmability compared with NOX-Classic, introducing support for
both the 1.0 and 1.3 versions of the OpenFlow protocol. The modern NOX SDN controller is
recommended when: users know the C++ programming language; users are willing to use
low-level facilities and semantics of the OpenFlow protocol; users need production level
performance.
POX Controller: POX is a Python implementation of the NOX controller, being
created to be a platform for rapid development and prototyping of network control soft-
ware. Taking advantage of Python’s flexibility, POX has been used as basis for many SDN
projects, being applied for prototyping and debugging SDN applications, implementing
network virtualization and designing new control and programming models. The POX
controller also has official support from the NOX community. POX supports only version
1.0 of the OpenFlow protocol and provides better performance when compared with
Python applications deployed on NOX-Classic. However, since Python is an interpreted
rather than compiled language, POX does not provide production-level performance as
the NOX controller does. Therefore, a POX SDN controller is recommended when: users know
the Python programming language; users are not much concerned with the controller’s
performance; users need a rapid SDN platform for prototyping and experimentation, e.g.,
for research, experimentation, or demonstrations purposes; users are looking for an easy
way to learn about SDN control platforms (e.g., for educational purposes).
Ryu Framework: Ryu is a Python component-based SDN framework that pro-
vides a large set of network services through a well-defined API, making it easy for devel-
opers to create new network management and control applications for multiple network
devices. Differently from NOX and POX SDN controllers, which support only Open-
Flow protocols in their southbound API, Ryu supports various protocols for managing
network devices, such as OpenFlow (versions 1.0 and 1.2 – 1.4), NETCONF and OF-Config.
Another important feature of the Ryu framework is the integration with the OpenStack
cloud orchestration system [OpenStack 2015], enabling large deployments on cloud data
centers. Even though Ryu was implemented using Python, its learning curve is mod-
erate, since it provides a large set of service components and interfaces that need to
be understood before it can be integrated into new applications. As a result, the Ryu
SDN framework is recommended when: users know the Python programming language;
users are not much concerned with the controller’s performance; the control applications
require versions 1.3 or 1.4 of the OpenFlow protocol or some of the other supported pro-
tocols; users intend to deploy the SDN controller on a cloud data center that makes use of
OpenStack’s orchestration system.
Floodlight Controller: The Floodlight Open SDN Controller is a Java-based
OpenFlow Controller supported by an open source community of developers that includes
a number of engineers from Big Switch Networks. Floodlight is the core of Big Switch
Networks commercial SDN products and is actively tested and improved by the industry
and the developers community. Floodlight was created as a fork from the Beacon Java
OpenFlow controller [Erickson 2013], the first Java-based controller to implement full
multithread and runtime modularity features. Even though it has quite extensive docu-
mentation and official support from both the industry and the open source com-
munity, Floodlight has a steep learning curve due to the large set of features implemented.
Among those features, we can cite the ability to integrate with OpenStack orchestration
system and the use of RESTful interfaces [Richardson and Ruby 2008] in the northbound
API, enabling easy integration with external business applications. The Floodlight controller
is recommended when: users know the Java programming language; users need pro-
duction level performance and would like to have industry support; applications should
interact with the SDN controller through a RESTful API; users intend to deploy the SDN
controller on a cloud data center that makes use of OpenStack’s orchestration system.
OpenDaylight Controller: OpenDaylight is a Java-based SDN controller built to
provide a comprehensive network programmability platform for SDN. It was created as
a Linux Foundation collaborative project in 2013 and intends to build a comprehensive
framework for innovation in the SDN environment. The OpenDaylight project is supported by a
consortium of network companies such as Cisco, Ericsson, IBM, Brocade and VMware,
besides the open source community and industry collaborators. Open-
Daylight is also based on the Beacon OpenFlow controller and provides production level
performance with support for different southbound protocols, such as OpenFlow 1.0 and
1.3, OVSDB and NETCONF. It also provides integration with OpenStack’s cloud or-
chestration system. The OpenDaylight controller proposes an architectural framework by
clearly defining the southbound and northbound APIs and how they interact with external
business applications and internal network services. As a drawback, OpenDaylight has a
steep learning curve due to its architectural complexity and the large set of services em-
bedded in the controller. It is, nevertheless, recommended when: users know the Java
programming language; users need production level performance and would like to have
industry support; users intend to deploy the SDN controller on a cloud data center that
makes use of OpenStack’s orchestration system; target applications require modularity
through an architectural design; applications need to integrate with third party business
applications, as well as with heterogeneous underlying network infrastructures.
Table 1.1 presents a summary of the main characteristics of the described open
source SDN controllers, based on the metrics hereby discussed.

Table 1.1. Summary of the main characteristics of open source SDN controllers
NOX POX Ryu Floodlight ODL
Language C++ Python Python Java Java
Performance High Low Low High High
Distributed No No Yes Yes Yes
OpenFlow 1.0 1.0 1.0, 1.2–1.4 1.0, 1.3 1.0, 1.3
Multi-tenant clouds No No Yes Yes Yes
Learning curve Moderate Easy Moderate Steep Steep

1.3.6. Network Virtualization using SDNs


Even though network virtualization and SDN are independent concepts, the relationship
between these two technologies has become much closer in recent years. Network virtual-
ization creates the abstraction of a network that is decoupled from the underlying physical
equipment, allowing multiple virtual networks to run over a shared infrastructure with a
topology that differs from the actual underlying physical network.
Even though network virtualization has gained prominence as a use case for SDN,
the concept has in fact evolved in parallel with programmable networking. In particular,
both technologies are tightly coupled by the programmable networks paradigm, which
presumes mechanisms for sharing the infrastructure (across multiple tenants in a data
center, administrative groups in a campus, or experiments in an experimental facility) and
supporting logical network topologies that differ from the physical network. In what fol-
lows, we provide an overview of the state of the art on network virtualization technologies
before and after the advent of SDN.
The creation of virtual networks in the form of VLANs and virtual private net-
works has been supported by multiple network equipment vendors for many years. These
virtual networks could only be created by network administrators and were limited to run
the existing protocols, delaying the deployment of new network technologies. As an alternative, researchers started building overlay networks by means of tunneling, forming their own topology on top of a legacy network to be able to run their own control-plane protocols. Following the significant success of peer-to-peer applications built upon overlay networks, the networking community reignited research on overlay networks as a way of
improving the network infrastructure. Consequently, virtualized experimental infrastruc-
tures such as PlanetLab [Chun et al. 2003] were built to allow multiple researchers to run
their own overlay networks over a shared and distributed collection of hosts. The success
of PlanetLab and other shared experimental network platforms motivated investigations
on the creation of virtual topologies that could run custom protocols inside the underlying
network [Bavier et al. 2006], thus enabling realistic experiments to run side by side with
production traffic. As an evolution of these experimental infrastructures, the GENI project
[Berman et al. 2014] took the idea of a virtualized and programmable network infrastruc-
ture to a much larger scale, building a national experimental infrastructure for research
in networking and distributed systems. These technologies ended up leading some to argue that network virtualization should be the basis of a future Internet, allowing multiple network architectures to coexist and evolve over time to meet continuously evolving needs [Feamster et al. 2007, Anderson et al. 2005, Turner and Taylor 2005].
Research on network virtualization evolved independently of the SDN concept. Indeed, the abstraction of the physical network into a logical network does not require any SDN technology, nor does the separation of a logically centralized control plane from the underlying data plane imply some kind of network virtualization.
However, a symbiosis between both technologies has emerged, which has begun to cat-
alyze several new research areas, since SDN can be seen as an enabling technology for
network virtualization. Cloud computing, for example, introduced the need for allow-
ing multiple customers (or tenants) to share a same network infrastructure, leading to
the use of overlay networks implemented through software switches (e.g., Open vSwitch
[Open vSwitch 2015, Pfaff et al. 2009]) that would encapsulate traffic destined for VMs
running on other servers. It became natural, thus, to consider using logically centralized
SDN controllers to configure these virtual switches with the rules required to control how
packets are encapsulated, as well as to update these rules when VMs move to new physical
locations.
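As a rough sketch of this mechanism (bridge and port names, IP addresses, MAC addresses and the tunnel key below are purely illustrative), the following commands create a VXLAN tunnel port on an Open vSwitch bridge and install an OpenFlow rule that encapsulates traffic destined to a VM on another server with a tenant-specific tunnel key; in a real cloud, an SDN controller would install and update such rules automatically as VMs are created or migrated:
$ sudo ovs-vsctl add-br br-tun
$ sudo ovs-vsctl add-port br-tun vxlan0 -- set interface vxlan0 type=vxlan options:remote_ip=192.0.2.20 options:key=flow
$ sudo ovs-ofctl add-flow br-tun "in_port=1,dl_dst=fa:16:3e:00:00:02,actions=set_tunnel:42,output:2"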
Network virtualization, on its turn, can be used for evaluating and testing SDN
control applications. Mininet [Handigol et al. 2012a, Lantz et al. 2010], for example,
uses process-based network virtualization to emulate a network with hundreds of hosts,
virtual switches and SDN controllers on a single machine. This environment enables re-
searchers and network operators to develop control logic applications and easily evaluate,
test and debug them on a full-scale emulation of the production data plane, accelerat-
ing the deployment on the real production networks. Another contribution from network
virtualization to the development of SDN technologies is the ability to slice the underly-
ing network, allowing it to run simultaneous and isolated SDN experiments. This con-
cept of network slicing, originally introduced by the PlanetLab project [Chun et al. 2003],
consists in separating the traffic-flow space into different slices, so that each slice has a share
of network resources and can be managed by a different SDN controller. FlowVisor
[Sherwood et al. 2010], for example, provides a network slicing system that enables
building testbeds on top of the same physical equipment that carries the production traf-
fic.
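To make the Mininet example above more concrete, a researcher could emulate a tree-shaped data center topology and point all emulated switches to an external SDN controller with a single command (the topology parameters and controller address below are just examples):
$ sudo mn --topo tree,depth=3,fanout=4 --switch ovsk --controller remote,ip=127.0.0.1,port=6633
mininet> pingall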

1.3.7. SDN Applications in Network Virtualization


SDN facilitates network virtualization and may, thus, make it easier to implement fea-
tures such as dynamic network reconfiguration (e.g., in multitenant environments). How-
ever, it is important to recognize that the basic capabilities of SDN technologies do not
directly provide these benefits. Some SDN features and their main contributions to im-
prove network virtualization are:

• Control plane and data plane separation: The separation between control and
data planes in SDN architectures, as well as the standardization of interfaces for the com-
munication between those layers, made it possible to conceptually unify network devices from different vendors under the same control mechanisms. For network virtualization purposes, the
abstraction provided by the control plane and data plane separation facilitates deploying,
configuring, and updating devices across virtualized network infrastructures. The control
plane separation also introduces the idea of network operating systems, which consists
of a scalable and programmable platform for managing and orchestrating virtualized net-
works.
• Network programmability: Programmability of network devices is one of the
main contributions from SDN to network virtualization. Before the advent of SDN, net-
work virtualization was limited to the static implementation of overlay technologies (such
as VLAN), a task delegated to network administrators and logically distributed among
the physical infrastructure. The programming capabilities introduced by SDN provide the
dynamics necessary to rapidly scale, maintain and configure new virtual networks. More-
over, network programmability also allows the creation of custom network applications
oriented to innovative network virtualization solutions.
• Logically centralized control: The abstraction of data plane devices provided by
SDN architecture gives the network operating system, also known as SDN orchestration
system, a unified view of the network. Therefore, it allows custom control applications to
access the entire network topology from a logically centralized control platform, enabling
the centralization of configurations and policy management. This way, the deployment
and management of network virtualization technologies becomes easier than in early dis-
tributed approaches.
• Automated management: the SDN architecture enhances network virtualization
platforms by providing support for automation of administrative tasks. The centralized
control and the programming capabilities provided by SDN allow the development of
customized network applications for virtual network creation and management. Auto-
scaling, traffic control and QoS are examples of automation tools that can be applied to
virtual network environments.
Among the variety of scenarios where SDN can improve network virtualization
implementations, we can mention campus network testbeds [Berman et al. 2014], enter-
prise networks [Casado et al. 2007], multitenant data centers [Koponen et al. 2014] and
cloud networking [Jain and Paul 2013b]. Despite the successful application of SDN technologies in such network virtualization scenarios, much work is still
needed both to improve the existing network infrastructure and to explore SDN’s poten-
tial for solving problems in network virtualization. Examples include SDN applications to
scenarios such as home networks, enterprise networks, Internet exchange points, cellular
networks, Wi-Fi radio access networks, and joint management of end-host applications.

1.3.8. Security, Scalability and Availability aspects


Since the SDN concept became a prominent research topic in the area of computer net-
works, many studies have discussed fundamental aspects such as its scalability, availabil-
ity and, in particular, security.
Even though scalability issues apply both to the controller and to the forwarding nodes, the latter are not specifically affected by the SDN technology and, thus, we only focus on the former. Specifically, there are three main challenges for attaining controller scalability [Yeganeh et al. 2013, Sezer et al. 2013], all of which originate in the fact that the network's intelligence is moved from the distributed forwarding nodes to the control plane: (1) the latency incurred by the communications between the forwarding nodes and the controller(s); (2) the size of the controller's flow database; and (3) the communication between controllers in a physically distributed control plane architecture. As previously mentioned in Section 1.3.4, the first challenge may be tackled with a proactive approach, i.e., by installing most flow rules on the SDN-enabled switches in advance, so they do not need to contact the controllers too frequently. Even though this might sacrifice flexibility, it may be inevitable, especially for large flows.
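As a minimal sketch of this proactive strategy (assuming an Open vSwitch-based switch named br0, with subnets and output ports chosen only for illustration), an administrator or controller could pre-install forwarding rules and later verify that packets are being matched without any controller intervention:
$ sudo ovs-ofctl add-flow br0 "priority=100,ip,nw_dst=10.0.1.0/24,actions=output:2"
$ sudo ovs-ofctl add-flow br0 "priority=100,ip,nw_dst=10.0.2.0/24,actions=output:3"
$ sudo ovs-ofctl dump-flows br0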
Another strategy for tackling latency issues in the control plane, as well as the size
of the flow databases, consists in using multiple controllers. As a result, they can share the
communication burden, reducing delays potentially caused by queuing requests coming
from switches, and also the storage of flow information, as each controller is responsible
for a subset of forwarding elements. However, this also aggravates the third challenge,
due to the need of further interactions between controllers to ensure a unified view of
the network [Sezer et al. 2013]. Nonetheless, since a distributed controller architecture
also improves availability by improving the system’s resiliency to failures, there have
been many proposals focused on improving the scalability of this approach. One example
is HyperFlow [Tootoonchian and Ganjali 2010], a NOX-oriented application that can be installed on all network controllers to create a powerful event propagation system based on a
publish/subscribe messaging paradigm: basically, each controller publishes events related
to network changes to other controllers, which in turn replay those events to proactively
propagate the information throughout the whole control plane. Another strategy, adopted
in Onix [Koponen et al. 2010] and ONOS [Berde et al. 2014], consists in empowering
control applications with general APIs that facilitate access to network state information.
All things considered, ensuring scalability and state consistency among all controllers, as
well as a reasonable level of flexibility, ends up being an important design trade-off in
SDNs [Yeganeh et al. 2013].
Regarding security, the SDN technology brings both new opportunities and chal-
lenges (for a survey on both views, see [Scott-Hayward et al. 2013]). On the posi-
tive side, SDN can enhance network security when the control plane is seen as a tool
for packet monitoring and analysis that is able to propagate security policies (e.g.,
access control [Nayak et al. 2009]) along the entire network in response to attacks
[Scott-Hayward et al. 2013]. In addition, with the higher control over how the packets
are routed provided by SDN, one can install security appliances such as firewalls and IDS
in any part of the network, not only on its edges [Gember et al. 2012]: as long as the
controllers steer the corresponding traffic to those nodes, the packets can be analyzed
and treated accordingly. This flexibility is, for example, at the core of the Software De-
fined Perimeter (SDP) concept [Bilger et al. 2013], by means of which all devices trying
to access a given network infrastructure must be authenticated and authorized before the
flow rules that allow its access are installed in the network's forwarding elements. This flexibility is also crucial for thwarting denial-of-service (DoS) attacks, since the task of discarding malicious packets is then not concentrated on one or a few security devices near the attack's
target, but distributed along the network [YuHunag et al. 2010]. Another interesting ap-
plication of SDNs for thwarting DoS, as well as other threats targeting a same static IP
(e.g. port scanning or worm propagation), is to create the illusion of a “moving target”,
i.e., by having the SDN translate the host’s persistent address to different IPs over time
[Jafarian et al. 2012].
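As a rough illustration of this kind of traffic steering, and assuming an Open vSwitch data plane with an IDS attached to an arbitrary switch port (port 3 below; all names and numbers are illustrative), a controller could install a rule that mirrors web traffic to the IDS while still forwarding it normally:
$ sudo ovs-ofctl add-flow br0 "priority=200,tcp,tp_dst=80,actions=output:3,NORMAL"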
Whereas the security enhancements resulting from the SDN approach are commonly recognized, the approach also brings security risks that need to be addressed. In
[Kreutz et al. 2013], seven main threat vectors are identified, the first three being SDN-
specific: (1) attacks on control plane communications, especially when they are made
through insecure channels; (2) attacks on and vulnerabilities in controllers, (3) lack of
mechanisms to ensure trust between the controller and management applications; (4)
forged traffic flows; (5) attacks exploring vulnerabilities in switches; (6) attacks on and
vulnerabilities in administrative stations that access the SDN controllers, and (7) the
lack of trusted resources for forensics and remediation. Such threats usually require
holistic solutions providing authentication and authorization mechanisms for handling
the different entities configuring the network and detecting anomalies. This need is ad-
dressed, for example, by the FortNOX security kernel [Porras et al. 2012], as well as
by its successor, Security-Enhanced Floodlight [Porras et al. 2015], which enable au-
tomated security services while enforcing consistency of flow policies and role-based
authorization; it is also the focus of FRESCO [Shin et al. 2013], an application frame-
work that facilitates the development and deployment of security applications in Open-
Flow Networks. There are also solutions focused on specific issues, such as identify-
ing conflicts and inconsistencies between the policies defined by multiple applications
[Al-Shaer and Al-Haj 2010, Canini et al. 2012, Khurshid et al. 2013] or facilitating au-
diting and debugging [Handigol et al. 2012b, Khurshid et al. 2013]. Nevertheless, there
is much room for innovation in the field, as articles proposing solutions for SDN security issues are still considerably less prevalent in the literature than those focus-
ing on using the SDN paradigm to provide security services [Scott-Hayward et al. 2013].
1.4. Cloud Network Virtualization using SDN
This section discusses the connection between cloud computing and SDNs. We start by
analyzing the synergy between the concepts and technologies involving both paradigms.
We then describe an integration architecture that takes advantage of this synergy for de-
ploying new services (namely, Network- and Security-as-a-Service), which may be inte-
grated with Network Function Virtualization (NFV) technologies.

1.4.1. Synergy between SDNs and clouds


As previously discussed, networking plays a key role in clouds, both as a shared resource
and as part of the infrastructure needed for sharing other computational resources. As any infrastructure, the network in a cloud environment should meet some critical requirements of modern networks, in particular [Cheng et al. 2014]: adaptability to new application needs, business policies, and traffic behavior, with new features being integrated with minimal disruption of the network operation; automation of the propagation of network changes, reducing (error-prone) manual interventions; provision of high-level abstractions for easier network management, so administrators do not need to configure each individual network element; capability of accommodating node mobility and of providing security features as a core service, rather than as add-on solutions; and on-demand scaling.
To fulfill these requirements, cutting-edge network equipment with advanced ca-
pabilities are likely to be needed. Nevertheless, the cloud cannot take full advantage of
such resources without orchestration engines with deep knowledge of the available net-
work capabilities. Unfortunately, such deep knowledge may require features that are very
specific to a given proprietary network hardware and software, leading to vendor lock-
in issues and, consequently, limiting the creation of new network features or services.
In addition, it is often the case that the services must be orchestrated over multi-carrier
and multi-technology communication infrastructure (e.g., composed by packet switching,
circuit switching and optical transport networks) [Autenrieth et al. 2013], or even among
different cloud providers [Mechtri et al. 2013]. SDNs tackle this issue by placing an ab-
straction layer with standard APIs over the (proprietary) hardware, facilitating the access
to the corresponding features and, thus, to innovations. In an environment as dynamic as
the cloud, the flexibility and programmability provided by SDNs are essential to allow the
network to evolve together with the services using it.

1.4.2. Integration Architectures


SDNs and clouds display similar designs, with a 3-layer architecture composed of an In-
frastructure Layer with computational resources controlled by a Control Layer, which in
turn is controlled via APIs by applications in an Application Layer (see Figure 1.5). One
simple form of integrating SDNs and clouds is to run their stacks in parallel, with both
technologies being integrated by the applications themselves. Even though applications
can benefit from both technologies with this strategy, it also brings a significant overhead
to application developers. After all, applications would need to be SDN- and cloud-aware,
assimilating and accessing APIs for both technologies in an effective manner, which tends to complicate their design and implementation.
Figure 1.5. SDN/Cloud Integration by applications. Adapted from [Autenrieth et al. 2013]

To avoid these issues, an alternative approach would be to use a special cloud control/orchestration subsystem capable of controlling SDN devices directly, using SDN data plane control protocols (e.g., OpenFlow) instead of a separate SDN controller. This strategy, illustrated in Figure 1.6, brings some SDN benefits to the cloud infrastructure while hiding its complexity from applications: they would need to use only cloud APIs, remaining unaware of the SDN-enabled network infrastructure. Nevertheless, there are also some drawbacks: as it demands a specialized cloud orchestrator, the development of new network control features is tied to the development of the orchestrator itself, or to the APIs it provides, possibly limiting innovation. It may also restrict the deployment of proprietary SDN solutions.

Figure 1.6. SDN Functions incorporated in the Cloud Control/Orchestration subsystem

Finally, a third and probably preferable approach is to consider the cloud control/orchestration system as an SDN application on top of the SDN controller. In this scenario, depicted in Figure 1.7, the Cloud Control/Orchestration subsystem is augmented with modules that translate cloud operations into SDN operations, using existing SDN controller APIs. This approach brings the benefits of the second approach while allowing greater flexibility: it is possible to evolve both the cloud and SDN infrastructures separately, with minimal or no changes to their integration interface. It would also allow the use of existing SDN solutions without alterations, including proprietary SDN solutions or hardware-based controllers.

Figure 1.7. SDN Integration inside the Cloud

1.4.3. Network as a Service (NaaS) supported by SDN


In a cloud computing environment, users (tenants) are provisioned with virtual compu-
tational resources, including virtual networks, by the cloud orchestrator. Usually, these
virtual networks are also used as an infrastructure for the other computational resources.
There are, however, some limitations with this approach [Costa et al. 2012]: little or no
control over the network; indirect access to and management of the network infrastructure (i.e., switches or routers); limited visibility over the network resources; inefficient overlay networks; and no multicast support. To face these shortcomings, it is possible to share networking resources as services, similarly to what is done with computational resources.
This model is called Network-as-a-Service (NaaS) [Costa et al. 2012].
In NaaS, networking resources are used and controlled through standard inter-
faces/APIs. In principle, a network service may represent any type of networking com-
ponent at different levels, including a domain consisting of a set of networks, a single
physical or virtual network, or an individual network node. Multiple network services can
then be combined into a composite inter-network service by means of a service compo-
sition mechanism [Duan 2014]. NaaS and SDN can, thus, be combined: while the SDN
technology provides dynamic and scalable network service management and facilitates
the implementation of NaaS, the latter allows a control mechanism over a (possibly het-
erogeneous) underlying network infrastructure. This approach also enables the creation
of richer NaaS services: if a given feature is not fully implemented on the underlying net-
work, it can be implemented as an SDN application inside the cloud control/orchestration
layer and offered to users via cloud APIs.
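To give a concrete flavor of the NaaS model in an OpenStack cloud (the names and addressing below are illustrative), a tenant can provision an entire virtual network, subnet and router purely through the cloud API, without touching any physical switch; with an SDN back-end, the corresponding flow rules are then programmed automatically:
$ neutron net-create tenant-net
$ neutron subnet-create tenant-net 10.1.0.0/24 --name tenant-subnet
$ neutron router-create tenant-router
$ neutron router-interface-add tenant-router tenant-subnet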
One possible solution for offering NaaS services through the combination of SDN
and Cloud Computing is the OpenNaaS project1. It offers an open source framework that helps create different types of network services in an OpenStack cloud computing
environment. The framework provides a virtual representation of physical resources (e.g.,
networks, routers, switches, optical devices or computing servers), which can be mapped
inside OpenNaaS to SDN resources for the actual implementation of these networking
elements [Aznar et al. 2013].

1.4.4. Security as a Service (SecaaS) using SDN


Security as a Service (SecaaS) refers to the provision of security applications and ser-
vices via the cloud, either to cloud-based infrastructure and software, or to customers’
on-premise systems [CSA 2011]. To consume cloud security resources, end-users must
be aware of the nature and limitations of this new computing paradigm, whereas cloud
providers must take special care when offering security services. Aiming to provide guid-
ance for interested users and providers, the Cloud Security Alliance (CSA) has published
security guides that, based on academic results, industry needs and end-user surveys, dis-
cuss the main concerns that SecaaS applications must deal with [CSA 2011]:

• Identity and Access Management (IAM): refers to controls for identity verifica-
tion and access management.
• Data Loss Prevention: related to monitoring, protecting and verifying the secu-
rity of data at rest, in motion and in use.
• Web Security: real-time protection offered either on-premise, through soft-
ware/appliance installation, or via the cloud, by proxying or redirecting web traffic to
the cloud provider.
• Email Security: control over inbound and outbound email, protecting the orga-
nization from phishing or malicious attachments, as well as enforcing corporate polices
(e.g., acceptable use and spam prevention), and providing business continuity options.
• Security assessments: refers to third-party audits of cloud services or assess-
ments of on-premises systems.
• Intrusion Management: using pattern recognition to detect and react to statisti-
cally unusual events. This may include reconfiguring system components in real time to
stop or prevent an intrusion.
• Security Information and Event Management (SIEM): analysis of logs and event information, aiming to provide real-time reporting and alerting on incidents that
may require intervention. The logs are likely to be kept in a manner that prevents tamper-
ing, thus enabling their use as evidence in any investigations.
• Encryption: providing data confidentiality by means of encryption algorithms.
• Business Continuity and Disaster Recovery: refers to measures designed and implemented to ensure operational resiliency in the event of service interruptions.
• Network Security: security services that allocate, access, distribute, monitor, and protect the underlying resource services.

1 Project home page: http://opennaas.org

Security service solutions for the Internet can be commonly found nowadays, in what constitutes a segment of the Software as a Service (SaaS) market. This can be seen, for example, in sites that provide credit card payment services, in services that offer online security scanning (e.g., anti-malware/anti-spam) for users' personal computers, or even in Internet access providers that offer firewall services to their users. These solutions are closely related to the above-mentioned Web Security, Email Security and Intrusion Management categories, and their main vendors include Cisco, McAfee, Panda Software, Symantec, Trend Micro and VeriSign [Rouse 2010].
However, this kind of service has been deemed insufficient to attract the trust of
many security-aware end-users, especially those that have knowledge of cloud inner work-
ings or are in search of IaaS services. Aiming to attract this audience and, especially, to
improve cloud internal security requirements, organizations have been investing in SDN
solutions capable of improving security on (cloud) virtual networks. As a recent example of a cloud-oriented SDN firewall, we can mention FlowGuard [Hu et al. 2014b]. Besides basic firewall features, FlowGuard also provides a comprehensive framework for facilitating the detection and resolution of firewall policy violations in dynamic OpenFlow-based networks: security policy violations can be detected in real time, when the network status is updated, allowing (tenant or cloud) administrators to decide whether to adopt
distinct security strategies for each network state [Hu et al. 2014b].
Another recent security solution is Ananta [Patel et al. 2013], an SDN-based load
balancer for large scale cloud computing environments. In a nutshell, the solution consists
of a layer-4 load balancer that, by placing one agent in every host, allows packet modifi-
cation tasks to be distributed along the network, thus improving scalability. Finally, for the
purpose of detecting or preventing intrusions, one recent solution is the one introduced in
[Xiong 2014], which can be seen as an SDN-based defensive system for detection, anal-
ysis, and mitigation of anomalies. Specifically, the proposed solution takes advantage of
the flexibility, compatibility and programmability of SDN to propose a framework with
a Customized Detection Engine, Network Topology Finder, Source Tracer and further
user-developed security appliances, including protection against DDoS attacks.

1.4.5. Integration with Network Function Virtualization (NFV) Technologies


The specifications of NFV are being developed by the European Telecommunications
Standards Institute (ETSI). Their main goal is to transform the way that operators de-
sign their networks, by using standard IT virtualization technologies to consolidate net-
work equipment onto industry-standard high-volume servers, switches and storage de-
vices, which could be located in data centers, network nodes or in the end-user premises
[ETSI 2012]. The idea behind NFV is to allow the implementation of network functions
(e.g., routing, firewalling, and load-balancing), normally deployed in proprietary boxes,
in the form of software that can run on standard server hardware. As any software, the
network function could then be instantiated in (or moved to) any location in the network,
as illustrated in Figure 1.8. The NFV Infrastructure (NFVI) could then provide computing
capabilities comparable to those of an IaaS cloud computing model, as well as dynamic
network connectivity services similar to those provided by the NaaS concept discussed in
Section 1.4.3.

Figure 1.8. Concept of NFV [Jammal et al. 2014]

It is interesting to notice that, although the NFV and SDN concepts are considered
highly complementary, they do not depend on each other [ETSI 2012]. Instead, both approaches can be combined to promote innovation in the context of networking: the capability of SDN to abstract and programmatically control network resources fits well with the need of NFV to create and manage a dynamic, on-demand network environment with adequate performance. This synergy has led ONF and ETSI to work together with the common goal of evolving both approaches and providing a structured environment for their development. Table 1.2 provides a comparison between
both SDN and NFV concepts.

Table 1.2. Comparison between SDN and NFV (Adapted from [Jammal et al. 2014]).
Motivation: SDN: decoupling of control and data planes, providing a centralized controller and network programmability. NFV: abstraction of network functions from dedicated hardware appliances to commercial off-the-shelf (COTS) servers.
Network Location: SDN: data centers. NFV: service provider networks.
Network Devices: SDN: servers and switches. NFV: servers and switches.
Protocols: SDN: OpenFlow. NFV: not applicable.
Applications: SDN: cloud orchestration and networking. NFV: firewalls, gateways, content delivery networks.
Standardization Committee: SDN: Open Networking Foundation (ONF). NFV: ETSI NFV group.

1.5. Case Study with OpenDaylight and OpenStack


This section presents a case study involving the creation and management of virtual
networks using the OpenDaylight SDN controller and the OpenStack cloud orchestra-
tion system, both open source projects widely adopted by academia and industry. After
presenting the main virtualization components provided by each system, as well as their
functions and interfaces, we propose a practical analysis toward their integration for build-
ing a consistent and fully functional cloud virtual networking environment.

1.5.1. OpenStack Cloud Operating System


OpenStack [OpenStack 2015] is a cloud computing project created in 2010 from a joint initiative between NASA and Rackspace. The project is part of an effort to create an open-source,
standards-based, highly-scalable, and advanced cloud computing platform that can be
deployed on commodity server hardware [OpenStack 2015, Wen et al. 2012]. All Open-
Stack code is, thus, open for anyone wishing to build and provide cloud computing ser-
vices, as well as to create applications on top of the platform.
In its current version (named Juno), the OpenStack platform is composed of 11
core components, listed and briefly described in Table 1.3.

Table 1.3. Components of OpenStack Juno and their functions.


Service Code-Name Description
Cloud Management Nova Controls the IaaS infrastructure, allocating or releasing computational resources,
such as network, authentication, among others.
Network Service Neutron Provisions network services for Nova-managed components. Specifically, al-
lows users to create and attach virtual Network Interface Cards (vNICs) to these
components.
Object Storage Swift Allows the storage and retrieval of files not mounted on directories. It is a long-
term storage system for static or permanent data.
Block Storage Cinder Provides block storage for VMs.
Identity Service Keystone Provides authentication and authorization to all OpenStack services.
Image Service Glance Provides a catalog and repository for virtual disk images managed by Nova.
Control Panel / Dashboard Horizon Provides a web interface for all OpenStack components, allowing their management straight from a common web browser.
Accounting Ceilometer Monitors OpenStack components, providing metrics that can be used for gener-
ating usage statistics, billing or performance monitoring, among others.
Orchestration Heat Orchestrates the cloud, allowing the management of the other components from
a single interface
Database Trove Provides relational and non-relational databases to the cloud.
Elastic Map Reduce Sahara Creates and manages Hadoop clusters from a set of user-defined parameters,
such as Hadoop version, cluster topology, hardware details, among others.

1.5.2. OpenStack Neutron


Neutron is the OpenStack component responsible for providing network services for tenant
infrastructures, virtualizing and managing network resources operated by other Open-
Stack modules (e.g., Nova’s computing services). Neutron implements the virtualization
layer over the network infrastructure in the OpenStack system, providing a pluggable,
scalable and API-driven system for managing networks. As such, it must ensure that
the network does not become the bottleneck in a cloud deployment and provision a self-
service environment for users. The main Neutron service capabilities are described as
follows.

• Provides flexible networking models, suiting the needs of different applications or user groups. Standard models include flat networks or VLANs for separation of networks and traffic flows.
• Manages IP addresses, allowing dedicated static IPs or DHCP service. Floating
IPs allow traffic to be dynamically rerouted to any compute resource, so users can have
their traffic redirected during maintenance procedures or in the case of failures.
• Creates tenant networks, controls traffic and connects servers and devices to
one or more networks.
• Provides a pluggable back-end architecture that allows users to take advantage
of commodity gear or advanced networking services from third party vendors.
• Integrates with SDN technology such as OpenFlow, facilitating configuration
and management of large-scale multitenant networks.
• Provides an extension framework that allows the deployment of additional net-
work services, such as intrusion detection systems (IDS), load balancing, firewalls and
virtual private networks (VPN).

1.5.2.1. Components

The Neutron service comprises several components. To explain how these components
are deployed in an OpenStack environment, it is useful to consider a typical deployment
scenario with dedicated nodes for network, compute and control services, as shown in
Figure 1.9. The roles of each Neutron component illustrated in this figure are:

Figure 1.9. Neutron components in an OpenStack deployment with a dedicated network node [OpenStack 2015].

• neutron-server: The Neutron server component provides the APIs for all net-
work services implemented by Neutron. In the deployment shown in Figure 1.9, this
component is located inside the cloud controller node, the host responsible for providing the APIs for all the OpenStack services running inside the cloud through the API network. The controller node can provide API access both to the other OpenStack services and to the end users of the cloud, through the management network and the Internet, respectively.
• neutron-*-plugin-agent: Neutron plug-in agents implement the network services
provided by the Neutron API, such as layer 2 connectivity and firewall. The plug-ins are
distributed among network and compute nodes and provide different levels of network
services for the OpenStack cloud infrastructure.
• neutron-l3-agent: The Neutron L3 agent is the component that implements
Neutron API’s layer 3 connectivity services. It connects tenant VMs via layer 3 net-
works, including internal and external networks. The Neutron L3 agent is located on the
network node, which is connected to the Internet via the External network.
• neutron-dhcp-agent: The Neutron DHCP agent provides dynamic IP distribu-
tion for tenant networks. It also implements the floating IP service, which provides exter-
nal IP addresses for tenant VMs, enabling Internet connectivity. It is also located on the
network node and connected to the Internet via the External network.

Figure 1.9 also depicts the physical data center networks used in a standard deployment architecture, which are:

• Management Network: Provides internal communication between OpenStack components. IP addresses on this network should be reachable only within the data center.
• Data Network: Provides VM data communication within the cloud deployment.
The IP addressing requirements of this network depend on the Networking plug-in being
used.
• External Network: Provides VMs with Internet access in some deployment sce-
narios. Anyone on the Internet can reach IP addresses on this network.
• API Network: Exposes all OpenStack APIs, including the Networking API, to
tenants. IP addresses on this network should be reachable by anyone on the Internet. The
API network might be the same as the External network, as it is possible to create an
external-network subnet that has allocated IP ranges that use less than the full range of IP
addresses in an IP block.

Besides these components and physical networks, Neutron makes use of virtu-
alized network elements such as virtual switches and virtual network interfaces to pro-
vide connectivity to tenant VMs. The concept of bridges is particularly important here:
bridges are instances of virtual switches implemented by software such as Open vSwitch [Open vSwitch 2015, Pfaff et al. 2009] and used to deploy network virtualization for tenant VMs. There are three types of bridges created and managed by Neutron in an OpenStack deployment (a manual configuration sketch is provided after the list):

• br-int: Integration bridges provide connectivity among VMs running in the same compute node, connecting them via their virtual network interfaces.
• br-tun: Tunneling bridges provide connectivity among different compute nodes
by using a layer 2 segmentation mechanism, such as VLAN [IEEE 2014], or a tunneling
protocol such as GRE [Farinacci et al. 2000] or VXLAN [Mahalingam et al. 2014a].
• br-ex: External bridges provide tenant VMs with connectivity to the external
network (Internet) by using Neutron routing services.
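For reference, the sketch below shows how such bridges could be created by hand with Open vSwitch commands (interface names and the remote IP address are illustrative); in an actual deployment, Neutron and its agents perform these steps automatically:
$ sudo ovs-vsctl add-br br-int
$ sudo ovs-vsctl add-br br-tun
$ sudo ovs-vsctl add-port br-tun vxlan-1 -- set interface vxlan-1 type=vxlan options:remote_ip=192.0.2.20
$ sudo ovs-vsctl add-br br-ex
$ sudo ovs-vsctl add-port br-ex eth2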

1.5.2.2. Communication Flows

The connectivity architecture provided by Neutron, with the relationships between VMs, physical nodes and virtual switches, is illustrated in Figure 1.10. The figure assumes the same deployment scenario presented in Figure 1.9, focusing on the network and compute
nodes of the OpenStack infrastructure.

Figure 1.10. Implementation of virtual networks using neutron and virtual switches.

To facilitate the discussion of VMs' communication flows, it is useful to separate the explanation into three different scenarios (Intra-node communication, Inter-VM communication and Internet communication), which are described in what follows; a sketch showing how the corresponding flow tables can be inspected is presented after the list.

• Intra-node communication: The scenario where VMs communicate with each other inside the same compute node (e.g., VM1-to-VM2). In this case, the traffic sent by VM1 is forwarded to VM2 using the integration bridge (br-int). The forwarding rule in br-int is configured inside the compute node virtual switch's flow table, using OpenFlow.
• Inter-VM communication: The scenario where VMs located on different compute
nodes communicate with each other (e.g., VM1-to-VM3). In this case, VM1’s traffic
is sent to br-int, which forwards the traffic to the tunneling bridge (br-tun), where it is
encapsulated using a tunneling protocol (e.g., GRE or VXLAN) and sent through the data
network to the other compute node. When the traffic arrives at the destination compute
node, it is decapsulated by br-tun and forwarded to the br-int bridge, from which it is
finally sent to VM3.
• Internet communication: The scenario where a VM communicates with the
outside world (e.g., VM1 to the Internet). Similarly to the previous scenario, in this case
the traffic originated in VM1 is sent to br-int, forwarded to br-tun for encapsulation with
a tunneling protocol, and sent via the data network to the network node. There, the traffic
is decapsulated on the network node’s br-tun and forwarded to the br-int bridge, which
makes use of the routing services provided by Neutron and forwards the traffic to the Internet. This communication model involves at least one compute node and one network
node.
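The flow rules behind these three scenarios can be inspected directly on each node with commands such as the following (bridge names as created by Neutron; the -O option may be required when the bridges are configured for OpenFlow 1.3):
$ sudo ovs-ofctl -O OpenFlow13 dump-flows br-int
$ sudo ovs-ofctl -O OpenFlow13 dump-flows br-tun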

Two other important components of Neutron networking are the network node's router and DHCP components (see Figure 1.10). They are implemented by means of network namespaces, a kernel facility that allows groups of processes to have
a network stack (interfaces, routing tables, iptables rules) distinct from that of the under-
lying host. More precisely, a Neutron router is a network namespace with a set of routing
tables and iptables rules that handle the routing between subnets, while the DHCP server
is an instance of the dnsmasq software running inside a network namespace, providing
dynamic IP distribution or floating IPs for tenant networks.
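A quick way to see these namespaces in action on the network node is sketched below (the router identifier is hypothetical and must be replaced by the one listed by ip netns); the routing table and NAT rules shown belong to the namespace only, not to the underlying host:
$ ip netns
$ sudo ip netns exec qrouter-<router-id> ip route
$ sudo ip netns exec qrouter-<router-id> iptables -t nat -L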

1.5.3. Neutron Plugin: Modular Layer 2 (ML2)


Neutron plugins are fundamental components in the OpenStack network architecture, as
they are responsible for implementing the entire set of network services provided by the
Neutron API. The plugins are segmented according to the different network services pro-
vided, with each service being implemented by a Neutron plugin. In Neutron, they can
be classified into two main categories: core plugins, which implement core layer 2 con-
nectivity services; and service plugins, which provide additional network services such as
load balancing, firewall and VPN.
Figure 1.11 shows Neutron’s control flow when handling network service re-
quests. After receiving a request from the Neutron API, a neutron plugin executes the
requested operation over the cloud network by using neutron plugin agents, hardware and
software appliances and external network controllers. Some of Neutron’s network plugins
are able to implement API requests using different back-ends, delegating the actual exe-
cution to drivers, such as the Modular Layer 2 (ML2), Firewall, Load Balancing and VPN
plugins. In particular, the ML2 plugin plays a key role in the experimental scenario hereby discussed, so it is worth detailing further.

Figure 1.11. Execution workflow for Neutron network services
The ML2 plugin is a framework that allows OpenStack Networking to simultane-
ously use the variety of layer 2 networking technologies found in real-world data centers.
It currently works with openvswitch, linuxbridge, and hyperv L2 agents, and is intended
to replace and deprecate the monolithic plugins associated with those L2 agents (more
precisely, starting with OpenStack’s Havana release, openvswitch and linuxbridge mono-
lithic plugins are being replaced by equivalent ML2 drivers). The ML2 framework is also intended to simplify the task of adding support for new L2 networking technologies, requiring less initial and ongoing effort than adding a new monolithic core plugin.

Figure 1.12. Overall architecture of Neutron ML2 plugin.

Figure 1.12 presents the overall architecture of the Neutron ML2 plugin. As the
name implies, the ML2 framework has a modular structure, composed of two different sets of drivers: one for the different network types (TypeDrivers) and another for the different mechanisms for accessing each network type (MechanismDrivers), as multiple mechanisms can be used simultaneously to access different ports of the same virtual network. Mechanism drivers can access L2 agents via remote procedure calls (RPC) and/or interact directly with external devices or controllers.
• Type drivers: Each available network type is managed by an ML2 TypeDriver.
TypeDrivers maintain any needed type-specific network state, and perform provider net-
work validation and tenant network allocation. The ML2 plugin currently includes drivers for
local, flat, VLAN, GRE and VXLAN network types.
• Mechanism drivers: The MechanismDriver is responsible for taking the information established by the TypeDriver and ensuring that it is properly applied, given the specific networking mechanisms that have been enabled. The MechanismDriver in-
terface currently supports the creation, update, and deletion of network resources. For
every action taken on a resource, the mechanism driver exposes two methods: a precom-
mit method (called within the database transaction context) and a postcommit method
(called after the database transaction is complete). The precommit method is used by
mechanism drivers to validate the action being taken and make any required changes to
the mechanism driver’s private database, while the postcommit method is responsible for
appropriately pushing the change to the resource or to the entity responsible for applying
that change.
The ML2 plugin architecture thus allows type drivers to support multiple networking technologies, and mechanism drivers to apply the networking configuration in a transactional model.
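As an illustration of how this modularity is exposed to operators, the ML2 configuration file (typically /etc/neutron/plugins/ml2/ml2_conf.ini; the excerpt below is only a plausible example, not taken from the case study) simply lists the enabled type and mechanism drivers:
[ml2]
type_drivers = local,flat,vlan,gre,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch,linuxbridge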

1.5.4. OpenDaylight SDN Controller


Created in April 2013 as a Linux Foundation collaborative project, OpenDaylight is an
open source OpenFlow controller and also a scalable SDN framework for the development
of several network services, including data plane protocols. As such, OpenDaylight can
be the core component of any SDN architecture. Figure 1.13 shows an overview of the main functions and benefits provided by the OpenDaylight framework.

Figure 1.13. Overview of OpenDaylight functions and benefits
The OpenDaylight architecture follows the traditional SDN design, implementing
the control layer as well as the northbound and southbound interfaces. However, differ-
ently from the majority of controllers, the OpenDaylight architecture clearly separates its
design and implementation aspects. Figure 1.14 presents an overview of the OpenDaylight architecture on the Helium release.

Figure 1.14. OpenDaylight architecture overview [Linux Foundation 2015]
The OpenDaylight SDN controller is composed of the following architectural layers:

• Network Applications, Orchestration and Services: Business applications that make use of the network services provided by the controller platform to implement control, orchestration and management applications.
• Controller Platform: Control layer that provides interfaces for all the network services implemented by the platform via a REST northbound API. The controller platform also implements a service abstraction layer (SAL), which provides a high-level view of the data plane protocols to facilitate the development of control plane applications.
• Southbound Interfaces and Protocol Plugins: Southbound interfaces contain
the plugins that implement the protocols used for programming the data plane.
• Data Plane Elements: Physical and virtual network devices that compose the
data plane and are programmed via the southbound protocol plugins. The variety of
southbound protocols supported by the OpenDaylight controller allows the deployment
of network devices from different vendors in the underlying network infrastructure.

The service abstraction layer (SAL) is one of the main innovations of the OpenDaylight architecture. To enable communication between plugins, this message exchange mechanism ignores the role of southbound and northbound plugins and builds upon the definition of Consumer and Provider plugins (see Figure 1.15): providers are plugins that expose features to applications and other plugins through their northbound APIs, whereas consumers are components that make use of the features provided by one or more providers. This design implies that every plugin inside OpenDaylight can be seen as both a provider and a consumer, depending only on the messaging flow between the plugins involved.
In OpenDaylight, SAL is responsible for managing the messaging between all the applications and underlying plugins. Figure 1.16 shows the life of a packet inside the OpenDaylight architecture, depicting the following steps:

Figure 1.15. Communication between provider and consumer plugins using SAL.

Figure 1.16. Life of a packet in OpenDaylight.

1. A packet arriving at Switch1 is sent to the appropriate protocol plugin;
2. The plugin parses the packet and generates an event for SAL;
3. SAL dispatches the packet to the service plugins listening for DataPacket;
4. The module handles the packet and sends it out via the IDataPacketService;
5. SAL dispatches the packet to the southbound plugins listening for DataPacket;
6. An OpenFlow message is sent to the appropriate switch.

Figure 1.17. Execution flow for a Neutron API request.

1.5.5. Integration Architecture


The OpenStack cloud orchestration system and OpenDaylight SDN controller are inte-
grated via a Neutron ML2 mechanism driver called ODL (acronym for OpenDayLight).
This driver implements the layer 2 connectivity services provided by the Neutron API by relying on a Neutron service application running inside the OpenDaylight controller. The controller, in turn, makes use of other OpenDaylight service applications to perform the requested actions over the layer 2 network.
Figure 1.17 illustrates the execution flow of a request from the Neutron L2 API
to the OpenDaylight controller. After receiving the request from Neutron L2 API, the
ML2 plugin selects the ODL mechanism driver based on Neutron static configuration
files and calls the driver’s methods with the parameters necessary for fulfilling the re-
quest. The ODL driver then implements the ML2 service methods by sending commands
to the OpenDaylight controller via a Neutron REST API. Neutron REST API extends the
OpenDaylight REST API to incorporate Neutron service features. Figure 1.18 depicts the
execution flow inside the SDN controller. Inside OpenDaylight, the request is forwarded
to the Neutron service application, which in turn executes the necessary actions over
the data plane using the southbound protocol plugins, communicating with them via SAL.
The southbound protocol plugins implement the communication protocols necessary for
data plane programming, such as OpenFlow and OVSDB; for example, in the case of
OpenStack, this programming refers mainly to the Open vSwitch bridges detailed in Sec-
tion 1.5.2.
Figure 1.18. Execution flow for ODL requests inside the OpenDaylight controller.
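In practice, enabling the ODL driver boils down to a few lines in the ML2 configuration. The excerpt below is a hedged sketch (the option names follow the ODL ML2 driver of this era, and the controller address and credentials are placeholders to be adjusted for each deployment):
[ml2]
mechanism_drivers = opendaylight

[ml2_odl]
url = http://<controller-ip>:8080/controller/nb/v2/neutron
username = admin
password = admin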

Table 1.4. Hardware and software requirements for the OpenStack nodes.

1.5.6. Deploying Cloud Networking with OpenStack and OpenDaylight


In what follows, we conduct a simple and didactic experiment showing how to build an
architecture that brings the benefits of SDN to cloud computing systems. For this, we use
OpenStack and OpenDaylight, giving a practical example of the OpenStack networking in
an SDN architecture, analyzing the interactions between Neutron and the OpenDaylight
controller, as well as their specific roles in this deployment.
To reproduce the experiment hereby presented, the reader needs at least two phys-
ical or virtual servers for installing separate nodes for the OpenStack’s computing and
networking services. For convenience, the demo session already provides two VMs for
those who want to execute the experiment in their own computer during the presenta-
tion or at a later date. The software and hardware requirements for each server node
are presented in Table 1.4. It is important to notice that, in our deployment scenario,
the OpenStack controller services run in the same node as the network services (i.e., in
the Network Node). This is not strictly necessary, however, as they could run on any server connected to the same subnet as the Network Node.
Figure 1.19. Network topology used in the experiment.

1.5.6.1. Bootstrapping the Compute and Network Nodes

As explained before, this experiment will be performed based on two VMs configured as
described in Table 1.4. Any virtualization system can be adopted to perform the experi-
ment in a virtualized environment, which means running the compute and network nodes
as VMs. The only important restriction is that both nodes should be connected to the same
local network. For the purpose of this demo, we assume the network configurations pre-
sented in Figure 1.19. Before proceeding with the experiment, it is important to execute
ping requests between the nodes, using the corresponding IP addresses, to ensure there is
connectivity between them.

1.5.6.2. Starting Open vSwitch

To proceed with the setup of OpenStack, we should start the Open vSwitch software
[Open vSwitch 2015], the switch virtualization software that is used to create the virtual
bridges for both compute and network nodes, as discussed in Section 1.5.2. During the
OpenStack startup process, Neutron makes use of the running Open vSwitch software to
build the necessary bridges in the server nodes. To run the Open vSwitch software in the
experiment’s servers, the following command should be executed on the terminal of both
compute and network nodes.
$ sudo /sbin/service openvswitch start

To verify that there is no existing bridges so far, the following command should
be run on the terminal of both compute and network nodes:
$ sudo ovs-vsctl show

The expected result is an empty list, indicating that the nodes have no bridge
configured. If that is not the case, the existing bridges should be removed. This can be
accomplished by running the following command, which deletes the bridge named br0
and all of its configurations:
$ sudo ovs-vsctl del-br br0
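If several leftover bridges exist, they can all be removed at once with a small shell loop (shown here only as a convenience; it simply repeats the deletion command above for every bridge reported by Open vSwitch):
$ for br in $(sudo ovs-vsctl list-br); do sudo ovs-vsctl del-br $br; done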
1.5.6.3. Running the OpenDaylight Controller

We are now ready to run the OpenDaylight SDN controller, which will be used by Neutron
to dynamically program the created virtual network bridges. As explained in Section
1.5.5, the SDN controller receives REST requests from the ODL driver, which implements
the methods called by ML2 plugin for implementing the layer 2 network services provided
by the Neutron API. To run OpenDaylight, the following commands should be executed on
the Network and Controller node:
$ cd odl/opendaylight/
$ ./run.sh -XX:MaxPermSize=384m -virt ovsdb -of13

This command starts OpenDaylight, setting the JVM's maximum permanent generation size to 384 MB (-XX:MaxPermSize=384m), running the controller with its OVSDB-based virtualization configuration (-virt ovsdb), and enabling support for version 1.3 of the OpenFlow protocol (-of13). While the controller runs, the terminal presents a dynamic log of all controller operations, including bridge creation, flow configuration and the deployment of new service applications.
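Before moving on, it may be useful to confirm that the controller's northbound interface is answering. Assuming the default admin/admin credentials and the Neutron northbound path used by this OpenDaylight release, a request such as the following, issued against the controller node of Figure 1.19, should return an (initially empty) list of networks:
$ curl -u admin:admin http://192.168.56.20:8080/controller/nb/v2/neutron/networks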

1.5.6.4. Running DevStack

Now that we have Open vSwitch and the OpenDaylight SDN controller running, we
can start the OpenStack services on the compute node and on the network and controller
node. These nodes will then make use of these software resources to create the entire
virtual network infrastructure.
In this experiment, we run OpenStack through the Devstack project
[Devstack 2015], which consists of an installation script to get the whole stack of Open-
Stack services up and running. Created for development purposes, Devstack provides
a non-persistent cloud environment that supports the deployment, experimentation, de-
bugging and test of cloud applications. To start the necessary OpenStack services in our
deployment, the following commands should be executed on both compute and network
nodes.
$ cd devstack
$ ./stack.sh

The initialization can take a few minutes, since the script “stack.sh”
contains the setup commands to start all the specified OpenStack services on the local
node. For the purpose of this demo, the DevStack scripts located inside the nodes are
pre-configured to run network, controller and compute services on the network node
and only compute services on the compute node. For didactic purposes, we also start
compute services on the network node. This gives us two compute nodes
in our deployment infrastructure, so we can distribute the instantiated VMs across different
servers over the layer 2 network. To verify that we have two compute nodes up and
running, the following command should be run on the network node:
$ . ./openrc admin demo
$ nova hypervisor-list
As a result of this command, the IDs and hostnames of the two hypervisors, running
on the compute node and on the network node, should be shown.
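For reference, the pre-configured DevStack setup mentioned above is usually driven by a local.conf file placed in the devstack directory. The fragment below is a minimal, illustrative sketch of the kind of settings used on the network and controller node; the OpenDaylight-related variable names (such as ODL_MGR_IP) and the exact service list depend on the DevStack version and integration scripts, and are assumptions here:
[[local|localrc]]
HOST_IP=192.168.56.20
disable_service n-net                      # disable nova-network in favor of Neutron
enable_service q-svc q-dhcp q-l3 q-meta    # Neutron services
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=opendaylight,logger
ODL_MGR_IP=192.168.56.20                   # IP address of the OpenDaylight controller
ENABLE_TENANT_TUNNELS=True                 # use tunnels for tenant traffic between nodes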
After executing the DevStack script, we should see new log messages in the Open-
Daylight terminal, inside the network node. These messages correspond to the creation of
two virtual networks by Neutron using the virtual bridges. The networks created corre-
spond to the default private and public OpenStack networks, used to connect the VMs of
the default tenant in a private virtual LAN and to provide Internet connectivity to
those VMs. By running the following command inside each OpenStack node, we should
be able to visualize the virtual bridges created by Neutron during the setup process with the
OpenDaylight controller:
$ sudo ovs-vsctl show

The results obtained should be different for the compute and the network nodes.
This happens because Neutron creates the external bridge (br-ex) only on the network
node, enabling it to provide Internet connectivity to the cloud VMs.
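As an illustration, a simplified output on the network node should resemble the sketch below (UUIDs, patch ports and VM interfaces are omitted, and the controller/manager addresses assume the network and controller node at 192.168.56.20); on the compute node, the br-ex entry should be absent:
$ sudo ovs-vsctl show
    Manager "tcp:192.168.56.20:6640"
        is_connected: true
    Bridge br-int
        Controller "tcp:192.168.56.20:6633"
        Port br-int
            Interface br-int
    Bridge br-tun
        Controller "tcp:192.168.56.20:6633"
        Port br-tun
            Interface br-tun
    Bridge br-ex
        Port br-ex
            Interface br-ex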

1.5.6.5. Instantiating Virtual Machines in OpenStack

In this step, we instantiate two VMs for the default tenant of the OpenStack cloud.
Then, we analyze their communication through the virtual network architecture created
by Neutron. The following commands instantiate the VMs, named demo-vm1 and demo-
vm2:
$ nova boot --flavor m1.tiny --image cirros-0.3.1-x86_64-uec --nic net-id=$(neutron net-list | grep private | awk '{print $2}') demo-vm1

$ nova boot --flavor m1.tiny --image cirros-0.3.1-x86_64-uec --nic net-id=$(neutron net-list | grep private | awk '{print $2}') demo-vm2

To check the status of both VMs and verify that they were successfully created, the
following command should be used; it should show that both VMs are in the active state:
$ nova list

To verify on which node each VM is running, the following command can be executed on each node; it lists the VM instances running on the local hypervisor:
$ sudo virsh list

Now, to make sure that the created VMs are reachable on the network created by
Neutron, we can ping them from the router and DHCP servers created as compo-
nents of the virtual network infrastructure deployed on the network node. To do
that, we should first run the following command on the network node:
$ ip netns

From the output of the previous command, we are able to identify the namespaces
of the qrouter and qdhcp services running inside the network node, for example:
qdhcp-3f0cfbd2-f23c-481a-8698-3b2dcb7c2657
qrouter-992e450a-875c-4721-9c82-606c283d4f92

These namespaces are necessary to ping the VMs from both the qrouter and the qdhcp
services using the VMs' private network IPs, as illustrated by the following command:
$ sudo ip netns exec qdhcp-3f0cfbd2-f23c-481a-8698-3b2dcb7c2657 ping 10.0.0.2

If everything was correctly configured, the ping should succeed, since the DHCP
instance is connected to the VMs via the internal and tunneling bridges (br-int and
br-tun).
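It is also possible to inspect the flow rules that OpenDaylight installed on these bridges directly from the command line, using the OpenFlow 1.3 support enabled earlier:
$ sudo ovs-ofctl -O OpenFlow13 dump-flows br-int
$ sudo ovs-ofctl -O OpenFlow13 dump-flows br-tun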

1.5.6.6. Accessing the OpenDaylight GUI

Finally, we can visualize the network topology created by Neutron via the OpenDaylight
GUI (Graphical User Interface). To do that, the following URL should be accessed from
any of the server nodes:
http://192.168.56.20:8080

The OpenDaylight GUI should show all the network bridges (represented as net-
work devices) created and configured by Neutron through the controller. It is possible to
visualize the flow tables of each virtual bridge, as well as to insert and delete flows. The
GUI also acts as a control point for all the data plane elements supported
by the southbound API protocols, enabling monitoring, management and operation func-
tions over the network.
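Besides the GUI, the same information can be retrieved programmatically through the controller's northbound REST API [Richardson and Ruby 2008]. The example below queries the topology of the default container, assuming the default admin/admin credentials and the Hydrogen-era northbound paths (both are assumptions that may vary with the controller release):
$ curl -u admin:admin http://192.168.56.20:8080/controller/nb/v2/topology/default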

1.6. Final Considerations, Challenges and Perspectives


In this course we introduced the role of network virtualization in implementing and deliv-
ering cloud computing services. We presented the available network virtualization mech-
anisms and described how they can address cloud networking requirements. The advent
of SDN introduced new ways of approaching network virtualization and network control
inside and outside the cloud. Together with the NFV approach, SDN improved cloud net-
work control and accelerated the provision of innovative network services. To illustrate
the feasibility of integrating the cloud computing and SDN paradigms, we presented
a practical study of the deployment of OpenStack and OpenDaylight, both widely adopted
open source projects supported by user and industry communities.
It is important to analyze the different challenges experienced when virtualizing
networks inside cloud data centers, as well as to understand how SDN and the available vir-
tualization mechanisms can help to face these challenges. The first major challenge in
deploying virtual networks in cloud computing is to ensure predefined performance levels
for the different tenant applications running inside the cloud. Cloud providers should be able
to deliver specific network guarantees for tenant applications, such as the bandwidth predefined
in a service level agreement (SLA). Insufficient bandwidth can cause significant latency
in the interaction between users and the application, reducing the quality of service
(QoS) provided to and by cloud tenants. The emergence of control models such as SDN,
together with hardware-assisted virtualization technologies such as SR-IOV, is expected
to improve the control capacity over shared network resources. Efficient architectures
that integrate both control and virtualization technologies in the same cloud platform should
be developed to provide the control granularity necessary to offer different ser-
vice levels to different tenants. Ensuring the flexible deployment of security appliances
on the tenant network infrastructure is also a challenging task for cloud networking. Orga-
nizations usually deploy a variety of security appliances in their networks, such as deep
packet inspection (DPI) systems, intrusion detection systems (IDSs) and firewalls, to protect their
valuable resources. These are often employed alongside other appliances that perform
load balancing, caching and application acceleration. The network virtualization infras-
tructure should give cloud tenants the flexibility to deploy security appliances in-
side their cloud infrastructure. The network programmability provided by SDN
platforms and data plane protocols such as OpenFlow enables the delivery of
network security solutions as a service inside the cloud. SDN-based solutions can also
abstract the implementation of these security services, providing cloud-agnostic solutions
and furthering standardization efforts.
SDN and NFV appliances can also address the challenges of policy enforcement
complexity. These policies define the configuration of each virtual and physical resource
in the cloud network. Traffic isolation and end-user access control are among the mul-
tiple forwarding policies that can be enforced by deploying SDN and NFV solutions in the
cloud. SDN also provides a framework to implement support for vendor-specific proto-
cols, addressing the challenge of building, operating and interconnecting a cloud network
at scale. The need to rewrite or reconfigure applications to cope with network-related
constraints, such as the lack of a broadcast domain abstraction and the cloud-assigned
IP addresses of virtual servers, also represents a barrier to the adoption of cloud com-
puting. The separation between control plane and data plane provided by SDN enables
the development of network services that abstract the underlying network implementa-
tion from cloud applications. Tunneling protocols such as VXLAN enable L2 overlay
schemes over an L3 network, also providing the transparency needed to build tenant networks
inside the cloud.
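As a concrete illustration of such an overlay, the command below manually creates a VXLAN tunnel port on an Open vSwitch bridge, encapsulating L2 tenant traffic towards a remote hypervisor over the L3 network (the bridge name and remote IP are hypothetical; in the deployment described in Section 1.5, this configuration is performed automatically by Neutron and OpenDaylight):
$ sudo ovs-vsctl add-port br-tun vxlan0 -- set interface vxlan0 type=vxlan options:remote_ip=192.168.56.21 options:key=flow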
Dealing with different requirements for topology design in cloud data centers in-
creases the management complexity of network configurations. Network topologies opti-
mized for communication among servers inside the data center are not the same as those
optimized for communication between servers and the Internet. The topology design also depends on
how L2 and/or L3 forwarding utilizes the effective network capacity, and evolving the topology
based on traffic pattern changes requires complex and dynamic configuration of L2
and L3 forwarding rules. Traffic monitoring, network intelligence and dynamic data plane
configuration are requirements that can be addressed by SDN control frameworks, inte-
grated into cloud data centers through the control and/or the application layers. Topol-
ogy design and implementation are also key to dealing with VM migration
challenges. Network appliances are typically tied to statically configured physical net-
works, which implicitly creates a location dependence constraint for VMs. A compute node's IP
address is usually based on the VLAN or subnet to which it belongs, both configured
in the physical switch ports. Therefore, a VM cannot be easily migrated across the cloud
network, decreasing flexibility and resource utilization. The centralization
of control over a logical abstraction of the underlying network, both enabled by the
SDN paradigm, facilitates IP management tasks during VM migration. Technologies such
as SR-IOV also provide an abstraction of shared virtualization mechanisms such as vir-
tual switches, approaching VM migration with mechanisms similar to those applied to the
migration of physical servers in an L3 network.
As observed in the last paragraphs and throughout this course, network vir-
tualization in cloud computing presents major challenges. Nevertheless, the new network
organization paradigm introduced by SDN, combined with virtualization technologies such as
NFV and SR-IOV, provides not only an entire set of virtualization and control mech-
anisms to meet the challenges of cloud networking, but also a new way of thinking
about networking. Therefore, exploring SDN in cloud computing environments can be
seen as a promising path to innovation in the network field.

References
[Al-Shaer and Al-Haj 2010] Al-Shaer, E. and Al-Haj, S. (2010). FlowChecker: Config-
uration analysis and verification of federated Openflow infrastructures. In Proc. of the
3rd ACM Workshop on Assurable and Usable Security Configuration (SafeConfig’10),
pages 37–44, New York, NY, USA. ACM.
[Alkmim et al. 2011] Alkmim, G., Batista, D., and Fonseca, N. (2011). Mapeamento de
redes virtuais em substratos de rede. In Anais do Simpósio Brasileiro de Redes de Com-
putadores e Sistemas Distribuídos – SBRC’2011, pages 45–58. Sociedade Brasileira de
Computação – SBC.
[Amazon 2014] Amazon (2014). Virtualization Types – Amazon Elastic Com-
pute Cloud. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/
virtualization_types.html. Accessed: 2015-03-13.

[Anderson et al. 2005] Anderson, T., Peterson, L., Shenker, S., and Turner, J. (2005).
Overcoming the Internet impasse through virtualization. Computer, 38(4):34–41.
[Autenrieth et al. 2013] Autenrieth, A., Elbers, J.-P., Kaczmarek, P., and Kostecki, P.
(2013). Cloud orchestration with SDN/OpenFlow in carrier transport networks. In
15th Int. Conf. on Transparent Optical Networks (ICTON), pages 1–4.
[Aznar et al. 2013] Aznar, J., Jara, M., Rosello, A., Wilson, D., and Figuerola, S. (2013).
OpenNaaS based management solution for inter-data centers connectivity. In IEEE 5th
Int. Conf. on Cloud Computing Technology and Science (CloudCom), volume 2, pages
75–80.
[Barros et al. 2015] Barros, B., Iwaya, L., Andrade, E., Leal, R., Simplicio, M., Car-
valho, T., Mehes, A., and Näslund, M. (2015). Classifying security threats in cloud
networking. In Proc. of the 5th Int. Conf. on Cloud Computing and Services Science
(CLOSER’2015) (to appear). Springer.
[Bavier et al. 2006] Bavier, A., Feamster, N., Huang, M., Peterson, L., and Rexford, J.
(2006). In VINI veritas: Realistic and controlled network experimentation. In Proc. of
the 2006 Conf. on Applications, Technologies, Architectures, and Protocols for Com-
puter Communications (SIGCOMM’06), pages 3–14, New York, NY, USA. ACM.
[Berde et al. 2014] Berde, P., Gerola, M., Hart, J., Higuchi, Y., Kobayashi, M., Koide, T.,
Lantz, B., O’Connor, B., Radoslavov, P., Snow, W., and Parulkar, G. (2014). ONOS:
towards an open, distributed SDN OS. In Proc. of the 3rd Workshop on Hot topics in
software defined networking, pages 1–6. ACM.

[Berman et al. 2014] Berman, M., Chase, J., Landweber, L., Nakao, A., Ott, M., Ray-
chaudhuri, D., Ricci, R., and Seskar, I. (2014). GENI: A federated testbed for innova-
tive network experiments. Computer Networks, 61:5–23.

[Bilger et al. 2013] Bilger, B., Boehme, A., Flores, B., Schweitzer, J., and Islam, J.
(2013). Software Defined Perimeter. Cloud Security Alliance – CSA. https:
//cloudsecurityalliance.org/research/sdp/. Accessed: 2015-03-13.

[Braun and Menth 2014] Braun, W. and Menth, M. (2014). Software-defined network-
ing using OpenFlow: Protocols, applications and architectural design choices. Future
Internet, 6(2):302–336.

[Caesar et al. 2005] Caesar, M., Caldwell, D., Feamster, N., Rexford, J., Shaikh, A., and
van der Merwe, J. (2005). Design and implementation of a routing control platform.
In Proc. of the 2nd Symposium on Networked Systems Design & Implementation, vol-
ume 2, pages 15–28. USENIX Association.

[Canini et al. 2012] Canini, M., Venzano, D., Perešíni, P., Kostić, D., and Rexford, J.
(2012). A NICE way to test Openflow applications. In Proc. of the 9th USENIX Conf.
on Networked Systems Design and Implementation (NSDI’12), pages 10–10.

[Carapinha and Jiménez 2009] Carapinha, J. and Jiménez, J. (2009). Network virtual-
ization: a view from the bottom. In Proc. of the 1st ACM workshop on Virtualized
infrastructure systems and architectures, pages 73–80. ACM.

[Casado 2015] Casado, M. (2015). List of OpenFlow software projects. http://yuba.
stanford.edu/~casado/of-sw.html. Accessed: 2015-03-01.

[Casado et al. 2007] Casado, M., Freedman, M., Pettit, J., Luo, J., McKeown, N., and
Shenker, S. (2007). Ethane: Taking control of the enterprise. In Proc. of the 2007
Conference on Applications, Technologies, Architectures, and Protocols for Computer
Communications (SIGCOMM’07), pages 1–12, New York, NY, USA. ACM.

[Cheng et al. 2014] Cheng, Y., Ganti, V., Lubsey, V., Shekhar, M., and Swan, C.
(2014). Software-Defined Networking Rev. 2.0. White paper, Open Data Center
Alliance, Beaverton, OR, USA. http://www.opendatacenteralliance.org/
docs/software_defined_networking_master_usage_model_rev2.pdf.
Accessed: 2015-03-13.

[Chun et al. 2003] Chun, B., Culler, D., Roscoe, T., Bavier, A., Peterson, L., Wawrzo-
niak, M., and Bowman, M. (2003). PlanetLab: an overlay testbed for broad-coverage
services. ACM SIGCOMM Computer Communication Review, 33(3):3–12.
[Costa et al. 2012] Costa, P., Migliavacca, M., Pietzuch, P., and Wolf, A. (2012). NaaS:
Network-as-a-service in the cloud. In Proc. of the 2nd USENIX Conf. on Hot Topics in
Management of Internet, Cloud, and Enterprise Networks and Services (Hot-ICE’12),
pages 1–1, Berkeley, CA, USA. USENIX Association.
[CSA 2011] CSA (2011). SecaaS: Defined categories of service 2011. Technical report,
Cloud Security Alliance. https://downloads.cloudsecurityalliance.
org/initiatives/secaas/SecaaS_V1_0.pdf.
[Devstack 2015] Devstack (2015). DevStack – an OpenStack Community Production.
http://docs.openstack.org/developer/devstack/. Accessed: 2015-03-13.

[Duan 2014] Duan, Q. (2014). Network-as-a-service in software-defined networks for


end-to-end qos provisioning. In 23rd Wireless and Optical Communication Confer-
ence, (WOCC’2014), pages 1–5.
[Enns et al. 2011] Enns, R., Bjorklund, M., Schoenwaelder, J., and Bierman, A. (2011).
RFC 6241 – network configuration protocol (NETCONF). https://tools.
ietf.org/html/rfc6241.
[Erickson 2013] Erickson, D. (2013). The Beacon OpenFlow controller. In Proc. of the
2nd SIGCOMM Workshop on Hot topics in software defined networking, pages 13–18.
ACM.
[ETSI 2012] ETSI (2012). Network functions virtualisation: An introduction, benefits,
enablers, challenges & call for action. White paper, European Telecommunications
Standards Institute (ETSI). http://portal.etsi.org/NFV/NFV_White_Paper.
pdf.

[Farinacci et al. 2000] Farinacci, D., Li, T., Hanks, S., Meyer, D., and Traina, P. (2000).
RFC 2784 – generic routing encapsulation (GRE). https://tools.ietf.org/
html/rfc2784.
[Feamster et al. 2007] Feamster, N., Gao, L., and Rexford, J. (2007). How to lease
the internet in your spare time. ACM SIGCOMM Computer Communication Review,
37(1):61–64.
[Feamster et al. 2013] Feamster, N., Rexford, J., and Zegura, E. (2013). The road to
SDN. Queue, 11(12):20–40.
[Feamster et al. 2014] Feamster, N., Rexford, J., and Zegura, E. (2014). The road to
SDN: An intellectual history of programmable networks. SIGCOMM Comput. Com-
mun. Rev., 44(2):87–98.
[Floodlight 2015] Floodlight (2015). Floodlight OpenFlow Controller. http://www.
projectfloodlight.org/floodlight/. Accessed: 2015-03-01.

[Gember et al. 2012] Gember, A., Prabhu, P., Ghadiyali, Z., and Akella, A. (2012). To-
ward software-defined middlebox networking. In Proc. of the 11th ACM Workshop on
Hot Topics in Networks, HotNets-XI, pages 7–12, New York, NY, USA. ACM.
[Greenberg et al. 2005] Greenberg, A., Hjalmtysson, G., Maltz, D. A., Myers, A., Rex-
ford, J., Xie, G., Yan, H., Zhan, J., and Zhang, H. (2005). A clean slate 4d approach
to network control and management. ACM SIGCOMM Computer Communication Re-
view, 35(5):41–54.

[Greene 2009] Greene, K. (2009). TR10: Software-defined networking. Technology
Review (MIT).

[Gude et al. 2008] Gude, N., Koponen, T., Pettit, J., Pfaff, B., Casado, M., McKeown,
N., and Shenker, S. (2008). NOX: towards an operating system for networks. ACM
SIGCOMM Computer Communication Review, 38(3):105–110.

[Handigol et al. 2012a] Handigol, N., Heller, B., Jeyakumar, V., Lantz, B., and McKe-
own, N. (2012a). Reproducible network experiments using container-based emulation.
In Proc. of the 8th Int. Conf. on Emerging networking experiments and technologies,
pages 253–264. ACM.

[Handigol et al. 2012b] Handigol, N., Heller, B., Jeyakumar, V., Maziéres, D., and McK-
eown, N. (2012b). Where is the debugger for my software-defined network? In Proc.
of the 1st Workshop on Hot Topics in Software Defined Networks (HotSDN’12), pages
55–60, New York, NY, USA. ACM.

[Hu et al. 2014a] Hu, F., Hao, Q., and Bao, K. (2014a). A survey on software-defined
network and OpenFlow: From concept to implementation. IEEE Communications
Surveys & Tutorials, 16(4):2181–2206.

[Hu et al. 2014b] Hu, H., Han, W., Ahn, G.-J., and Zhao, Z. (2014b). FlowGuard: Build-
ing robust firewalls for software-defined networks. In Proc. of the 3rd Workshop on
Hot Topics in Software Defined Networking (HotSDN’14), pages 97–102, New York,
NY, USA. ACM.

[IEEE 2012a] IEEE (2012a). 802.1BR-2012 – IEEE standard for local and metropolitan
area networks–virtual bridged local area networks–bridge port extension. Technical
report, IEEE Computer Society.

[IEEE 2012b] IEEE (2012b). IEEE standard for local and metropolitan area networks–
media access control (MAC) bridges and virtual bridged local area networks–
amendment 21: Edge virtual bridging. IEEE Std 802.1Qbg-2012, pages 1–191.

[IEEE 2014] IEEE (2014). IEEE standard for local and metropolitan area networks–
bridges and bridged networks. IEEE Std 802.1Q-2014, pages 1–1832.

[Jafarian et al. 2012] Jafarian, J., Al-Shaer, E., and Duan, Q. (2012). Openflow ran-
dom host mutation: Transparent moving target defense using software defined net-
working. In Proc. of the 1st Workshop on Hot Topics in Software Defined Networks
(HotSDN’12), pages 127–132, New York, NY, USA. ACM.

[Jain and Paul 2013a] Jain, R. and Paul, S. (2013a). Network virtualization and software
defined networking for cloud computing: a survey. Communications Magazine, IEEE,
51(11):24–31.
[Jain and Paul 2013b] Jain, R. and Paul, S. (2013b). Network virtualization and software
defined networking for cloud computing: a survey. Communications Magazine, IEEE,
51(11):24–31.

[Jammal et al. 2014] Jammal, M., Singh, T., Shami, A., Asal, R., and Li, Y. (2014).
Software-defined networking: State of the art and research challenges. CoRR,
abs/1406.0124.

[Khurshid et al. 2013] Khurshid, A., Zou, X., Zhou, W., Caesar, M., and Godfrey, P.
(2013). VeriFlow: Verifying network-wide invariants in real time. In Proc. of the 10th
USENIX Conference on Networked Systems Design and Implementation (NSDI’13),
pages 15–28, Berkeley, CA, USA. USENIX Association.

[Kim et al. 2013] Kim, D., Gil, J.-M., Wang, G., and Kim, S.-H. (2013). Integrated sdn
and non-sdn network management approaches for future internet environment. In Mul-
timedia and Ubiquitous Engineering, pages 529–536. Springer.

[Kim and Feamster 2013] Kim, H. and Feamster, N. (2013). Improving network manage-
ment with software defined networking. Communications Magazine, IEEE, 51(2):114–
119.

[Koponen et al. 2014] Koponen, T., Amidon, K., Balland, P., Casado, M., Chanda, A.,
Fulton, B., Ganichev, I., Gross, J., Gude, N., Ingram, P., et al. (2014). Network virtu-
alization in multi-tenant datacenters. In USENIX NSDI.

[Koponen et al. 2010] Koponen, T., Casado, M., Gude, N., Stribling, J., Poutievski, L.,
Zhu, M., Ramanathan, R., Iwata, Y., Inoue, H., Hama, T., et al. (2010). Onix: A
distributed control platform for large-scale production networks. In OSDI, volume 10,
pages 1–6.

[Kotsovinos 2010] Kotsovinos, E. (2010). Virtualization: Blessing or Curse? Queue,
8(11):40:40–40:46.

[Kreutz et al. 2013] Kreutz, D., Ramos, F., and Verissimo, P. (2013). Towards secure and
dependable software-defined networks. In Proc. of the 2nd ACM SIGCOMM Workshop
on Hot Topics in Software Defined Networking (HotSDN’13), pages 55–60, New York,
NY, USA. ACM.

[Kreutz et al. 2014] Kreutz, D., Ramos, F. M. V., Veríssimo, P., Rothenberg, C. E.,
Azodolmolky, S., and Uhlig, S. (2014). Software-defined networking: A compre-
hensive survey. CoRR, abs/1406.0440.

[Lakshman et al. 2004] Lakshman, T., Nandagopal, T., Ramjee, R., Sabnani, K., and
Woo, T. (2004). The softrouter architecture. In Proc. ACM SIGCOMM Workshop
on Hot Topics in Networking, volume 2004.

[Lantz et al. 2010] Lantz, B., Heller, B., and McKeown, N. (2010). A network in a lap-
top: rapid prototyping for software-defined networks. In Proc. of the 9th ACM SIG-
COMM Workshop on Hot Topics in Networks, page 19. ACM.
[Lin et al. 2014] Lin, Y., Pitt, D., Hausheer, D., Johnson, E., and Lin, Y. (2014).
Software-defined networking: Standardization for cloud computing’s second wave.
Computer, 47(11):19–21.

[Linux Foundation 2015] Linux Foundation (2015). OpenDaylight, a Linux Foundation
Collaborative Project. http://www.opendaylight.org/. Accessed: 2015-03-01.

[Mahalingam et al. 2014a] Mahalingam, M., Dutt, D., Duda, K., Agarwal, P., Kreeger,
L., Sridhar, T., Bursell, M., and Wright, C. (2014a). Virtual extensible local area net-
work (VXLAN): A framework for overlaying virtualized layer 2 networks over layer
3 networks. Internet Req. Comments.

[Mahalingam et al. 2014b] Mahalingam, M., Dutt, D., Duda, K., Agarwal, P., Kreeger,
L., Sridhar, T., Bursell, M., and Wright, C. (2014b). Vxlan: A framework for overlay-
ing virtualized layer 2 networks over layer 3 networks. draft-mahalingam-dutt-dcops-
vxlan-08.

[McKeown et al. 2008] McKeown, N., Anderson, T., Balakrishnan, H., Parulkar, G., Pe-
terson, L., Rexford, J., Shenker, S., and Turner, J. (2008). OpenFlow: enabling in-
novation in campus networks. ACM SIGCOMM Computer Communication Review,
38(2):69–74.

[Mechtri et al. 2013] Mechtri, M., Houidi, I., Louati, W., and Zeghlache, D. (2013).
SDN for inter cloud networking. In IEEE SDN for Future Networks and Services
(SDN4FNS), pages 1–7.

[Medved et al. 2014] Medved, J., Varga, R., Tkacik, A., and Gray, K. (2014). Open-
Daylight: Towards a model-driven SDN controller architecture. In 2014 IEEE 15th
International Symposium on a World of Wireless, Mobile and Multimedia Networks
(WoWMoM), pages 1–6. IEEE.

[Mell and Grance 2011] Mell, P. and Grance, T. (2011). The nist definition of cloud
computing. Technical Report 800-145, National Institute of Standards and Technology
(NIST).

[Menascé 2005] Menascé, D. A. (2005). Virtualization: Concepts, applications, and per-
formance modeling. In CMG Conference, pages 407–414.

[Moreira et al. 2009] Moreira, M. D. D., Fernandes, N. C., Costa, L. H. M. K., and
Duarte, O. C. M. B. (2009). Internet do futuro: Um novo horizonte. Minicursos do
Simpósio Brasileiro de Redes de Computadores e Sistemas Distribuídos – SBRC’2009,
2009:1–59.

[Nayak et al. 2009] Nayak, A., Reimers, A., Feamster, N., and Clark, R. (2009). Reso-
nance: Dynamic access control for enterprise networks. In Proc. of the 1st ACM Work-
shop on Research on Enterprise Networking (WREN’09), pages 11–18, New York, NY,
USA. ACM.

[NOXRepo.org 2015] NOXRepo.org (2015). About NOX. http://www.noxrepo.
org/pox/about-nox/. Accessed: 2015-03-01.

[NOXRepo.org 2015] NOXRepo.org (2015). About POX. Accessed: 2015-03-01.

[Open Networking Foundation 2012] Open Networking Foundation (2012). Software-
Defined Networking: The New Norm for Networks. White paper, Open Networking
Foundation, Palo Alto, CA, USA.

[Open vSwitch 2015] Open vSwitch (2015). Production Quality, Multilayer Open Vir-
tual Switch. http://openvswitch.org/. Accessed: 2015-02-28.

[OpenContrail 2014] OpenContrail (2014). OpenContrail: An open-source network vir-
tualization platform for the cloud. http://www.opencontrail.org/. Accessed:
2015-02-28.

[OpenFlow 2009] OpenFlow (2009). OpenFlow Switch Specification, version 1.0.0.

[OpenFlow 2012] OpenFlow (2012). OpenFlow Switch Specification, version 1.3.0.

[OpenStack 2015] OpenStack (2015). OpenStack: Open source cloud computing soft-
ware. https://www.openstack.org/. Accessed: 2015-02-28.

[Pan et al. 2011] Pan, J., Paul, S., and Jain, R. (2011). A survey of the research on future
internet architectures. Communications Magazine, IEEE, 49(7):26–36.

[Pan and Wu 2009] Pan, L. and Wu, H. (2009). Smart trend-traversal: A low delay and
energy tag arbitration protocol for large RFID systems. In INFOCOM 2009, IEEE, pages
2571–2575. IEEE.

[Patel et al. 2013] Patel, P., Bansal, D., and Yuan, L. (2013). Ananta: Cloud scale load
balancing. In Proc. of SIGCOMM 2013, pages 207–218, New York, NY, USA. ACM.

[PCI-SIG 2010] PCI-SIG (2010). Single Root I/O Virtualization and Sharing 1.1
Specification. PCI-SIG. http://www.pcisig.com/. Accessed: 2015-03-13.

[Pfaff and Davie 2013] Pfaff, B. and Davie, B. (2013). The Open vSwitch Database Man-
agement Protocol. RFC Editor.

[Pfaff et al. 2009] Pfaff, B., Pettit, J., Amidon, K., Casado, M., Koponen, T., and
Shenker, S. (2009). Extending networking into the virtualization layer. In Hotnets.

[Porras et al. 2015] Porras, P., Cheung, S., Fong, M., Skinner, K., and Yegneswaran, V.
(2015). Securing the Software-Defined Network Control Layer. In Proc. of the 2015
Network and Distributed System Security Symposium (NDSS).

[Porras et al. 2012] Porras, P., Shin, S., Yegneswaran, V., Fong, M., Tyson, M., and Gu,
G. (2012). A security enforcement kernel for openflow networks. In Proc. of the 2st
Workshop on Hot Topics in Software Defined Networks (HotSDN’12), pages 121–126,
New York, NY, USA. ACM.

[Richardson and Ruby 2008] Richardson, L. and Ruby, S. (2008). RESTful Web Services.
O'Reilly Media, Inc.
[Rouse 2010] Rouse, M. (2010). Security as a Service (SaaS). http://
searchsecurity.techtarget.com/definition/Security-as-a-Service.
Accessed: 2015-03-01.

[Ryu 2015] Ryu (2015). Component-based software-defined networking framework.
http://osrg.github.io/ryu/. Accessed: 2015-03-01.

[Scott-Hayward et al. 2013] Scott-Hayward, S., O’Callaghan, G., and Sezer, S. (2013).
SDN security: A survey. In Future Networks and Services (SDN4FNS), 2013 IEEE
SDN for, pages 1–7.

[Sezer et al. 2013] Sezer, S., Scott-Hayward, S., Chouhan, P., Fraser, B., Lake, D.,
Finnegan, J., Viljoen, N., Miller, M., and Rao, N. (2013). Are we ready for SDN? Im-
plementation challenges for software-defined networks. Communications Magazine,
IEEE, 51(7):36–43.

[Sherwood et al. 2010] Sherwood, R., Gibb, G., Yap, K.-K., Appenzeller, G., Casado,
M., McKeown, N., and Parulkar, G. M. (2010). Can the production network be the
testbed? In OSDI, volume 10, pages 1–6.

[Shin et al. 2013] Shin, S., Porras, P., Yegneswaran, V., Fong, M., Gu, G., and Tyson,
M. (2013). FRESCO: Modular composable security services for software-defined net-
works. In 20th Annual Network and Distributed System Security Symposium (NDSS).
The Internet Society.

[Sridharan et al. 2011] Sridharan, M., Greenberg, A., Venkataramiah, N., Wang, Y.,
Duda, K., Ganga, I., Lin, G., Pearson, M., Thaler, P., and Tumuluri, C. (2011). Nvgre:
Network virtualization using generic routing encapsulation. IETF draft.

[Tootoonchian and Ganjali 2010] Tootoonchian, A. and Ganjali, Y. (2010). HyperFlow:
A distributed control plane for OpenFlow. In Proc. of the 2010 Internet Network Man-
agement Conference on Research on Enterprise Networking (INM/WREN'10), pages
1–6, Berkeley, CA, USA. USENIX Association.

[Turner and Taylor 2005] Turner, J. and Taylor, D. (2005). Diversifying the Internet. In
IEEE Global Telecommunications Conference (GLOBECOM’05), volume 2, pages 6–
pp.

[Wen et al. 2012] Wen, X., Gu, G., Li, Q., Gao, Y., and Zhang, X. (2012). Comparison
of open-source cloud management platforms: OpenStack and OpenNebula. In 2012
9th Int. Conf. on Fuzzy Systems and Knowledge Discovery, pages 2457–2461.

[Xiong 2014] Xiong, Z. (2014). An SDN-based IPS development framework in cloud
networking environment. Master's thesis, Arizona State University, Arizona, USA.

[Yang et al. 2004] Yang, L., Dantu, R., Anderson, T., and Gopal, R. (2004). RFC 3746 –
forwarding and control element separation (ForCES) framework. https://tools.
ietf.org/html/rfc3746.
[Yeganeh et al. 2013] Yeganeh, S., Tootoonchian, A., and Ganjali, Y. (2013). On scala-
bility of software-defined networking. Communications Magazine, IEEE, 51(2):136–
141.

[YuHunag et al. 2010] YuHunag, C., MinChi, T., YaoTing, C., YuChieh, C., and YanRen,
C. (2010). A novel design for future on-demand service and security. In 12th IEEE
Int. Conf. on Communication Technology (ICCT), pages 385–388.
