
WHITE PAPER

A Software Defined
Network Architecture for
Transport Networks
Introduction
While SDN appears to have the potential to simplify
networks at the IP and Ethernet layers, service providers
have large capital and operational investments in their
transport networks as well. In order for SDN to be truly
useful in multi-domain, multi-vendor and multi-layer
networks, it needs to extend its control to include the
emerging next-generation converged optical transport
layer. This paper reviews the necessary capabilities required
for SDN-ready transport platforms, an open software
approach to implementing Transport SDN and the key
benefits to Service Providers of this new architecture.
A Software Defined Network (SDN) is a new architectural approach that leverages the availability of massive compute resources now found in data centers around the globe. SDN not only separates the control plane from the data plane, but fundamentally moves the control plane off of discrete devices and into a high-powered data center compute environment in order to provide a centralized view and control of the network. This new centralized control plane runs inside a software entity known as an SDN controller, which can be thought of as a sort of middleware. It runs the algorithms that calculate the forwarding path for each flow of packets and communicates this forwarding information to the forwarding information base (FIB) of the network devices below. It simultaneously supports applications running on top of application programming interfaces (APIs) so that the applications can discover and request network resources, either controlled by operations staff or in a fully automated fashion. While this concept of centralized control has been explored before, what is different this time is the advent of nearly unlimited virtualized compute horsepower to perform the calculations required for centralized control of a large-scale network, combined with simple Web 2.0 application programming constructs and APIs that make it easier to develop applications.

Data Plane
This is the pathway through which packets will flow, entering the router or switch at the ingress port, such as 10 Gigabit Ethernet, and leaving the switch at the egress port. Each device has a forwarding information base (FIB) table that contains at least two columns: a "match" field, which is a simple binary pattern that the switch uses to identify packets, and an "action" field, such as "forward this packet out the egress port" for this match event.

Control Plane
A control plane is effectively the set of algorithms and protocols used to calculate and populate the FIB, and thereby set up the data plane paths for a given packet. For switches the control plane is typically Ethernet learning: when the switch sees an unknown packet it broadcasts the packet ID to all switches. Routers use routing protocols such as OSPF or BGP.
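To make the match/action model in these definitions concrete, here is a minimal Python sketch of a FIB as a list of match/action entries, populated by a centralized controller rather than by on-device protocols. All class and field names are hypothetical and purely illustrative; real FIBs match binary header patterns in hardware.

    # Illustrative sketch only, not any specific SDN controller or protocol.

    class ForwardingEntry:
        def __init__(self, match, action):
            self.match = match      # e.g., {"dst_prefix": "10.1.0.0/16"}
            self.action = action    # e.g., {"forward": "port-3"}

    class Device:
        def __init__(self, name):
            self.name = name
            self.fib = []           # the forwarding information base

        def install(self, entry):
            self.fib.append(entry)

    class Controller:
        """Centralized control plane: computes the path for a flow once,
        then pushes the resulting FIB entries to every device on it."""
        def __init__(self, devices):
            self.devices = devices

        def program_flow(self, match, hops):
            # hops: (device, egress_port) pairs along the computed path
            for device, port in hops:
                device.install(ForwardingEntry(match, {"forward": port}))

    a, b = Device("A"), Device("B")
    Controller([a, b]).program_flow({"dst_prefix": "10.1.0.0/16"},
                                    [(a, "port-3"), (b, "port-1")])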
Router and Switch Data Plane and Control Plane Design
Routers and switches initially had control and data planes integrated, but in the late 1990s, to provide Internet scale, vendors separated the data plane to run on custom ASICs and the control plane algorithms to run on relatively low-powered compute modules integrated into the device. In spite of this separation, the control plane remained integrated on each device and thus distributed. These devices were purposefully designed to be autonomous and therefore required individual configuration. Therefore even these "modern" control planes do not have a view of the entire network when making path computation decisions.

Figure 1: Definition of SDN from the Open Networking Foundation white paper "Software-Defined Networking: The New Norm for Networks." The application layer (business applications) connects through APIs to the control layer (SDN control software and network services), which in turn connects through a control-data plane interface (e.g., OpenFlow) to the network devices of the infrastructure layer.

Today SDN is focused on devices that route and switch packets at OSI Layers 2 and above; however, there is now an opportunity to extend SDN to transport. This extension can help SDN showcase its real power by managing multiple elements from multiple vendors across multiple layers of the network, including the IP layer, the OTN switching layer and the optical transmission layer. This centralized control and the ability to consider the entire network state across multiple devices and layers when making network decisions is one of the key value propositions of Transport SDN.

The Emergence of SDN

Several years ago a number of university research groups began to experiment with the idea of
disabling the distributed control plane in campus Ethernet switches so that the flow of packets
through these switches could be managed instead via centralized control. The idea was to be
able to quickly simulate the setup of experimental “slices” of the campus production network
and, once verified, to automatically configure all devices required to instantiate those slices. The
SDN approach worked, allowing quick and reliable deployment of experimental slices on the
production network, alleviating time-consuming and error-prone device-by-device configuration
tasks that could take down production traffic. OpenFlow was one of several “SDN protocols” that
sprang out of this effort, and while OpenFlow does not equal SDN, it has gained prominence.

The next stop for SDN was the data center. The emergence of Virtual Machines (VMs) resulted in
improved server efficiency but also increased complexity with tens of VMs per server and thus a
massive multiplication of network connections. VMs are connected to other VMs or users, and as
VMs were spawned, retired, and moved from server to server, changing network configurations
on each switch in the data path started to become unmanageable. Centralized SDN controllers
have demonstrated value by analyzing the entire data center network as an abstracted and
virtualized representation, simulating any network changes ahead of time and then automatically
configuring multiple switches when the change is activated.
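A hedged sketch of that simulate-then-activate pattern, assuming the network is modeled as a simple adjacency table; the reachability check here stands in for whatever invariants a real controller would verify before touching production switches.

    import copy

    def reachable(topology, src):
        # Return the set of nodes reachable from src in an adjacency dict.
        seen, frontier = {src}, [src]
        while frontier:
            node = frontier.pop()
            for nbr in topology.get(node, []):
                if nbr not in seen:
                    seen.add(nbr)
                    frontier.append(nbr)
        return seen

    def apply_change_safely(topology, change, required_nodes):
        # Simulate the change on a copy first; activate it on the real
        # network only if the reachability invariant still holds.
        trial = copy.deepcopy(topology)
        change(trial)
        if not required_nodes <= reachable(trial, "core"):
            raise RuntimeError("change would strand traffic; aborting")
        change(topology)

    net = {"core": ["tor1", "tor2"], "tor1": ["vm1"], "tor2": ["vm2"]}
    apply_change_safely(net,
                        lambda t: t["tor1"].append("vm3"),
                        {"vm1", "vm2"})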

More recently, architects have been experimenting with SDN in the wide area network (WAN).
The most well-known experiment to date is the deployment of central SDN control on Google’s
G-scale network, along with custom-built OpenFlow switches connecting its 12 data centers
worldwide.

Figure 2: Google's OpenFlow WAN, from Google's presentation on deploying SDN on its G-scale network, as presented by Urs Hoelzle, SVP at Google, at the April 2012 OpenFlow Summit.

Google realized many benefits from this centralized network control, including:

• Faster and more deterministic convergence compared to a distributed control plane
• Increased operational efficiency, with links running at 95% utilization vs. 25%
• Rapid innovation supported by simulating the network in software before production service deployment

While Google’s G-scale network does not have the same amount of legacy equipment or
constraints as a traditional service provider’s, this is still an impressive accomplishment. What
Google did not do in this network experiment, however, was enable a dynamic SDN-controlled
transport layer. All of this packet traffic was carried over optical links using a static muxponder
approach.

Transport SDN: Extending SDN for Multi-Layer Control

While SDN appears to have the potential to simplify networks at the IP and Ethernet layers,
service providers have large capital and operational investments in their transport networks as
well. The real challenge is that operators need to efficiently run multi-layer, multi-vendor and even multi-domain networks, and no solution that coordinates and optimizes across these various layers has been successfully deployed in production service provider networks today. What typically happens in a multi-layer scenario, therefore, is that all traffic is required to pass through a router; if the router needs more capacity at the transport layer, a set of manual communications and processes must take place between the data department and the transport department, which is time-consuming, inefficient and costly.

Figure 3: Service provider networks are multi-layer, multi-vendor and multi-domain; for example, an IP network built from Vendor X and Vendor Y equipment riding on a transport network built from Vendor W and Vendor Z equipment.

Figure 4: SDN enables an abstract view of a multi-layer network (an IP routing layer, a switching layer and an optical layer, with nodes A through I) as a single, flat representation.

By extending SDN to transport, this management across layers and vendors can be automated, and all layers can be exposed as network resources to applications running on top of the SDN controller. SDN can represent the reality of a multi-layer network as a single virtualized abstract view of the network, sometimes referred to as an overlay network (see Figure 4). This allows service provider operations staff, or applications directly, to simply request a connection from point A to point B with bandwidth X and quality of service Y, without having to know all of the vagaries and specifics of the multiple layers. All of the resources in the network, at all layers, are now treated as pools that can be shared by any service or application.
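As a sketch of what such a request might look like against a controller's north-bound API (the field names below are invented for illustration; the paper does not define a concrete API):

    import json

    # Hypothetical north-bound intent: the application states only the
    # endpoints, bandwidth and quality of service; the controller maps
    # the request onto the virtualized multi-layer resource pools.
    request = {
        "endpoint_a": "point-A",
        "endpoint_b": "point-B",
        "bandwidth_gbps": 10,
        "qos_class": "premium",
    }
    print(json.dumps(request, indent=2))
    # A real deployment might POST this to something like /connections
    # and let the controller choose the layers and the path.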

The SDN controller, looking at this pool of virtualized resources across all layers, can calculate
the most cost-effective path to satisfy the constraints, simulate the change before it goes into
production and then automatically set up the service across multiple devices with little or no
human intervention. For example, the most cost-effective approach for a particular transport
service request may be to switch traffic at the transport layer so it does not even pass through a
core router. In another case it may be to send data through the core router and then to feed that
flow to the transport network. There may be a situation in which an application requests router
bandwidth, the router in turn needs more optical bandwidth to complete the provisioning of a
service, and the SDN controller automatically allocates that optical capacity.
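A minimal sketch of that multi-layer decision, assuming a graph in which every hop carries an abstract cost and transiting a core router is priced higher than switching at the transport layer; real controllers use far richer cost models, but the principle is the same.

    import heapq

    def cheapest_path(links, src, dst):
        # Dijkstra over a multi-layer graph: nodes may be routers,
        # OTN switches or optical elements; edges carry abstract costs.
        queue, seen = [(0, src, [src])], set()
        while queue:
            cost, node, path = heapq.heappop(queue)
            if node == dst:
                return cost, path
            if node in seen:
                continue
            seen.add(node)
            for nbr, edge_cost in links.get(node, []):
                heapq.heappush(queue, (cost + edge_cost, nbr, path + [nbr]))
        return None

    # Illustrative costs: transiting the core router is expensive (5),
    # switching at the OTN transport layer is cheap (1 per hop).
    links = {
        "A": [("core-router", 5), ("otn-1", 1)],
        "core-router": [("B", 5)],
        "otn-1": [("otn-2", 1)],
        "otn-2": [("B", 1)],
    }
    print(cheapest_path(links, "A", "B"))
    # -> (3, ['A', 'otn-1', 'otn-2', 'B']): the optical express path wins.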

Furthermore, there is now an opportunity for the network to better utilize multi-layer resiliency,
including shared mesh protection (SMP) available at the transport layer to further improve the
reliability and optimization of network resources. This could support lower latency restoration
paths and also reduce router over-provisioning due to non-deterministic failure scenarios.

These sorts of use cases are only the tip of the iceberg of the potential of extending SDN to the transport layer, delivering a multi-layer, multi-vendor and multi-domain solution. The key benefits of achieving this would be:

• Lower cost networks
• More efficient use of optical transport and router resources
• Faster provisioning of services and applications
• Reduction in operating expense across multiple layers
• Increased reliability with more deterministic and faster protection

Needed: SDN-Ready Transport Solution

In order to extend SDN to the transport layer, the platforms and solutions that are used to
build this layer must be able to virtualize resources, such as optical wavelengths, into pools of
resources and be fundamentally controllable by software. Infinera has implemented an approach unique in the industry, called Infinera Bandwidth Virtualization™, which does just that at the
transport layer. By leveraging the photonic integrated circuit (PIC) to provide massive DWDM
capacity combined with a large non-blocking 5 Tb/s OTN switch and an intelligent software
control plane, Bandwidth Virtualization allows point-and-click provisioning of “client-side”
services into pools of 100G optical “line-side” capacity. The OTN switch slices up the optical
resources into granular 1 Gb/s chunks and the software control plane represents each unit of
capacity as “available” or “in use” between any two neighboring nodes, hiding the complexity of
the actual underlying wavelengths.
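A hedged sketch of that bookkeeping, assuming a pool of line-side capacity sliced into 1 Gb/s units between a pair of neighboring nodes; the class is illustrative, not Infinera's actual control-plane implementation.

    class CapacityPool:
        """Tracks units of line-side capacity between two neighboring
        nodes, hiding the complexity of the underlying wavelengths."""
        def __init__(self, node_a, node_b, total_gbps):
            self.link = (node_a, node_b)
            self.total = total_gbps
            self.in_use = 0

        def allocate(self, service_gbps):
            # Mark capacity "in use" for a client service of any size.
            if self.in_use + service_gbps > self.total:
                raise ValueError("insufficient capacity on %s-%s" % self.link)
            self.in_use += service_gbps

        @property
        def available(self):
            return self.total - self.in_use

    # Mirrors Figure 5: 200 Gb/s total with 115 Gb/s in use.
    pool = CapacityPool("node-1", "node-2", 200)
    pool.allocate(115)
    print(pool.available)   # -> 85 Gb/s still available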

Figure 5: Bandwidth Virtualization leverages an integrated WDM and OTN switch solution with GMPLS to create a pool of optical capacity (in this example, 200 Gb/s total with 115 Gb/s in use) that can flexibly and efficiently accept any client service.

Figure 6: The old way: a rigid muxponder architecture in which client-side and line-side functions are hardwired together on each card, with no switching and no programmability on the client or line side. The new way: Infinera's modular architecture, with client-side and line-side modules in universal slots, 500G transmit and receive PICs, and a 5 Tb/s any-to-any switch fabric that is fully programmable. The ability to switch any transport client service across a switch fabric into any optical wavelength is a key design principle to support Bandwidth Virtualization and an SDN-ready transport architecture.

In contrast, most competing DWDM solutions today are highly static in nature, based on
transponder and muxponder approaches. Muxponders offer a client-side service port (e.g.
1x100GbE or 10x10GbE connected to a router or a switch port) that is in turn hardwired to a
line-side wavelength (e.g. 100G waves) inside a line card module (see Figure 6). This hardwired
approach results in many manual operations and engineering processes to deploy new services,
including a need to track each service that is mapped into specific wavelengths between every
pair of nodes as well as significant wavelength-by-wavelength engineering. Cascading from this
are requirements for truck rolls, manual patching and many additional operational costs.

Figure 7: Ovum forecast for the transport services mix in 2017, showing that the majority of client-side service demands remain below 100 Gb/s, with 10 Gb/s and below dominating for many years (sub-10G, 10G, 40G and 100G service ports; source: Ovum, 2012).

This hardwired muxponder approach also results in a challenge with the growing divergence between the wavelength bitrate, now at 100 Gb/s and moving toward 500 Gb/s and 1 Tb/s super-channels, and the client services, which will remain at 10 Gb/s or less for many years, as forecast by Ovum. This creates an impedance mismatch between client-side 10 Gb/s services and line-side 100 Gb/s optical capacity that results in tremendous inefficiencies with a muxponder-based approach. ROADMs can help with network automation; however, they can only redirect photons and therefore switch entire wavelengths in 100 Gb/s increments. They do not have the electrical switching capability required to access sub-lambda transport services at 1 Gb/s, 2.5 Gb/s and 10 Gb/s and therefore cannot solve the impedance mismatch.
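A back-of-the-envelope illustration of the mismatch, with invented demand figures: hardwiring each wavelength to a single node pair strands most of its capacity, while an OTN switch can groom the same services into shared wavelengths.

    import math

    # Invented demand: sub-wavelength services between three node pairs.
    demand_gbps = {"A-B": 30, "A-C": 10, "A-D": 20}
    wave_gbps = 100

    # Hardwired muxponders: a dedicated wavelength per node pair,
    # however lightly each pair fills it.
    waves_hardwired = sum(math.ceil(d / wave_gbps)
                          for d in demand_gbps.values())

    # OTN-switched pool (e.g., grooming through a hub at A): services
    # share wavelengths, so only the total demand matters.
    waves_pooled = math.ceil(sum(demand_gbps.values()) / wave_gbps)

    fill = sum(demand_gbps.values()) / (waves_hardwired * wave_gbps)
    print(waves_hardwired, waves_pooled, round(fill, 2))   # -> 3 1 0.2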

Bandwidth Virtualization solves the general problem of the impedance mismatch between client services and line-side transport, and also provides the foundation for SDN programmability.

In addition to Bandwidth Virtualization, three key functions that can be exposed to an SDN controller are Infinera Instant Bandwidth™, Infinera FlexCoherent™ and all-optical ROADMs. Instant Bandwidth combines Infinera's PIC-based 500 Gb/s super-channels with DNA enhancements that allow providers to software-activate 100 Gb/s increments of line-side bandwidth on the same day that there is a revenue-generating service requiring the incremental capacity. This function could be exposed to an SDN controller that is either pre-authorized or prompts a network operations specialist when an additional 100G of capacity is required to support a new revenue-generating service. Furthermore, Infinera's FlexCoherent Processor supports software-controllable modulation selection, such as QPSK or BPSK, that could be exposed to SDN control, enabling customers to optimize for reach and fiber capacity. Finally, a ROADM function could be implemented to automate optical express paths in cases where 100G, 500G or 1T optical super-channels are sufficiently filled that it warrants bypassing the digital switching function and staying completely at the optical layer with all-optical switching.
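How these functions might surface toward a controller is sketched below as a hypothetical device-side interface; the method names and the pre-authorization flag follow the paragraph above but are not an actual Infinera API.

    class TransportDeviceAPI:
        """Hypothetical programmable functions a transport device
        could expose to an SDN controller (all names invented)."""

        def activate_instant_bandwidth(self, increments_100g, authorized):
            # Software-activate line-side capacity in 100 Gb/s increments,
            # either pre-authorized or after prompting an operator.
            if not authorized:
                raise PermissionError("operator approval required")
            return "activated %d x 100G" % increments_100g

        def set_modulation(self, mode):
            # FlexCoherent-style choice: trade reach vs. fiber capacity.
            if mode not in ("QPSK", "BPSK"):
                raise ValueError("unsupported modulation: %s" % mode)
            return "modulation set to " + mode

        def set_optical_express(self, channel, enabled):
            # ROADM-style bypass of the digital switch for well-filled
            # super-channels, staying entirely at the optical layer.
            return "express %s for %s" % ("on" if enabled else "off", channel)

    dev = TransportDeviceAPI()
    print(dev.activate_instant_bandwidth(1, authorized=True))
    print(dev.set_modulation("QPSK"))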

Figure 8: In the status quo, IP, MPLS, OTN, SDH/SONET and DWDM operate as separate, uncoordinated layers. In Phase 1, an SDN controller manages IP/MPLS over an integrated OTN/WDM layer; there is significant value in an SDN-controlled OTN/WDM solution, and this is enhanced as lean MPLS capabilities are integrated into a converged MPLS/OTN/DWDM layer in Phase 2.

Packet Optical Integration

The value proposition of Transport SDN is further strengthened as lean packet capabilities are
integrated into a Packet Optical Transport Network (P-OTN) platform. Infinera’s DTN-X solution
today provides a PIC-based 500 Gb/s super-channel WDM function with up to 8 Tb/s of capacity
per fiber integrated with a 5 Terabit-per-bay non-blocking OTN switch in which, because of PICs,
both functions can operate at full capacity without compromise. With a full complement of client
interfaces from 1 GbE to 100 GbE, the OTN switching function provides a granular bandwidth
management function which is virtualized and presented to the SDN controller through Bandwidth
Virtualization, enabling a programmable transport infrastructure that can be efficiently managed by
SDN. In the future, MPLS packet switching will be integrated into the DTN-X platform, providing a
lean MPLS label switch router (LSR) function for additional bandwidth management efficiencies. This
function could be exposed to SDN control as well for further network optimization (see Figure 8).

Implementing Transport SDN

Infinera is providing leadership and driving collaboration with many other stakeholders on an
emerging approach to Transport SDN called the Open Transport Switch (OTS). The concept behind
OTS is that of a virtual software transport switch that can run either in a central data center or on
top of one or more converged WDM/OTN switching devices as long as those devices support
Bandwidth Virtualization or a similar abstraction of the optical wavelengths. The OTS connects
to any SDN controller via the OpenFlow protocol. It also includes some extensions to the SDN controller, using well-known Web 2.0 protocols, to support topology and resource discovery as well as alarm and service monitoring, which are critical to manageable, carrier-class transport networks
(see Figure 9).

Figure 9: Services and applications (#1 through #N) run on top of an SDN controller, which connects southbound to OTS instances running on WDM/OTN/MPLS and Ethernet devices. OTS is software that exposes the programmable functions of a transport device with Bandwidth Virtualization capabilities.

OTS has three key functions:

• Resource Discovery: this interface enables the applications running on the SDN controller to gather network topology, resource information and system capabilities. Based on this information, the SDN controller may configure the transport system to set up tunnels, activate Instant Bandwidth, etc.
• Monitoring: SDN in transport requires carrier-class functions around monitoring services
and network alarms. Through this connection, network run-time conditions are passed to the
applications.
• Provisioning: this enables the applications to configure devices in order to set up or change traffic connections. The default protocol is OpenFlow, although some further extensions are required for transport (see the sketch following this list).
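One way to picture the three functions above is as an abstract interface that an OTS instance implements toward the SDN controller. This is a sketch of the division of labor only, not actual OTS code or the OpenFlow wire protocol.

    from abc import ABC, abstractmethod

    class OpenTransportSwitch(ABC):
        """Sketch of the three OTS functions described above."""

        @abstractmethod
        def discover(self):
            """Resource Discovery: return topology, resource and
            capability information for the controller's applications."""

        @abstractmethod
        def monitor(self):
            """Monitoring: pass carrier-class run-time conditions
            (service state, network alarms) up to the applications."""

        @abstractmethod
        def provision(self, connection):
            """Provisioning: set up or change a traffic connection,
            by default via OpenFlow with transport extensions."""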

OTS provides flexibility in how networks can be architected using SDN, including the ability to
leverage existing control plane capabilities like GMPLS in combination with SDN control. There
are three models that are possible in a Transport SDN scenario using OTS:

• Implicit Path Setup—edge nodes run OTS and interact with the SDN controller, which specifies
the path end points, bandwidth and QoS parameters, and the path is set up using an existing
control plane like GMPLS, so the network between the endpoints appears as a cloud.
• Explicit Path Setup—every transport node in the network is running OTS. The SDN controller
has visibility to every node in the network and explicitly specifies the path through each node
using OpenFlow or a similar protocol.
• Heterogeneous Path Setup—a network that uses a mix of Implicit Path Setup and Explicit Path
Setup.
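The practical difference between the first two models above is which nodes the controller must touch, as the following hedged sketch shows (the node names and controller stub are invented):

    class StubController:
        """Stand-in for an SDN controller's south-bound interface."""
        def provision(self, node, request):
            print("provision %s: %s -> %s" %
                  (node, request["ingress"], request["egress"]))

    def implicit_setup(ctrl, request):
        # Implicit: program only the edge nodes; an existing control
        # plane such as GMPLS signals the path through the interior,
        # which appears to the controller as a cloud.
        for edge in (request["ingress"], request["egress"]):
            ctrl.provision(edge, request)

    def explicit_setup(ctrl, request, full_path):
        # Explicit: every node runs OTS, so the controller specifies
        # the path hop by hop (OpenFlow or a similar protocol).
        for node in full_path:
            ctrl.provision(node, request)

    req = {"ingress": "A", "egress": "F", "bandwidth_gbps": 10}
    implicit_setup(StubController(), req)                        # A and F only
    explicit_setup(StubController(), req, ["A", "C", "E", "F"])  # every node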

Figure 10: Implicit and explicit path setup models available with the Open Transport Switch approach. In implicit path setup, only the edge nodes (A and F) run OTS and are provisioned by the SDN controller; GMPLS sets up the path across the intervening LSR, P-OTN and Ethernet nodes. In explicit path setup, every node (Ethernet switches and OTN/MPLS/DWDM elements) runs OTS and is provisioned by the controller.

Fundamentally, once you have a centralized SDN controller with a view of the entire network, the
controller can now make path setup decisions based on the needs of the applications running on
top of the controller. These applications, whether completely automated or run by humans, will
be able to request a path across the network and the SDN controller will deliver the most cost-
optimal result at the right layer with full automation.

Transport SDN: A Summary

In order for SDN to be truly useful in multi-domain, multi-vendor and multi-layer networks, it
needs to extend its control to include the emerging next-generation converged optical transport
layer, where integrated switching adds substantial network value and has significant impact
on overall network architecture, including what happens at higher layers. Extending SDN to
transport requires a hardware and software architecture that supports an ability to abstract optical
wavelengths into pools of optical capacity and to be able to map any service from 1 GbE up to
100 GbE into those pools. Infinera’s PIC-based super-channels create large, efficient pools of
DWDM bandwidth that are combined with 5 Tb/s of OTN switching and an intelligent control
plane to support Bandwidth Virtualization and provide this abstraction.

Leveraging this programmable, automated and open approach has clear benefits:

• Network scalability: deploy optical bandwidth quickly to support routers and other application demands.
• Efficient resource utilization: optimize the path taken through the network based on application requirements and establish more deterministic failover scenarios to minimize overprovisioning.
• Lower operating expense: automate the network across layers, eliminating device-by-device configuration at the router layer and coordinating with the transport layer.
• Speed to revenue: allow the network and any changes to be simulated in software ahead of time and then, once finalized, have that configuration rapidly pushed to the network.

Infinera Corporation
140 Caspian Court
Sunnyvale, CA 94089 USA
Telephone: +1 408 572 5200
Fax: +1 408 572 5454
www.infinera.com

Have a question about Infinera’s products or services?


Please contact us via the email addresses below.

Americas: sales-am@infinera.com
Asia & Pacific Rim: sales-apac@infinera.com
Europe, Middle East,
and Africa: sales-emea@infinera.com
General E-Mail: info@infinera.com

www.infinera.com

Specifications subject to change without notice.

Document Number: WP-SDN-.11-2012


© Copyright 2012 Infinera Corporation. All rights reserved.
Infinera, Infinera DTN™, IQ™, Bandwidth Virtualization™, Digital Virtual Concatenation™ and
Infinera Digital Optical Network™ are trademarks of Infinera Corporation
