COMMUNICATIONS SYSTEM DESIGN

A Novel Stateful PCE-Cloud Based Control Architecture of Optical Networks for Cloud Services

Qin Panke1,2*, Chen Xue1, Wang Lei1, Wang Liqian1
1 State Key Lab of Information Photonics and Optical Communications, Beijing University of Posts and Telecommunications, Beijing 100876, China
2 College of Computer Science and Technology, Henan Polytechnic University, Jiaozuo, Henan 454000, China

Abstract: The next-generation optical network is a service-oriented network, which can be delivered by utilizing the generalized multi-protocol label switching (GMPLS) based control plane to realize many intelligent features such as rapid provisioning, automated protection and restoration (P&R), efficient resource allocation, and support for different quality of service (QoS) requirements. In this paper, we propose a novel stateful PCE-cloud (SPC) based architecture of GMPLS optical networks for cloud services. Cloud computing technologies (e.g., virtualization and parallel computing) are applied to the construction of the SPC to improve reliability and maximize resource utilization. The functions of the SPC and the GMPLS based control plane are expanded according to the features of cloud services with different QoS requirements. The architecture and a detailed description of the components of the SPC are provided. Different potential cooperation relationships between the public stateful PCE cloud (PSPC) and the region stateful PCE cloud (RSPC) are investigated. Moreover, we present a policy-enabled and constraint-based routing scheme based on the cooperation of the PSPC and RSPC. Simulation results verifying the routing performance and control plane reliability are analyzed.

Keywords: optical networks; control plane; GMPLS; stateful PCE; cloud computing; QoS

I. INTRODUCTION

Strongly promoted by the leading industrial companies, cloud computing has become increasingly popular in recent years [1]. Cloud computing is Internet-based computing, where shared resources (e.g., infrastructure, platform, software, data, etc.) are provided to users on demand, like a public utility [2]. In addition, the emerging cloud computing applications are significantly different from conventional Internet applications. For example, cloud computing applications can differ with respect to the granularity of traffic flows and traffic characteristics such as the required data transaction bandwidth, acceptable delay and packet loss [3]. Therefore, cloud computing networks not only need to satisfy the elastic resource requirements of new applications but also need to support different quality of service (QoS) requirements. The typical application scenarios of cloud networks are data center to data center and end user to data center connectivity [4-5], depicted in Fig.1. Based on a scan of the aforementioned application scenarios, we have identified the requirements for cloud computing networks, which should provide functionality for efficient management of computational, storage and network resources. Meanwhile, such networks should be designed to address the following challenges: on-demand setup, response time and latency sensitivity, scalability, dynamic provisioning and reconfiguration, robustness and security [5].



In practice this means that the cloud network infrastructure has to be multi-service, able to support several types of traffic with different requirements in terms of QoS.

Fig.1 Cloud computing services networks

Optical networks enhanced with generalized multi-protocol label switching (GMPLS) based protocols and path computation elements (PCE) offer the opportunity to improve the efficiency and flexibility of network control and management [6]. Meanwhile, the ability to transfer huge data volumes with low latency has made optical networks the de facto standard to connect the data centers that provide computing and storage services in cloud computing networks [5]. The optical network control plane based on GMPLS is composed of a set of communication and control network element entities [7]. It can realize connection establishment, release, monitoring and maintenance. In addition, it can realize optical network restoration, routing control, signaling protocols, resource management and policy control. Consequently, GMPLS optical networks are able to meet the dynamic and ultra-high bandwidth requirements of stringent distributed applications delivered from the datacenters and end users.

Recently, much prior work has also focused on GMPLS control plane technology for providing more flexible network control and management. The proposed hierarchical PCE architecture (HPA) [8, 9] represents an attractive solution to enable both effective domain sequence computation and optimal end-to-end path computation. Zhang et al. [10-13] proposed the dual routing engine (DRE) PCE architecture, which can not only employ distributed control approaches to realize fast routing and path establishment but also provide centralized path computation capabilities to accomplish effective resource allocation and routing optimization with multiple constraints. Software defined networking (SDN) with the OpenFlow protocol is another good choice for constructing cloud computing networks: it allows operators to control the network using software running on a network operating system within an external controller, provides maximum flexibility for the operator to control the network, and matches the carrier's preferences given its centralized architecture, simplicity and manageability [14]. However, in our opinion, the stateful PCE and GMPLS control plane is more suitable for cloud computing networks, because it has experienced a decade of development and standardization effort and, for the moment, is more mature than SDN/OpenFlow for optical networking.

However, as optical networks develop toward larger scale, wider coverage and multiple domains, the conventional control plane, with its limited resource provisioning ability and poor reliability, makes policy-enabled and constraint-based path computation more and more difficult [15]. Moreover, the conventional control plane cannot provide flexible and scalable functions for the different QoS requirements of cloud services.

The main contributions of this paper are as follows:

1) We propose a novel stateful PCE-cloud (SPC) based architecture of GMPLS optical networks, which is realized by centralizing the distributed stateful PCEs in multiple neighboring domains to form a stateful PCE datacenter providing powerful computing and storage capability. We expand the functions of the traditional GMPLS control plane according to the different QoS requirements of cloud services. In addition, we provide the components and features of the three-layer architecture of the SPC. Different potential cooperation relationships between the public stateful PCE cloud (PSPC) and the region stateful PCE cloud (RSPC) are investigated. Meanwhile, we provide the routing scheme based on the PSPC and RSPC.



2) We apply cloud computing technologies (e.g., virtualization, parallel computing and shared storage) to the SPC to provide elastic and on-demand resource provisioning. This realizes not only optimal policy-enabled and constraint-based path computing but also effective resource allocation, dynamic configuration and maximal utilization.

The rest of this paper is organized as follows. The proposed SPC architecture is given in Section II. The routing scheme based on the cooperation of the PSPC and RSPC is presented in Section III. The network performance is simulated in Section IV. Section V concludes the paper.

II. STATEFUL PCE-CLOUD BASED ARCHITECTURE OF GMPLS OPTICAL NETWORKS

2.1 Architecture of stateful PCE-cloud

In this section, we provide an overall description of the proposed SPC architecture, which is the improved centralized control plane of GMPLS optical networks for cloud services. Next, we provide a detailed description of the components of the PSPC and RSPC. Moreover, we interpret the cooperation relationships between the PSPC and RSPC.

The SPC can be logically viewed in three layers, depicted in Fig.2. It is composed of the conventional control plane with path computing element agents (PCEAs), the PSPC and the RSPC. In this architecture the conventional distributed PCEs in multiple domains are replaced by PCEAs. The conventional control plane with PCEAs offers general routing functions including link state advertisement, topology update and so on. The PCEA is the communication agent of the label switching routers (LSRs) in the interior of a domain; it delivers the functions of communication between the path computation client (PCC) and the RSPC with the path computation element protocol (PCEP). It collects the inter-domain information regarding traffic engineering, network state, and the set of active paths and their reserved resources from the PCCs of the LSRs. Moreover, it periodically notifies the RSPC of the collected information.

Fig.2 Logical view of SPC and GMPLS-based optical networks
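To make the PCEA's reporting role concrete, the following is a minimal, hypothetical sketch (not the paper's implementation) of a PCEA that aggregates the TED/LSPD state collected from its local PCCs and periodically pushes it to its RSPC; the message fields, the PCC accessor methods and the RSPC-side notify_state call are illustrative assumptions.

```python
import time
from dataclasses import dataclass, field

@dataclass
class DomainState:
    """Snapshot a PCEA reports to its RSPC (fields are illustrative)."""
    domain_id: str
    te_links: dict = field(default_factory=dict)     # link id -> TE attributes
    active_lsps: dict = field(default_factory=dict)  # lsp id -> reserved resources

class PCEA:
    """Hypothetical PCE agent: collects state from local PCCs, reports to the RSPC."""
    def __init__(self, domain_id, pccs, rspc_client, report_interval_s=30):
        self.state = DomainState(domain_id)
        self.pccs = pccs                  # PCC handles of the LSRs in this domain
        self.rspc = rspc_client           # stands in for a PCEP session to the RSPC
        self.report_interval_s = report_interval_s

    def collect(self):
        # Merge the TE and LSP views advertised by each local PCC.
        for pcc in self.pccs:
            self.state.te_links.update(pcc.advertised_te_links())
            self.state.active_lsps.update(pcc.active_lsps())

    def run_forever(self):
        # Periodic notification loop: refresh the local view, then push it to the RSPC.
        while True:
            self.collect()
            self.rspc.notify_state(self.state)   # assumed RSPC-side API
            time.sleep(self.report_interval_s)
```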
The conventional distributed stateful PCEs in multiple domains are centralized and organized by cloud computing technology (e.g., virtualization and parallel computing) to form the RSPCs and the PSPC, centralizing computing and storage capacity. The PSPC and RSPCs cooperate with each other to respond to the path computing requests of end users. The RSPC is responsible for the path computing requests of its management domains. The PSPC assists the RSPC in accomplishing path computing when a routing request goes beyond the management domains of any single RSPC, or when a single RSPC with limited resources cannot meet the requirements of the requests.

The detailed architecture of the SPC is illustrated in Fig.3. The PSPC and the RSPCs are all comprised of a control plane manager (CPM), a path computing element manager (PCEM), task trackers (PCETT) and a centralized shared database (CSD).



The CPM is the manager of the SPC and provides functions of communication and system status monitoring. Moreover, the CPM is responsible for maintaining the synchronization of the TED and LSPD between the PCCs and the SPC. The hardware and software resources are pooled by the OpenStack module of the CPM. Different physical and virtual resources are dynamically assigned and reassigned by OpenStack according to the demands of the requests. The PCEM manages the tasks from users, performs map-reduce operations for parallel computing according to the specified routing policy and assigns computing tasks to PCETTs. The PCETT is the actual path computing unit. PCETTs collaborate with each other to process path computing requests through parallel computing technology. PCEM and PCETT run on virtual machines (VMs) dynamically allocated by OpenStack.

Fig.3 Functional architecture of SPC and GMPLS optical networks

2.2 Region stateful PCE cloud

In this subsection, as shown in Fig.4, we provide a detailed description of the RSPC, which is mainly comprised of the CPM, PCEM and PCETT, divided by major function.

Fig.4 Detailed function modules of RSPC

2.2.1 Control plane manager

The CPM is the internal management unit of the RSPC. The distributed PCE hardware resources are pooled by the OpenStack module of the CPM based on cluster technology. The virtualization of the SPC has two meanings. On one hand, the distributed hardware of the PCEs is centralized and pooled by the SPC into a unified resource that provides virtually unlimited computing capacity and transparent scalability; in this sense, the virtualization means many-to-one. On the other hand, the computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to user demands. There is a sense of location independence in that the user generally has no control over or knowledge of the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., a particular cloud computing datacenter); in this sense, the virtualization means one-to-many. Accordingly, the proposed architecture can provide elastic computing resources for users.



The CPM is comprised of six component modules, described as follows.

PCEA is the communication interface which offers the function of interacting with the inter-domain PCEAs and the PSPC via EPCEP. The messages include received path computing requests, sent path computing results, and the TED and LSPD.

OpenStack is the VM manager of the RSPC. It dynamically creates VMs with specified functions (e.g., PCEM and PCETT) from the resource pools according to the load of the applications or the special resource demands of end users. Conversely, it also destroys VMs according to the application load and recycles the resources back into the resource pools. In addition, it makes all PCE hardware servers work together as a whole based on cluster technology. Moreover, it is also responsible for efficiently multiplexing the hardware resources of a single server among VMs. Consequently, with the help of OpenStack, the RSPC pools the PCE resources as a whole to serve multiple users, with different physical and virtual resources dynamically assigned and reassigned according to the demands of the users.

The PCE calculation load monitor and PCE domain name system (DNS) regularly collect statistics on virtual PCE (vPCE) load information (e.g., CPU and memory occupancy) and announce them to the CPM for unified management (e.g., creating new VMs, destroying excess VMs, or migrating load between different hardware or vPCEs). Meanwhile, the PCE DNS function module is responsible for searching for and locating the proper vPCE for the CPM.

The routing policy choice and generator receives the routing policy tables from the manager plane and the PSPC. At the same time, it generates path computing policies for the PCEM and PCETT according to the routing policy tables.
ified functions (e.g., PCEM and PCETT) from
2.2.2 PCE manager
the resources pools according to the load of
applications or the special resources demands PCEM is the manager unit of RSPC for path
of end users. In contrast, it also destroys VMs computing tasks and PCETTs. As shown in
according to application load and recycling Fig 5, on one hand, it accepts the original path
resources back to resources pools. In addition, computing requests from PCEAs of different
it makes all PCE hardware servers work to- domains and divides the tasks to equivalent
gether as a whole base on cluster technology. fragments by mapping process according to
Moreover, it is also responsible for efficiently the workload of the tasks, or it combines the
multiplexes the hardware resources of a single calculation results from different PCETTs
server among VMs. Consequently, with the for the same tasks. On the other hand, it allo-
help of OpenStack that RSPC makes PCE re- cates the task fragments to multiple PCETTs
sources are pooled as a whole to serve multi- according to their load state. Moreover, it is
ple users with different physical and virtual re- responsible for notifying OpenStack to create
sources dynamically assigned and reassigned new PCTEE VMs from hardware resources
according to the demands of users. pool if PCETTs cannot satisfy path computing
PCE calculation load monitor and PCE requirements. In Fig 3, PCEM is comprised
domain name system (DNS) perform regular of three set modules. The component modules
statistical virtual PCE (vPCE) load informa- are described as following.

Fig.5 Task process flow chart of PCEM
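Under assumed data structures, the map/assign/reduce flow sketched in Fig.5 can be summarized roughly as follows; fragment_request, the PCETT interface and the reduce step are illustrative stand-ins, not the authors' implementation.

```python
# Hypothetical outline of the PCEM task flow in Fig.5: map a request into fragments,
# assign each fragment to the least-loaded PCETT, then reduce the partial results.

def fragment_request(request, n_fragments):
    """Map step: split one path-computation request into per-segment sub-tasks."""
    # e.g. one fragment per domain the candidate route traverses (illustrative).
    return [{"request": request, "segment": seg} for seg in range(n_fragments)]

def assign(fragments, pcetts):
    """Assign each fragment to the PCETT reporting the lightest current load."""
    plan = []
    for frag in fragments:
        target = min(pcetts, key=lambda p: p.load())
        plan.append((target, frag))
    return plan

def compute_path(request, pcetts):
    fragments = fragment_request(request, n_fragments=len(pcetts))
    partials = [pcett.compute(frag) for pcett, frag in assign(fragments, pcetts)]
    # Reduce step: concatenate per-segment results into the end-to-end LSP.
    return [hop for segment in partials for hop in segment]
```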



In Fig.3, the PCEM is comprised of three sets of modules, described as follows.

Controller includes the job manager (JM), Map/Reduce and load balancer (LB) modules. The LB regularly counts the load information of the PCEM and notifies the CPM for unified management. The JM is responsible for judging the type of a task (e.g., intra-domain or inter-domain path computing), updating the TDB and assigning tasks to PCETTs in cooperation with the LB.

Routing includes the policy parser (PP), path provisioning (PR), dynamic routing (DR) and bandwidth engine (BE) modules. The PP module deals with all kinds of routing policies, which is of great importance for solving special multi-constraint routing requests. The PR module calculates the offline routes for all foreseen connections according to a traffic matrix. The DR module evaluates the route for a single LSP request at a time, expressed in terms of source and destination nodes and bandwidth requirements. The BE module is invoked to make the bandwidth required by a higher-priority LSP available.

Interface includes the communication agent (CA), message parser (MP) and database agent (DA) modules. The CA is the external communication interface that interacts with the related PCEAs and the CPM of the PSPC. The MP is responsible for parsing the different messages from the CA and delivering them to the corresponding processing modules. The DA module is invoked to access the databases (e.g., TED, LSPD and TDB).

2.2.3 PCE task tracker

The PCETT is the path computing execution unit of the RSPC. It performs the path computing tasks allocated by the PCEM. In Fig.3, the PCETT is comprised of two sets of modules, described as follows.

Routing calculation includes the policy parser (PP) and path computing (PC) modules. The PP module deals with all kinds of routing policies, which is of great importance for solving multi-constraint routing. The PC module performs path computing according to the routing policy.

Interface includes the communication agent (CA), message parser (MP) and database agent (DA) modules, whose functions are the same as in the PCEM.

2.2.4 Shared storage

The shared storage is the centralized database of the RSPC. It stores the network topology (i.e., TE links and nodes), the network resource information (i.e., TE attributes), and the set of active paths and their reserved resources. It also maintains information regarding LSPs under construction in order to reduce churn and resource contention. Furthermore, it stores all the task information and states being processed by the RSPC. All PCEMs and PCETTs can access the shared storage through a unified interface.
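A minimal sketch of the kind of unified interface the centralized shared database could expose to PCEMs and PCETTs is given below; the method names and the in-memory backing store are assumptions made for illustration only.

```python
import threading

class CentralizedSharedDatabase:
    """Hypothetical CSD facade over the TED, LSPD and task database (TDB)."""
    def __init__(self):
        self._lock = threading.Lock()   # serialize writers to reduce resource contention
        self.ted = {}        # TE links/nodes and their attributes
        self.lspd = {}       # active LSPs and their reserved resources
        self.pending = {}    # LSPs under construction, tracked to limit churn
        self.tdb = {}        # tasks and their processing state

    def update_te_link(self, link_id, attrs):
        with self._lock:
            self.ted[link_id] = attrs

    def reserve_lsp(self, lsp_id, path, resources):
        # Record an LSP while it is being set up so concurrent computations see it.
        with self._lock:
            self.pending[lsp_id] = {"path": path, "resources": resources}

    def commit_lsp(self, lsp_id):
        with self._lock:
            self.lspd[lsp_id] = self.pending.pop(lsp_id)

    def record_task(self, task_id, state):
        with self._lock:
            self.tdb[task_id] = state
```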
2.3 Public stateful PCE cloud

The composition and structure of the PSPC differ from those of the RSPC in only one respect: the set of controller modules. The controller of the PSPC has a region stateful PCE cloud manager (RSPCM) for managing all of the RSPCs. The RSPCM is used for monitoring the load state of the RSPCs and assisting the RSPCs in performing path computing. The PSPC holds the TED and LSPD of the gateways, consisting of the boundary nodes (BNs) and inter-domain links, but does not hold the topology map of the connectivity of interior BNs. However, it can request and maintain partial intra-domain information when collaborating with RSPCs for fast cross-region path computing. Conversely, each RSPC is unaware of the overall topology outside its own control domains.

Moreover, the cooperation relationship between the PSPC and the RSPCs is a key issue of the SPC, which helps to achieve maximal resource utilization, load balancing, and policy-enabled and constraint-based path computing.

Table I Policy-enabled and constraint-based path computation instances
Load balancing: balance the traffic load of the whole network.
Max-min fair share: maximize utilization as long as further gain in utilization is not achieved by penalizing the fair share of applications.
Policy configured paths: centrally administer configured paths.
Provider selection policy: to be applied in multi-provider topologies.
Policy based constraints: provide constraints in a path computation request.


Firstly, the PSPC cooperates with the RSPCs to realize path computing across multiple regions. In addition, the PSPC provides global policy-enabled and constraint-based proposals for the RSPCs. In order to obtain the optimal routing path, the control plane should take the global network topology and the current traffic engineering into account. The PSPC is more suitable than the RSPCs for domain sequence selection and global routing policy generation. In contrast, the RSPC is more suitable than the PSPC for intra-domain path computing [16]. Consequently, the optimal end-to-end path computation should be obtained from the cooperation between the PSPC and the RSPCs. Table I lists some typical policy-enabled and constraint-based application instances that may be applied to the SPC architecture. Effective combinations of the above scenarios, as well as possible new scenarios, could occur in real networks.

Secondly, the PSPC delivers external computing and storage resources to the RSPCs and therefore enables the RSPCs to accommodate the bursty nature of the traffic.

Thirdly, the PSPC delivers security backup for the RSPCs. The cooperation mechanisms make it possible for the PSPC to provide backup for the RSPCs to enhance routing security and reliability, since both can satisfy the path computation demands from customers. The path computing tasks can be switched to the PSPC as soon as an RSPC fails.

Fourthly, the PSPC can cooperate with the RSPCs in multi-layer and multi-region mode; it is necessary for the RSPCs to collaborate with the PSPC when path computation requirements arise in multi-layer and multi-region optical networks. The PSPC can be shared among several regions or layers and make the best use of the inter-region and inter-layer network resources, so it is more suitable for inter-region or inter-layer path computation.

III. ROUTING SCHEME BASED ON SPC

In this section, we provide a detailed description of the routing scheme of SPC based GMPLS optical networks.

The routing procedure includes not only path computation but also policy and constraint awareness. All of them are accomplished by the collaboration of the PSPC and the RSPCs. On the one hand, the routing policies are generated by the manager plane and the path computation constraints are proposed by end users or services. Different policies are obtained by different parts of the control plane, which means that the PSPC obtains the global routing policies and the RSPCs obtain the region routing policies. In light of this, the PSPC is more suitable for global routing policy assignment and the RSPC is suitable for routing policy analysis and implementation.

Fig.6 Path computing procedure



On the other hand, the RSPC is responsible for the path computing of all the domains it controls, while the PSPC is responsible for the path computing requests that cross the control regions of multiple RSPCs. The dynamic path computing requests are sent to the RSPC by the PCEAs. The RSPC maintains the traffic information of its control domains. It can satisfy the path computing requests whose source and destination nodes are both in its control region. However, if the source and destination nodes are outside the control region of a single RSPC, the request is sent to the PSPC. The PSPC first calculates the sequence of RSPCs that the route crosses. Then the PSPC sends path computing requests to all of the RSPCs in the sequence in parallel. Finally, it is also responsible for assembling the segments into the whole path once it has received all of the segment results.
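The request-handling split described above can be summarized in a short Python sketch; the domain-membership test, the RSPC/PSPC client calls and the stitching step are hypothetical stand-ins for the actual PCEP exchanges, given only as an illustration.

```python
# Hypothetical dispatch logic: an RSPC serves requests confined to its own region;
# otherwise the PSPC computes the RSPC sequence, queries those RSPCs in parallel
# and stitches the returned segments into an end-to-end path.
from concurrent.futures import ThreadPoolExecutor

def handle_request(request, local_rspc, pspc):
    if local_rspc.controls(request.src) and local_rspc.controls(request.dst):
        return local_rspc.compute_path(request)          # intra-region case
    return pspc_compute(request, pspc)                   # cross-region case

def pspc_compute(request, pspc):
    # Ordered list of RSPCs whose regions the route is expected to cross.
    rspc_sequence = pspc.domain_sequence(request.src, request.dst)
    with ThreadPoolExecutor(max_workers=max(1, len(rspc_sequence))) as pool:
        segments = list(pool.map(
            lambda rspc: rspc.compute_segment(request), rspc_sequence))
    # Assemble the per-region segments into the whole path once all results arrive.
    return [hop for seg in segments for hop in seg]
```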
The messages exchanged between the different elements of the RSPC during path computation are displayed in Fig.6. It can be observed that the CPM is responsible for checking the different quality parameters of the deployed vPCEs (PCEM or PCETT). Once these quality parameters are received, the CPM is responsible for determining whether a new vPCE is required. If a vPCE is selected to be deployed, the OpenStack module deploys a new virtual machine with the vPCE image and assigns a new IP address to the vPCE; once the vPCE is started, the CPM notifies the load balancer and PCE DNS module of the new vPCE IP address. When a PCEA requires a new path computation, the CPM module first issues a DNS query to the load balancer & PCE DNS module. The load balancer & PCE DNS is responsible for balancing the load across the different vPCEs, so it returns a single IP address corresponding to one of the vPCEs. Finally, the path computation session is handled by the assigned PCEM and PCETTs.
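A hedged sketch of the selection step in Fig.6 follows: the load balancer & PCE DNS keeps the addresses of the started vPCEs and answers a CPM query with the address of the least-loaded one. The registry layout and method names are illustrative assumptions.

```python
import random

class PceDns:
    """Hypothetical load balancer & PCE DNS registry of deployed vPCEs."""
    def __init__(self):
        self.vpces = {}           # role -> {ip_address: last reported load}

    def register(self, role, ip_address):
        # Called by the CPM after OpenStack boots a vPCE VM and it comes up.
        self.vpces.setdefault(role, {})[ip_address] = 0.0

    def report_load(self, role, ip_address, load):
        self.vpces[role][ip_address] = load

    def resolve(self, role):
        """Return one IP address for the requested role, preferring low load."""
        pool = self.vpces.get(role, {})
        if not pool:
            raise LookupError(f"no vPCE of role {role} is deployed")
        lightest = min(pool.values())
        candidates = [ip for ip, load in pool.items() if load == lightest]
        return random.choice(candidates)   # break ties randomly
```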



IV. PERFORMANCE AND ANALYSIS

To evaluate the performance of the SPC architecture, we use the discrete event simulation tool OMNeT++ to develop the SPC based architecture of GMPLS optical networks. As shown in Fig.7, the simulated network topology contains 6 domains and 46 nodes, and each link contains 16 wavelengths. The simulation supposes that requests arrive between any node pair (s, d) following a Poisson process with arrival rate λ(s, d). The holding time of each call is exponentially distributed with unit mean. Four typical PCE architectures are selected for the comparison: DPA, HPA, DRE and SPC.

Fig.7 MUPBED network topology

In order to guarantee that the sum of the computing resources of all PCEs in the simulation is identical, we assign 16 PCE servers to each of the four architectures. In DPA, each domain is deployed with two PCE servers. In HPA, one PCE server is assigned for inter-domain path computing (the pPCE); the NORDUNet domain is deployed with one PCE while each of the remaining domains is deployed with two PCEs (the cPCEs). As for DRE, each PCE is divided into a group engine and a unit engine. In SPC, all PCEs are put together to form one PSPC and two RSPCs for centralized provisioning of computing resources. Furthermore, both PCEM and PCETT VMs in the cloud are created or destroyed according to user demands. We suppose that all the service requests are lambda LSPs and that the LSRs do not have wavelength conversion capability, which means that the wavelength continuity constraint is considered.

Cloud services in datacenter to datacenter connectivity networks mainly include high-availability cluster services, data backup, service migration, load balancing (virtual machine migration) and application mobility. Based on a scan of the aforementioned cloud service scenarios, we have identified the requirements for GMPLS optical networks, which should provide fast cross-domain routing, low blocking probability, elastic resource provisioning and high reliability. Accordingly, the simulations are performed in response to these four points for cloud services.

In order to verify the routing performance of SPC, in the first simulation we evaluate the blocking probability and path provisioning latency of the different PCE architectures in path computing. Three simulation scenarios with different ratios of intra-domain to inter-domain routing requests have been conducted: 9:1, 1:1, and 1:9. In addition, the performance of elastic resource provisioning for the different QoS requirements of cloud services is verified from the different ratios of inter-domain routing requests to total requests.

From Fig.8 it can be seen that the path provisioning latency increases with the traffic load for all four architectures. However, the performance of SPC improves as the ratio of intra-domain to inter-domain requests decreases, as illustrated in Fig.8(b) and 8(c). The reason is that SPC, with virtualization and parallel computing technology, performs better on computationally intensive cross multi-domain path computing tasks. This shows that the SPC can provide elastic resource provisioning according to the QoS requirements of different services and accommodate different application scenarios.

In order to verify the reliability of SPC, we evaluate the blocking probability and path provisioning latency of the different PCE architectures in path computing under PCE server failures. Two simulation scenarios with different numbers of faulty PCE servers, 1 and 2, have been conducted. In addition, this simulation also reflects the load balancing performance of SPC with respect to occupied computing resources, from the comparison of the path provisioning latency between the different scenarios. From Fig.9 and Fig.10 it can be seen that the performance of SPC, with virtualization and load balancing technology, is not affected when individual PCE servers fail, whereas the performance of the other architectures is greatly affected. So the SPC significantly outperforms the other architectures in reliability. These results illustrate that the SPC is a promising solution for routing in next-generation dynamic GMPLS networks for cloud services.

Fig.8 Path provisioning latency: (a) intra-domain to inter-domain ratio 9:1; (b) ratio 1:1; (c) ratio 1:9
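For readers who want to reproduce the flavour of this evaluation, the sketch below generates Poisson request arrivals with exponentially distributed holding times on a single 16-wavelength link and estimates a blocking probability. It is a toy single-link model written for illustration, not the authors' OMNeT++ simulation of the MUPBED topology.

```python
import random

def estimate_blocking(arrival_rate, wavelengths=16, n_requests=100_000, seed=1):
    """Toy experiment: lambda LSP requests arriving on one 16-wavelength link."""
    rng = random.Random(seed)
    t, releases, blocked = 0.0, [], 0
    for _ in range(n_requests):
        t += rng.expovariate(arrival_rate)          # Poisson arrivals with rate lambda
        releases = [r for r in releases if r > t]   # free wavelengths whose calls ended
        if len(releases) >= wavelengths:
            blocked += 1                            # no free wavelength: request blocked
        else:
            releases.append(t + rng.expovariate(1.0))  # holding time with unit mean
    return blocked / n_requests

if __name__ == "__main__":
    for load in (4, 8, 12, 16):
        print(f"offered load {load} Erlang -> blocking ~ {estimate_blocking(load):.4f}")
```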



Fig.9 Blocking probability: (a) one PCE server faulty; (b) two PCE servers faulty

Fig.10 Path provisioning: (a) one PCE server faulty; (b) two PCE servers faulty

V. CONCLUSION

This paper presents the motivation, design and evaluation of SPC, a novel stateful PCE-cloud based architecture of GMPLS optical networks for cloud services. It is proposed for datacenter to datacenter and end user to datacenter connectivity and is designed to meet the QoS requirements of cloud services and to address the policy-enabled and constraint-based routing problems of large-scale and multi-domain optical networks. Moreover, another prime design target of the architecture is improving the resource utilization and system reliability of the GMPLS control plane.

The proposed three-layer architecture of the stateful PCE-cloud and GMPLS based control plane delivers on the aforementioned goals by centralizing the distributed PCEs in multiple neighboring domains to form resource pools, and cloud computing technologies are applied to construct the SPC. We expanded the functions of the conventional control plane to support the parsing of multiple routing policies and constraints. Moreover, unified scalable function interfaces are designed for future transparent extension. Accordingly, the SPC based control plane can provide elastic resources for the different QoS requirements of end users and is adequate for maximizing resource utilization and improving reliability. As can be seen from the simulation results, the SPC can effectively address the multi-policy and constraint-based path computing and optimal domain sequence selection problems of multi-domain and large-scale optical networks. The improvement of routing performance is obvious when the number of inter-domain requests is much larger than that of intra-domain requests. Furthermore, it is verified that the SPC significantly outperforms the other architectures in reliability when PCE servers malfunction randomly and in bursts.

ACKNOWLEDGEMENTS

This study is supported by the National Natural Science Foundation of China (No. 61571061) and the Innovative Research Fund of Beijing University of Posts and Telecommunications (2015RC16).

References
[1] Kumar P, Sehgal V, Chauhan D S, et al. Clouds: Concept to optimize the Quality of Service (QoS) for clusters[C]. Information and Communication Technologies (WICT), 2011 World Congress on. IEEE, 2011: 816-821.
[2] Foster I, Zhao Y, Raicu I, et al. Cloud computing and grid computing 360-degree compared[C]. Grid Computing Environments Workshop, 2008. GCE'08. IEEE, 2008: 1-10.
[3] Sharkh M A, Jammal M, Shami A, et al. Resource allocation in a network-based cloud computing environment: design challenges[J]. Communications Magazine, IEEE, 2013, 51(11): 46-52.
[4] Manvi S S, Krishna Shyam G. Resource management for Infrastructure as a Service (IaaS) in cloud computing: A survey[J]. Journal of Network and Computer Applications, 2014, 41: 424-440.



[5] Develder C, De Leenheer M, Dhoedt B, et al. Optical networks for grid and cloud computing applications[J]. Proceedings of the IEEE, 2012, 100(5): 1149-1167.
[6] Y. Lee et al. Framework for GMPLS and PCE Control of Wavelength Switched Optical Networks (WSON). RFC 6163, 2011.
[7] K. Shiomoto et al. Requirements for GMPLS-Based Multi-Region and Multi-Layer Networks (MRN/MLN). RFC 5212, 2008.
[8] Chamania M, Jukan A. A survey of inter-domain peering and provisioning solutions for the next generation optical networks[J]. Communications Surveys & Tutorials, IEEE, 2009, 11(1): 33-51.
[9] Giorgetti A, Fazel S, Paolucci F, et al. Path protection with hierarchical PCE in GMPLS-based multi-domain WSONs[J]. Communications Letters, IEEE, 2013, 17(6): 1268-1271.
[10] Zhang J, Zhao Y, Chen X, et al. The first experimental demonstration of a DREAM-based large-scale optical transport network with 1000 control plane nodes[J]. Optics Express, 2011, 19(26): B746-B755.
[11] Zhang J, Chen X, Ji Y, et al. Experimental demonstration of a DREAM-based optical transport network with 1000 control plane nodes[C]. European Conference and Exposition on Optical Communications. Optical Society of America, 2011: We.10.P1.84.
[12] Zhao Y, Zhang J, Zhang M, et al. DREAM: dual routing engine architecture in multilayer and multidomain optical networks[J]. Communications Magazine, IEEE, 2013, 51(5): 118-127.
[13] Yang H, Zhao Y, Zhang J, et al. Cross stratum optimization of application and network resource based on global load balancing strategy in dynamic optical networks[C]. Optical Fiber Communication Conference. Optical Society of America, 2012: JTh2A.38.
[14] Liu L, Zhang D, Tsuritani T, et al. First field trial of an OpenFlow-based unified control plane for multi-layer multi-granularity optical networks[C]. Optical Fiber Communication Conference. Optical Society of America, 2012: PDP5D.2.
[15] Farrel A, Vasseur J-P, Ash J. A Path Computation Element (PCE)-Based Architecture. RFC 4655, August 2006.
[16] Jain S, Kumar A, Mandal S, et al. B4: Experience with a globally-deployed software defined WAN[C]. Proceedings of the ACM SIGCOMM 2013 conference on SIGCOMM. ACM, 2013: 3-14.

Biographies

Qin Panke, is a lecturer in the School of Computer Science and Technology, Henan Polytechnic University. He received his Ph.D. degree from BUPT, P. R. China, in 2015. His research interests include next-generation optical network architectures for cloud services, routing and wavelength assignment problems in optical networks, and so on. *The corresponding author. E-mail: qinpanke@bupt.edu.cn

Chen Xue, received her master degree from Beijing University of Posts and Telecommunications (BUPT), P. R. China, in 1985. She is now a professor of the State Key Laboratory of Information Photonics and Optical Communications, BUPT. Her main research interests focus on optical access networks and backbone optical transmission.

Wang Lei, is a lecturer in the State Key Laboratory of Information Photonics and Optical Communications, Beijing University of Posts and Telecommunications (BUPT). He received his Ph.D. degree from BUPT, P. R. China, in 2009. His current research interests are in software defined optical transport and access networks.

Wang Liqian, is a lecturer in the State Key Laboratory of Information Photonics and Optical Communications, Beijing University of Posts and Telecommunications (BUPT). She received her Ph.D. degree from BUPT, P. R. China, in 2009. Her current research interests are in broadband access and network integration.
