
DOCTORAL THESIS IN

INFORMATION AND COMMUNICATION TECHNOLOGY


STOCKHOLM, SWEDEN 2019

Ultra-low latency communication


for 5G transport networks

Jun Li

KTH ROYAL INSTITUTE OF TECHNOLOGY

SCHOOL OF ELECTRICAL ENGINEERING AND COMPUTER SCIENCE


Ultra-low latency communication for
5G transport networks

Jun Li

PhD Thesis in Information and Communication Technology

School of Electrical Engineering and Computer Science

KTH Royal Institute of Technology

Stockholm, Sweden, 2019


KTH School of Electrical Engineering and
Computer Science
TRITA-EECS-AVL-2019:59
ISBN: 978-91-7873-243-2
SE-164 40 Kista, SWEDEN

Akademisk avhandling som med tillstånd av Kungl Tekniska högskolan framlägges till offentlig granskning för avläggande av
doktorsexamen i Informations- och kommunikationsteknik fredagen den 20 September 2019 klockan 10.00 i Ka-Sal 308, Electrum,
Kungl Tekniska högskolan, Kistagången 16, Kista.

© Jun Li, September 2019

Tryck: Universitetsservice US AB
Abstract
The fifth generation (5G) mobile communication system is envisioned to serve various mission-critical Internet of Things (IoT) applications, such as industrial automation, cloud robotics and safety-critical vehicular communications. The end-to-end latency required by these services typically lies between 0.1 ms and 20 ms, which is extremely challenging for a conventional cellular network with centralized processing. As an integral part of the cellular network, the transport network, i.e., the segment in charge of the backhaul of radio base stations and/or the fronthaul of remote radio units, plays a particularly important role in meeting such a stringent latency requirement.
This thesis investigates how to support ultra-low latency communication in 5G transport networks, especially in backhaul networks. First, a novel passive optical network (PON) based mobile backhaul is proposed, and tailored communication protocols are designed to enhance the connectivity among adjacent base stations (BSs). Simulation results show that extremely low latency (less than 1 ms packet delay) can be achieved for communications among the BSs, which can then be used to support fast handover for users with high mobility (e.g., vehicles).
Furthermore, the thesis presents a fog computing enabled cellular network (FeCN), in which computing, storage, and network functions are provisioned closer to end users, so that the latency contributed by the transport network can be reduced significantly. In the context of FeCN, the high mobility of users makes it challenging to maintain service continuity under stringent service requirements. Relocating the associated services from the current fog server to the target one, referred to as service migration, has been regarded as a promising solution for mobility management. However, service migration cannot be completed instantaneously and may lead to situations where users temporarily lose access to the service. To address this issue, a quality-of-service (QoS) aware service migration strategy is proposed. The method builds on the existing handover procedures and introduces a distributed fog computing resource management scheme to minimize the potential negative effects of service migration. The performance of the proposed schemes is evaluated in a case study, where realistic vehicle mobility patterns in the metropolitan network scenario of Luxembourg are used to reflect a real-world environment. Results show that low end-to-end latency (e.g., 10 ms) for vehicular communication can be achieved under typical vehicle mobility. During service migration, both the traffic generated by the migration and other traffic (e.g., control information, video) are transmitted via the mobile backhaul network. To balance the performance of these two kinds of traffic, a delay-aware bandwidth slicing scheme is proposed for PON-based mobile backhaul networks. Simulation results show that, with the proposed method, migration data can be transmitted within the required time threshold, while the latency and jitter of non-migration traffic with different priorities can be reduced significantly.

Keywords
Ultra-low latency communication, fog computing, passive optical network, 5G transport network,
service migration, bandwidth slicing, distributed fog resource management.

Sammanfattning
5:e generationens (5G) mobilnät förväntas stödja olika kritiska tillämpningar av Sakernas Internet (IoT), såsom industriell automation, molnbaserad robotik och säkerhetskritisk fordonskommunikation. Kravet på envägs totalfördröjning för dessa tjänster ligger typiskt i intervallet mellan 0,1 ms och 20 ms, vilket är extremt utmanande för det konventionella mobilnätet med centraliserad databearbetning. Transportnätet, det segment av nätet som är ansvarigt för sammankoppling av radiobasstationer (s.k. backhaul) och/eller anslutning av radioenheter (s.k. fronthaul), är en integrerad del av mobilnätet och spelar en särskilt viktig roll för att möta sådana stringenta krav på fördröjningen.

Denna avhandling undersöker hur man kan stödja kommunikation med ultralåg fördröjning för 5G-transportnät, speciellt för backhaul-nätverk. Först beaktas ett nytt passivt optiskt nätverk (PON) för mobil backhaul, och skräddarsydda kommunikationsprotokoll utformas för att förbättra uppkopplingen mellan angränsande basstationer (BS). Simuleringsresultat visar att en extremt låg fördröjning (mindre än 1 ms paketfördröjning) för kommunikation mellan basstationerna kan uppnås, vilket kan användas för att stödja snabb överlämning mellan basstationer (s.k. handover) för mobila användare med hög rörlighet (t.ex. fordon).

Vidare presenterar avhandlingen ett dim-baserat (fog computing) mobilnät (FeCN), där
databehandling, lagring och nätverksfunktioner tillhandahålls närmare slutanvändarna, vilket
innebär att fördröjningen på transportnät kan minskas betydligt. I samband med FeCN är hantering
av mobila användare en av de mest kritiska utmaningarna för användare som kännetecknas av hög
rörlighet. Utmaningen är att upprätthålla tjänsten kontinuerligt och att uppfylla de stringenta
tjänstekraven. Tjänstemigrering, dvs flytt av tjänster från en server till en annan på ett dynamiskt
sätt, har betraktats som en lovande lösning för hantering av mobila användare. Tjänstemigrering kan
dock inte slutföras omedelbart, vilket kan leda till en situation där användare upplever att de förlorar
åtkomst till tjänsterna. För att lösa dessa frågor föreslås en migreringsstrategi som beaktar
tjänstekvaliteten (QoS). Metoden bygger på befintliga handover-procedurer med nyintroducerade
resurshanteringssystem baserade på distribuerad fog computing, för att minimera de eventuella
negativa effekter som induceras av tjänstemigrering. En fallstudie, baserad på ett realistiskt
mobilitetsmönster för fordon i ett Luxemburg-scenario, genomförs med hjälp av simuleringsstudier
för att utvärdera prestanda för de föreslagna systemen. Resultaten visar att låg fördröjning (t.ex., 10
ms) för fordonskommunikation kan uppnås med typisk fordonsmobilitet. Under tjänstemigrering
skickas både trafiken genererad av migreringen och annan datatrafik (t.ex., kontrollinformation och
video) via mobila backhaul-nätverk. För att balansera prestandan för de två typerna av trafik,
föreslås ett system för bandbreddsuppdelning i PON-baserade mobila backhaul-nätverk som tar
fördröjning i beaktan. Simuleringsresultat visar att med den föreslagna metoden kan migreringsdata
framgångsrikt överföras inom den tidsgräns som krävs, medan fördröjningen och
fördröjningsvariationer av övrig trafik med olika prioriteringar kan minskas betydligt.

Nyckelord

Kommunikation med ultralåg fördröjning, dimberäkning (fog computing), passivt optiskt nätverk, 5G-transportnät, tjänstemigrering, bandbreddsuppdelning, distribuerad dimbaserad resurshantering.

Acknowledgment
First and foremost, I would like to express my sincere gratitude to my supervisor, Professor
Jiajia Chen for her guidance, motivation and support during my PhD journey. I would also
like to express appreciation to my co-supervisor, Professor Lena Wosinska for her
continuous guidance and support during my PhD study.
I am really grateful to Associate Professor Markus Hidell for the advance review of my
PhD thesis and to Dr. Björn Skubic for reviewing my PhD proposal and providing insightful
comments. I would also like to thank Professor Vincent Chan for accepting the role of
opponent, and Professor Giuseppe Durisi, Professor Dan Kilper, and Dr. Gemma Vall
Llosera for accepting the roles of grading committee members.
I would like to thank our research group members, Dr. Lei Chen, Xiaoman Shen, and
Jiannan Ou, for the fruitful discussions, their contributions to the co-authored papers, and
their help with my PhD thesis. I would also like to thank Dr. Dung Pham Van, Dr. Carlos
Natalino, Dr. Rui Lin, and Yuan Cao for their contributions to our co-authored papers.
I am thankful to all the members of the Optical Networks Lab (ONLab) for providing a
friendly work environment and for their kind help during my PhD study. It has been a
pleasant journey studying at the ONLab with them.
I would like to thank Professor Mung Chiang for hosting my visiting study at the Edge
Lab at Princeton University. I am also grateful to the other colleagues at the Edge Lab for
their inspiring discussions on the research work.
I am also grateful to all my friends in China and Sweden for their encouragement and
support, which makes my life more wonderful.
Last but not least, I would like to express my heartfelt appreciation to my parents and
sister for their everlasting love and unequivocal support during my whole life.

Jun Li
Stockholm, September 2019

List of Acronyms

3GPP 3rd generation partnership project


4G 4th generation of mobile network
5G 5th generation of mobile network
AWG Arrayed waveguide grating
BBU Baseband unit
BS Base station
CO Central office
CU Central unit
CCM Central control module
CDF Cumulative distribution function
CPRI Common public radio interface
CoMP Cooperative multipoint
DBA Dynamic bandwidth allocation
DBS Dynamic bandwidth slicing
DCM Distributed control module
DES Discrete event-driven simulation
DU Distributed unit
eCPRI Enhanced common public radio interface
EPC Evolved packet core
E2E End to end
EPON Ethernet passive optical network
FeCN Fog computing enabled cellular network
FCFS First come first served
FRR Fog resource reservation
FRL Fog resource reallocation
IoT Internet of things
LP Low priority
MAC Media access control
MEC Mobile edge computing
MPCP Multipoint control protocol
MSP Migration successful probability
NGFI Next generation fronthaul interface
NG-PON Next generation passive optical network
OTN Optical transport network
OLT Optical line terminal
ONU Optical network unit

ODN Optical distribution network
PON Passive optical network
QoS Quality of service
RAN Radio access network
RRH Remote radio head
RRU Remote radio unit
RTT Round-trip time
TGM Traffic generation module
TDM-PON Time division multiplexed passive optical network
TWDM-PON Time and wavelength division multiplexed passive optical network
UE User equipment
VGP Virtual communication group
VM Virtual machine
WDM-PON Wavelength division multiplexed passive optical network
SUMO Simulation of urban mobility
SMSM Service migration strategy module

List of appended papers
Paper A
J. Li, J. Chen, “Optical transport network architecture enabling ultra-low latency for
communications among base stations,” IEEE/OSA Optical Fiber Communication
Conference (OFC), March 19-23, 2017.

Paper B
J. Li, J. Chen, “Passive optical network based mobile backhaul enabling ultra-low latency
for communications among base stations,” IEEE/OSA Journal of Optical Communications
and Networking, vol. 9, issue 10, pp. 855-863, Oct. 2017.

Paper C
J. Li, X. Shen, L. Chen, D. Van, J. Ou, L. Wosinska, and J. Chen, “Service migration
schemes in fog computing enabled cellular network to support real-time vehicular
services,” IEEE Access, vol. 7, pp. 13704-13714, Jan. 2019.

Paper D
J. Li, C. Natalino, D. Van, L. Wosinska, and J. Chen, “Resource management in fog
enhanced radio access network to support real-time vehicular services,” IEEE International
Conference on Fog and Edge Computing (ICFEC), Madrid, May 2017.

Paper E
J. Li, C. Natalino, X. Shen, L. Chen, J. Ou, L. Wosinska, and J. Chen, “Online resource
management in fog enhanced cellular network for real-time vehicular services,” Applied
Sciences. (Manuscript)

Paper F
J. Li, L. Wosinska, and J. Chen, “Dynamic bandwidth slicing for service migration in
passive optical network based mobile backhaul,” IEEE/OSA Asia Communications and
Photonics Conference (ACP), Nov. 2017. Best student paper award

Paper G
J. Li, X. Shen, L. Chen, J. Ou, L. Wosinska, and J. Chen, “Delay-aware network slicing for
service migration in mobile backhaul networks,” IEEE/OSA Journal of Optical
Communications and Networking, vol. 11, issue 4, pp. B1-B9, Apr. 2019.

List of not appended papers
Paper I
Y. Cao, Y. L. Zhao, J. Li, R. Lin, J. Zhang, J. Chen, “Multi-Tenant Provisioning for
Quantum Key Distribution Networks with Heuristics and Reinforcement Learning: A
Comparative Study,” IEEE Transactions on Network and Service Management, 2019.
(Submitted)

Paper II
X. Shen, J. Li, L. Chen, S. He, and J. Chen, “Heterogeneous DSRC and Cellular Networks
with QoS-aware Interface Selection to Support Real-time Vehicular Communications,”
2019. (Submitted)

Paper III
Y. Cao, Y. L. Zhao, J. Li, R. Lin, J. Zhang, J. Chen, “Online Multi-Tenant Secret-Key
Assignment with Reinforcement Learning over Quantum Key Distribution Networks,”
IEEE/OSA Optical Fiber Communication Conference (OFC), March 2019.

Paper IV
J. Li, and J. Chen, “Service Migration Strategies for Ultra-low Latency Services in Edge
Computing,” IEEE/OSA Asia Communications and Photonics Conference (ACP), Nov.
2018. Invited

Paper V
X. Shen, J. Li, L. Chen, J. Chen, and S. He, “Heterogeneous LTE/DSRC Networks to
Support Real-time Vehicular Communications,” IEEE International Conference on
Advanced Information Technology (ICAIT), 2018. Best paper award.

Paper VI
J. Ou, J. Li, L. Yi, and J. Chen, “Resource allocation in passive optical network based
mobile backhaul for user mobility and fog computing,” IEEE/OSA Asia Communications
and Photonics Conference (ACP), Nov. 2017. Invited

Paper VII
J. Chen, and J. Li, “Efficient mobile backhaul architecture offering ultra-short latency for
handovers,” IEEE International Conference on Transparent Optical Networks (ICTON),
Jun. 2016. Invited

Paper VIII
W. Sun, H. Yang, J. Li and W. Hu, “Dynamic wavelength sharing in time and wavelength
division multiplexed PONs,” IEEE International Conference on Transparent Optical
Networks (ICTON), Jun. 2015. Invited

Paper IX
J. Li, W. Sun, H. Yang and W. Hu, “Adaptive Registration in TWDM-PON with ONU
Migrations,” IEEE/OSA Journal of Optical Communications and Networking, vol. 6, issue
11, pp. 943-951, 2014.

Paper X
H. Yang, W. Sun, J. Li and W. Hu, “Energy Efficient TWDM Multi-PON System with
Wavelength Relocation,” IEEE/OSA Journal of Optical Communications and Networking,
vol. 6, no. 6, 2014.

Paper XI
J. Li, H. Yang, W. Sun and W. Hu, “Registration Efficiency in TWDM-PON Systems with
User Migration,” IEEE/OSA Asia Communications and Photonics Conference (ACP), Nov.
2013.

Paper XII
H. Yang, W. Sun, W. Hu, and J. Li, “ONU migration in dynamic Time and Wavelength
Division Multiplexed Passive Optical Network (TWDM-PON),” OSA Optics Express, vol.
21, no. 18, pp. 21491–21499, 2013.

Paper XIII
H. Yang, W. Sun, J. Li and W. Hu, “User Migration in Time and Wavelength Division
Multiplexed PON (TWDM-PON),” IEEE International Conference on Transparent Optical
Networks (ICTON), June, 2013. Invited

Contents

Abstract .................................................................................................................... i
Sammanfattning...................................................................................................... iii
Acknowledgment...................................................................................................... v
List of Acronyms .................................................................................................... vii
List of appended papers .......................................................................................... ix
List of not appended papers..................................................................................... xi
1 Introduction .......................................................................................................... 1
1.1 Background ................................................................................................................................1
1.1.1 5G Transport networks ........................................................................................................3
1.1.2 Fog computing enabled cellular networks ..........................................................................5
1.2 Problem statement ......................................................................................................................6
1.3 Thesis contributions ...................................................................................................................7
1.4 Ethics and sustainable development...........................................................................................9
1.5 Thesis organization ..................................................................................................................10
2 Evaluation methodology ...................................................................................... 11
2.1 Use case of connected vehicles ................................................................................................11
2.2 Definition of metrics ................................................................................................................12
2.3 Simulators ................................................................................................................................13
3 Low-latency PON based mobile backhaul architecture ......................................... 17
3.1 Background and motivation .....................................................................................................17
3.1.1 PON.................................................................................................................................. 17
3.1.2 PON based mobile backhaul network .............................................................................. 18
3.2 Physical layer architecture design ............................................................................................20
3.3 Communication protocol..........................................................................................................22
3.4 Bandwidth allocation algorithm ...............................................................................................23
3.5 Performance evaluation............................................................................................................23
3.6 Summary ..................................................................................................................................27
4 Service migration strategy in FeCN ..................................................................... 29
4.1 Background and motivation .....................................................................................................29
4.2 QoS aware service migration strategy......................................................................................32

4.3 Performance evaluation ........................................................................................................... 33
4.4 Summary.................................................................................................................................. 37
5 Distributed fog computing resource management in FeCN ................................... 39
5.1 Background and motivation..................................................................................................... 39
5.2 BS-Fog architecture ................................................................................................................. 40
5.3 Distributed computing resource management ......................................................................... 41
5.3.1 Low-priority service selection algorithm.......................................................................... 42
5.3.2 Neighboring Fog-BS selection algorithm ......................................................................... 43
5.4 Performance evaluation ........................................................................................................... 43
5.5 Summary.................................................................................................................................. 47
6 Bandwidth slicing in mobile backhaul networks ................................................... 49
6.1 Background and motivation..................................................................................................... 49
6.2 Bandwidth slicing mechanism ................................................................................................. 51
6.3 Delay-aware resource allocation algorithm ............................................................................. 52
6.4 Performance evaluation ........................................................................................................... 55
6.5 Summary.................................................................................................................................. 60
7 Conclusions and future work ............................................................................... 61
7.1 Conclusions ............................................................................................................................. 61
7.2 Future work ............................................................................................................................. 62
Summary of the original works ............................................................................... 65
Biography .............................................................................................................. 67

Chapter 1

Introduction
Latency is a highly critical factor for mission-critical services in fifth generation (5G)
cellular networks. This thesis focuses on designing 5G transport networks with ultra-low
latency communication to support these time-critical services. In this chapter, the research
background and research problems are first briefly introduced. Then, the contributions of
this thesis are highlighted.

1.1 Background
The fifth-generation (5G) cellular network is envisioned to support a variety of emerging
mission-critical services such as industrial automation, cloud robotics and safety-critical
vehicular communications [1, 2]. These mission-critical services usually have stringent
requirements on latency, jitter and reliability. In general, the required end-to-end latency is
on the order of milliseconds, while the probability that this requirement is met is expected to
be as high as 99.999%. For example, the communication latency between sensors and control
nodes for industrial automation has to be lower than 0.5 milliseconds, while that for virtual
and augmented reality has to be lower than 5 milliseconds, as shown in Fig. 1.1 [3]. To
support these mission-critical services, ultra-reliable low-latency communication (URLLC)
is considered as one of the main enabling technologies and will play a key role in the 5G
paradigm.
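To make the reliability aspect concrete, the sketch below checks whether a set of delay samples meets a latency budget at a five-nines target. It is a toy Python example: the exponential delay distribution, its 1 ms mean, and the sample size are assumptions for illustration, not measurements from this thesis.

```python
import random

def meets_urllc_target(delays_ms, budget_ms, target=0.99999):
    """True if the fraction of samples within the budget reaches the target."""
    within = sum(1 for d in delays_ms if d <= budget_ms)
    return within / len(delays_ms) >= target

# Toy delay samples: exponentially distributed with a 1 ms mean.
random.seed(0)
samples = [random.expovariate(1.0) for _ in range(100_000)]

# Even a 1 ms average delay fails a five-nines check at a 5 ms budget,
# because the distribution tail matters, not the mean.
print(meets_urllc_target(samples, budget_ms=5.0))   # False
print(meets_urllc_target(samples, budget_ms=0.5))   # False
```

The point of the sketch is that URLLC is a constraint on the tail of the delay distribution: an architecture can deliver an excellent average latency and still miss the 99.999% target.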
In the existing cellular network (e.g., Long Term Evolution, LTE), the communication
latency is contributed by the radio access network (RAN), the core network (CN), and the
transport network between them (Fig. 1.2), where the transport network accounts for a
significant share of the latency. Meanwhile, there is an important trend to move computing,
control, data storage, and network functions (e.g., the Evolved Packet Core, EPC) into
centralized data centers, in order to make full use of resources and reduce energy
consumption [4]. However, with centralized processing it is a big challenge to realize

Figure 1.1: 5G use cases requiring low latency [3].

ultra-low latency for time-critical applications. For example, many time-critical vehicular
applications (e.g., remote driving, real-time navigation) have been developed to improve
road traffic efficiency and safety [5, 6]. Most of these vehicular applications are
computationally intensive and therefore need to be deployed at remote data centers [7, 8]. In
such cases, the packets generated by vehicles need to be transmitted to the EPC before
reaching these applications (see the blue line in Fig. 1.2). In addition, vehicles, characterized
by high mobility, usually traverse the coverage of multiple BSs during a journey. To
maintain connection continuity, the handover technique has been used in the existing
cellular network, referred to as the procedure by which user equipment (UE) changes its
associated BS [9]. In such a procedure, the UE needs to release the resources at the current
BS and reconnect to the target BS, during which the ongoing services may be interrupted. A
rapid handover requires fast information exchange between the source BS and the target
BS. This exchanged information needs to be transmitted to the EPC first, even if the two
BSs are located close to each other (see the red line in Fig. 1.2), which introduces handover
delay. For example, wide area networks typically introduce more than tens of milliseconds
of delay, which can hardly satisfy the latency requirement of safety-related services (e.g.,
less than 10 ms for remote driving). Adding the latency introduced by other components
such as the RAN, it is evident that the existing cellular network cannot satisfy the stringent
latency requirements of these mission-critical services. Therefore, the existing cellular
network, including the RAN, core network, and transport network, needs to be enhanced or
even redesigned in order to achieve ultra-low latency communication.
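The gap between the two signaling paths can be illustrated with simple budget arithmetic. The sketch below is a back-of-the-envelope Python example; the distances, the 5 μs/km fiber delay, and the 10 ms core-processing allowance are assumed figures, not values reported in this thesis.

```python
FIBER_DELAY_US_PER_KM = 5.0  # one-way propagation delay in fiber, ~5 us/km

def one_way_delay_ms(distance_km, processing_ms=0.0):
    """Propagation delay over fiber plus any node processing delay."""
    return distance_km * FIBER_DELAY_US_PER_KM / 1000.0 + processing_ms

# Handover signaling routed via a remote EPC: assume 1000 km each way,
# plus 10 ms of core-network processing and queuing.
via_epc_ms = 2 * one_way_delay_ms(1000) + 10.0

# Direct exchange between two nearby BSs, assume 10 km of fiber apart.
direct_ms = one_way_delay_ms(10)

print(f"via EPC: {via_epc_ms:.2f} ms, direct: {direct_ms:.2f} ms")
# via EPC: 20.00 ms, direct: 0.05 ms
```

Even with optimistic assumptions, the detour through a remote EPC consumes the entire latency budget of a safety-related service, whereas a direct inter-BS path would leave it almost untouched; this is the motivation for the backhaul architectures studied in Chapter 3.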


Figure 1.2: Illustration of end-to-end latency in 4G network [2].

1.1.1 5G Transport networks


The transport network is the segment in charge of the backhaul of radio base stations
and/or the fronthaul of remote radio units (RRUs), and it is an important component of
cellular networks [10]. In particular, the fronthaul is the transport network that connects the
remote radio head (RRH) and the baseband unit (BBU) in the fourth generation (4G)
communication system. The common public radio interface (CPRI) is the traditional
fronthaul interface, which imposes stringent requirements on the transport network in terms
of capacity and latency. To address the issues caused by CPRI, the next generation
fronthaul interface (NGFI) has been designed for 5G. To realize NGFI, a two-level
transport architecture is used, consisting of two transport network segments: one between
the central unit (CU) and the distributed unit (DU), and one between the DU and the RRU,
as shown in Fig. 1.3 [11]. Such a CU-DU-based 5G BS is referred to as a next generation
NodeB (gNB). The transport network segment between the DU and the RRU is regarded as
the fronthaul network, in which the enhanced common public radio interface (eCPRI) is
considered to relax the requirements imposed by CPRI, enabling a flexible base station
split. The transport network that connects the CU and the DU is regarded as the midhaul
network, in which various baseband function split options are supported. Here, statistical
multiplexing can be used to reduce the required capacity of the transport network. The
backhaul network is the part of the transport network that connects the central unit to the
core network.
In NGFI, baseband processing functions are expected to be allocated flexibly between the
RRU and the CU to satisfy the requirements of different scenarios. Such a function splitting
mechanism has a significant impact on the NGFI requirements in terms of data rate,
latency, and synchronization, and it also brings new challenges for fronthaul and midhaul
networks [11]. For example, the required round-trip latency for split Option 8 (the split
between the physical layer and radio frequency) has to be less than 250 μs, while that for
split Option 3 (the split between high radio link control and low radio link control) is
between 1.5 ms and 10 ms [12, 13]. Meanwhile,


Figure 1.3: 5G transport network [11, 12].

emerging 5G services (e.g., the tactile internet, industrial automation) also bring stringent
latency requirements for backhaul networks. In the 3rd generation partnership project
(3GPP) standards, two communication interfaces are defined at the BS in mobile backhaul
networks, namely S1 and X2 [14]. Each BS can communicate with the central aggregation
switch in the mobile core network through S1, while direct information exchange between
BSs is enabled by X2. To satisfy the stringent latency requirements, on the one hand, the
latency for S1 traffic (i.e., the traffic exchanged through S1) needs to be reduced. On the
other hand, frequent handovers may occur due to user mobility, in order to transfer ongoing
calls or data sessions from one cell to a neighboring cell. Such handovers have to be
handled immediately to ensure low latency. Besides, to improve cell throughput, the
technique of cooperative multi-point (CoMP), which allows simultaneous transmissions
from multiple cells, has been proposed; it also imposes a stringent latency requirement on
the communication among BSs [15]. In this regard, X2 traffic (i.e., the traffic exchanged
through X2), e.g., control information and data transferred during handover, also needs to
be transmitted as soon as possible.
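These round-trip budgets translate directly into a maximum fiber reach for the fronthaul or midhaul link. The sketch below converts a budget into a reach limit; the 10 μs/km round-trip fiber delay and the 50 μs processing allowance are illustrative assumptions, not figures from the cited standards.

```python
RTT_FIBER_US_PER_KM = 10.0  # ~5 us/km one-way, counted twice for round trip

def max_reach_km(rtt_budget_us, processing_rtt_us=50.0):
    """Fiber distance at which propagation uses up the remaining RTT budget."""
    return max(0.0, (rtt_budget_us - processing_rtt_us) / RTT_FIBER_US_PER_KM)

print(max_reach_km(250.0))    # Option 8 (PHY/RF split), 250 us budget -> 20.0 km
print(max_reach_km(1500.0))   # Option 3, lower end of its budget -> 145.0 km
```

Under these assumptions, Option 8 confines the fronthaul to roughly 20 km of fiber, which matches the commonly quoted CPRI-style reach limit, while higher-layer splits such as Option 3 permit a much longer midhaul and thus more centralization.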
There are multiple access technologies tailored for the 5G transport network, such as point-
to-point fiber access, passive optical networks (PONs), flexible Ethernet, and optical
transport networks (OTNs). Among these technologies, PON is regarded as an excellent
candidate due to its point-to-multipoint topology, which enables efficient use of fiber
resources, and its wide deployment [16, 17]. Meanwhile, advanced PON technologies need
to be developed to satisfy the requirements of the RAN and of real-time services. For
example, in a time division multiplexed PON (TDM-PON) based fronthaul network, low-
latency dynamic bandwidth allocation (DBA) algorithms need to be designed in order to
reduce the delay induced by conventional DBA mechanisms. In PON-based mobile
backhaul networks, X2 traffic needs to additionally pass through the central unit and even
the mobile core network, which may lead to high latency [18]. Thus, the latency in PON
needs to be handled carefully to satisfy the latency requirements of time-critical services.
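To see why conventional DBA is problematic for sub-millisecond targets, consider a rough delay model of a report/grant polling cycle. This is a back-of-the-envelope sketch only; the 1 ms cycle time and 0.2 ms round-trip time are assumed values, and real DBA delay also depends on load and scheduling details.

```python
def avg_upstream_delay_ms(cycle_ms, rtt_ms):
    """Rough mean upstream delay for report/grant polling DBA in a TDM-PON.

    A packet arriving at a random point in a cycle waits on average half a
    cycle for the next REPORT opportunity, then roughly one more cycle
    before the corresponding GRANT takes effect, plus the propagation
    round trip of the REPORT/GRANT exchange.
    """
    return 0.5 * cycle_ms + cycle_ms + rtt_ms

# With a typical 1 ms polling cycle and a 20 km reach (~0.2 ms RTT),
# the DBA mechanism alone already costs about 1.7 ms.
print(avg_upstream_delay_ms(cycle_ms=1.0, rtt_ms=0.2))  # 1.7
```

Even this optimistic estimate exceeds a 1 ms end-to-end budget before any queuing or RAN processing is counted, which is why low-latency DBA designs shorten or bypass the polling cycle.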


Figure 1.4: Illustration of end-to-end latency in fog computing enabled cellular network.

1.1.2 Fog computing enabled cellular networks


The latency in transport networks can also be reduced by moving computing, storage,
control, and network functions to the edge of the network instead of performing all these
functions in remote data centers. Fog computing is a new paradigm that delivers computing,
storage, control and network functions closer to end users, and it can be integrated with the
existing cellular network (e.g., at aggregation points or base stations) to provide ultra-low
latency communication for time-critical services [4]. As shown in Fig. 1.4, end users can
access the applications (e.g., remote driving) hosted in fog nodes with low transport
latency. In addition, by moving the EPC closer to the BSs, the communication latency
among BSs can also be reduced significantly. Thus, low latency for X2 traffic can be
achieved, and fast handover can be well supported for time-critical services.
Even though fog nodes integrated with cellular networks can provide computing and
storage capability closer to end users to support time-critical services, this approach also
has several limitations. Specifically, the contradiction between the limited coverage of a
single fog node and the mobility of end users may result in performance degradation in
terms of latency. To address this, service migration has been proposed and is regarded as a
promising solution [19]. Here, service migration refers to relocating services from one fog
node to another. As shown in Fig. 1.5, once a single fog node is overloaded, service
migration is triggered to offload this fog node to other fog nodes or edge servers. Besides,
when users move from the area covered by one fog node to another, time-critical services
need to be migrated accordingly in order to follow the users' movement and maintain
service continuity while satisfying the stringent latency requirements of these services.
However, service migration takes time, which may result in service interruption. Thus,
service migration needs to be handled by algorithms that take both the service requirements
and the available network resources into account.


Figure 1.5: Service migration in fog computing enabled cellular networks.

1.2 Problem statement


This thesis focuses on ultra-low latency communication for transport networks,
especially for mobile backhaul networks. To achieve such ultra-low latency
communication, two main research areas are investigated, as introduced in the following
sections together with concrete research questions.
Research area one: Design and investigation of PON-based mobile backhaul networks to
support ultra-low latency communication among adjacent BSs.
In conventional PON-based mobile backhaul networks, a BS and an optical network unit
(ONU) are co-located. To realize inter-BS communications, the X2 traffic has to
additionally go through the optical line terminal (OLT), and possibly even the edge nodes
of the mobile core network. Thus, the latency can be quite high even if the BSs are
geographically close to each other. To address this issue, the following questions have been
considered for the design and investigation of a novel PON-based mobile backhaul
architecture realizing ultra-low latency communication among BSs.
• First, how to design a backhaul architecture in which the splitters belonging to
different PONs can be interconnected?
• Second, how to design a communication protocol tailored for the proposed
architecture to avoid signal conflicts?
• Finally, how to schedule bandwidth resources to realize fast inter-BS
communications?


Research area two: Efficient and smooth service migration in fog computing enabled
cellular networks.
Service migration is an effective solution to maintain service continuity for users
characterized by high mobility in fog computing enabled cellular networks (FeCN).
However, there are research challenges in designing an efficient service migration scheme
that considers the service requirements, the capability of the fog nodes, as well as the
differentiated traffic. The following research questions have been addressed to solve some
of these challenges.
• First, how to design effective service migration strategies that minimize the negative
effects induced by such migrations while maintaining service continuity and
satisfying stringent latency requirements?
• Second, how to allocate sufficient resources to reduce the service migration
blocking caused by the limited computing resources (e.g., CPU, memory) at the fog
nodes?
• Finally, in a PON-based mobile backhaul network, where bandwidth is shared by
both the traffic generated by service migration and other normal traffic, how should
the developed service migration strategy deal with differentiated traffic, taking their
different quality of service requirements into consideration?

1.3 Thesis contributions


The contributions of this thesis are categorized into two parts, associated with the two
above-mentioned research areas. The first part comprises the work in Paper A and Paper
B. In this part, a novel PON-based mobile backhaul architecture is proposed to enable ultra-
low latency communication among neighboring BSs. The second part comprises the work
in Paper C, Paper D, Paper E, Paper F and Paper G. In this part, fog computing enabled
cellular networks are first presented, in which computation capabilities are provisioned
closer to the end users to reduce the latency in transport networks. In this context, a quality
of service (QoS) aware service migration strategy is proposed to maintain service
continuity while reducing the migration cost. Following this, to guarantee sufficient
resources at a target fog node to accept the migrated services, distributed fog resource
management schemes are designed and presented. Furthermore, to balance the performance
of migration traffic and non-migration traffic, a novel bandwidth slicing algorithm for
PON-based backhaul networks is proposed. Finally, to evaluate the performance of the
proposed strategy and algorithms, simulators that integrate realistic mobility patterns have
been developed. The main works are summarized as follows.
Low-latency PON-based mobile backhaul architecture
In Paper A and Paper B, a novel transport network architecture for mobile backhaul
networks is presented. The purpose of the designed mobile backhaul architecture is to
enhance the connectivity among neighboring BSs, especially for those BSs belonging to
different PONs. In the proposed architecture, the splitters belonging to different PONs are
connected by adding fibers. Therefore, BSs can communicate with each other directly
without going through the central office. However, such enhanced connectivity among BSs
brings risks of traffic conflicts, which goes beyond the capability of the existing medium
access control (MAC) protocols tailored for intra-PON communications. To address this, a
novel MAC protocol and a dynamic bandwidth allocation algorithm tailored for the
proposed architecture are developed to realize fast inter-BS communications. The
performance of the proposed scheme is investigated by discrete event simulation. The
results show that, with the proposed PON-based mobile backhaul network, an extremely
low latency (less than 1 ms packet delay) for communications among adjacent BSs can be
achieved regardless of the traffic load and feeder fiber length, demonstrating a great
potential to support future time-critical applications, e.g., vehicular services.
QoS aware service migration strategy in FeCN
In Paper C, a QoS aware service migration scheme based on existing handover procedures
is proposed to support real-time vehicular services. In the context of FeCN for vehicular
communications, mobility is one of the main challenges for vehicles due to the limited
coverage of the fog nodes co-located with BSs. On the one hand, the continuity of vehicular
communications needs to be maintained by the handover procedure. On the other hand, the
ongoing vehicular services also need to be handled carefully from the QoS perspective. In
this paper, we propose a QoS aware service migration scheme that follows the existing
handover procedure, with additional decision mechanisms based on whether the QoS
requirements of the ongoing real-time vehicular services are satisfied. The main contribution
of the proposed scheme is that the service migration protocol is designed to enhance the
existing handover protocol while realizing information exchange, so that the negative
effects caused by migration can be reduced. In order to investigate the performance of
the proposed scheme, a simulator is developed to analyze the end-to-end latency profile,
the reliability and the migration cost in a case study where a realistic vehicle mobility pattern
for Luxembourg is considered. The results indicate that the end-to-end latency requirements
of the vehicular services can be well satisfied, while the migration overheads can be reduced
significantly compared with two benchmarks.
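For illustration, the essence of such a QoS-based migration decision can be sketched as follows; the names, fields and condition are our own simplification, not the exact protocol of Paper C. The idea is to migrate a service alongside handover only when keeping it at the source fog node would violate its latency budget and the target can actually satisfy it.

```python
# Illustrative sketch (names and condition are assumptions, not the exact
# scheme of Paper C): migrate a service at handover only if staying at the
# source fog node would violate its QoS budget while the target satisfies it.
from dataclasses import dataclass

@dataclass
class Service:
    latency_budget_ms: float     # QoS requirement of the ongoing service
    processing_ms: float         # processing time at the hosting fog node

def should_migrate(svc: Service, latency_via_source_ms: float,
                   latency_via_target_ms: float) -> bool:
    """True when the post-handover path through the source fog node can no
    longer meet the budget, but the path via the target fog node can."""
    stay = latency_via_source_ms + svc.processing_ms
    move = latency_via_target_ms + svc.processing_ms
    return stay > svc.latency_budget_ms >= move

svc = Service(latency_budget_ms=20.0, processing_ms=5.0)
print(should_migrate(svc, latency_via_source_ms=18.0, latency_via_target_ms=2.0))  # True
print(should_migrate(svc, latency_via_source_ms=10.0, latency_via_target_ms=2.0))  # False
```

Gating migrations on QoS violation, rather than migrating on every handover, is what keeps the migration frequency (and hence the interruption overhead) low.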
Distributed fog computing resource management in FeCN
In Paper D, two distributed fog computing resource management schemes are presented to
preferentially provision computing resources to real-time vehicular services. The first is a
resource reservation scheme, where a certain amount of computing resources is reserved for
real-time vehicular services. The second is a dynamic resource allocation scheme, where
the resources used by other services (named low-priority services) can be released
and reallocated to real-time vehicular services when the traffic load is high. The key idea
of both schemes is to prioritize computing resources for real-time vehicular services. With
such prioritization, the performance of low-priority services can be expected to degrade,
especially when the traffic load at the fog nodes is high. In this regard, a third distributed
fog resource management scheme is proposed in Paper E to solve this issue. In this scheme,
real-time vehicular services are still preferentially allocated computing resources.
Meanwhile, low-priority services can be migrated from fog nodes with high load to those
with low load. In order to minimize the negative effects and the communication overheads
caused by the migrations of low-priority services, a service selection algorithm and a
delay-aware neighboring fog node selection algorithm are proposed. We investigate the
performance of the proposed scheme by using the developed simulator together with the
traffic model of the country of Luxembourg. Simulation results indicate that the
performance of both real-time vehicular services and low-priority services can be improved
compared with the two benchmarks.
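For illustration, the reservation idea of the first scheme can be sketched as a simple admission test; the class, units and parameters are our own simplification, not the exact scheme of Paper D. Low-priority requests may only use the unreserved share of a fog node's capacity, so a reserve always remains for real-time vehicular services.

```python
# Illustrative sketch of the resource reservation idea (all parameters are
# assumptions, not Paper D's exact scheme): part of a fog node's capacity is
# reserved for real-time services; low-priority services use the remainder.
class FogNode:
    def __init__(self, cpu_units: int, reserved_for_rt: int):
        self.capacity = cpu_units
        self.reserved = reserved_for_rt   # units kept for real-time services
        self.used = 0

    def admit(self, demand: int, real_time: bool) -> bool:
        free = self.capacity - self.used
        # Low-priority requests must leave the reserved share untouched.
        limit = free if real_time else free - self.reserved
        if demand <= limit:
            self.used += demand
            return True
        return False

node = FogNode(cpu_units=10, reserved_for_rt=3)
print(node.admit(6, real_time=False))  # True: 6 <= 10 - 3
print(node.admit(2, real_time=False))  # False: 4 free, but only 1 unreserved
print(node.admit(2, real_time=True))   # True: real-time may use the reserve
```

The dynamic variant of Paper D differs in that it can additionally preempt (release and reallocate) low-priority allocations under high load, rather than relying on a static reserve alone.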
Bandwidth slicing in mobile backhaul networks
In Paper F and Paper G, a delay-aware bandwidth slicing mechanism for service
migration in PON-based mobile backhaul networks is presented. In the proposed scheme,
the backhaul bandwidth is dynamically and effectively divided into two slices, for migration
traffic and non-migration traffic, according to their different delay requirements. The key
idea of the proposed scheme is to cut large-size migration data into small pieces and
transmit them in multiple polling cycles under the requirement of a delay threshold.
Meanwhile, for non-migration traffic, bandwidth is provisioned according to the different
QoS requirements. To avoid the bandwidth being monopolized by migration traffic, a
threshold on the bandwidth allocated to migration traffic in each polling cycle is set to
minimize the negative effects on non-migration traffic. The performance of the migration
traffic and the non-migration traffic can be balanced by adjusting this threshold. Simulation
results show that the migration data can be transmitted successfully within a required time
threshold, while the latency and jitter of non-migration traffic with different priorities can
be significantly reduced in comparison to the benchmarks.
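The core chunking idea can be condensed into a short sketch; the function below is our own simplification (all names and parameters are assumptions), not the algorithm of Papers F and G. Migration data is split into per-cycle pieces capped by the threshold, and the transfer is feasible only if it completes within the deadline expressed in polling cycles.

```python
# Illustrative sketch of the slicing idea (parameters are assumptions, not the
# exact algorithm of Papers F/G): cap the migration bandwidth per polling
# cycle and check that the chunked transfer still meets its delay deadline.
from typing import List, Optional

def plan_migration(total_bytes: int, per_cycle_cap: int,
                   deadline_cycles: int) -> Optional[List[int]]:
    """Return per-cycle chunk sizes, or None if the deadline cannot be met
    under the per-cycle bandwidth cap reserved for migration traffic."""
    chunks = []
    remaining = total_bytes
    while remaining > 0:
        piece = min(per_cycle_cap, remaining)   # threshold caps each cycle
        chunks.append(piece)
        remaining -= piece
    return chunks if len(chunks) <= deadline_cycles else None

print(plan_migration(10_000, 3_000, 4))   # [3000, 3000, 3000, 1000]
print(plan_migration(10_000, 3_000, 3))   # None: would need 4 cycles
```

Raising the per-cycle cap shortens the migration but steals more bandwidth from non-migration traffic in each cycle, which is exactly the trade-off the threshold tunes.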

1.4 Ethics and sustainable development


This thesis focuses on ultra-low latency communication for transport networks, which
can enable emerging mission-critical applications. These applications can be used to
make the world more sustainable. For example, the designed ultra-low latency backhaul
networks and the proposed service migration scheme can well support real-time vehicular
applications (e.g., crash avoidance, remote driving), thus improving road efficiency and
safety. In this regard, societal and economic sustainability can significantly benefit from
our research. Furthermore, in our proposed solutions, passive optical network technology
is adopted due to its energy efficiency. In this way, environmental sustainability can be
improved by reducing power consumption.
On the other hand, our research may potentially introduce ethical aspects that need
to be considered. In our proposed PON based mobile backhaul architecture, additional
fibers need to be deployed among the splitters belonging to different PONs to enhance the
connectivity among adjacent BSs. However, the deployment of additional fibers should not
have any negative effects on the environment. In addition, our proposed service migration
scheme can keep the communication latency low, but it may also lead to a short service
interruption during the migration procedure, which potentially affects the performance of
real-time vehicular applications and introduces safety risks.

1.5 Thesis organization


This thesis is organized as follows:
Chapter 2 presents the research methodologies used in this thesis. First, the key
performance indicators are introduced. Following that, three simulators based on discrete
event-driven simulation are described, which are developed to evaluate the performance of
the proposed schemes.
Chapter 3 introduces a PON-based mobile backhaul architecture, as well as a tailored
communication protocol and bandwidth allocation algorithm, to enhance the connectivity
among neighboring BSs. The performance of the proposed scheme is investigated with the
developed simulator. Simulation results show that ultra-low latency communication can be
achieved both among BSs that belong to the same PON and among those belonging to
different PONs.
Chapter 4 introduces fog computing enabled cellular networks (FeCN). In the
context of FeCN, a QoS aware service migration strategy based on the existing handover
procedure is proposed for real-time vehicular services. To investigate the performance of
the proposed migration strategy, a realistic mobility pattern in the country of Luxembourg
is used as a case study. The results indicate that the end-to-end latency can be maintained at
a low level, while the required migration frequency can be reduced significantly.
Chapter 5 presents a distributed fog computing resource management scheme, in which
resources are prioritized for real-time vehicular services. A service selection algorithm that
improves communication efficiency is proposed, together with a delay-aware neighboring
fog node selection algorithm that minimizes the negative effects on low-priority services.
Chapter 6 presents a dynamic bandwidth slicing mechanism for service migration in
PON-based mobile backhaul networks. In such a bandwidth slicing mechanism, large-size
migration data is cut into small pieces and transmitted under a required delay threshold. The
mechanism also takes care of the non-migration traffic so that the negative effects induced
by the large-size migration traffic can be minimized.
Chapter 7 concludes the thesis and highlights the possible future works.

Chapter 2

Evaluation methodology
A variety of mission-critical services are expected to be supported in 5G networks. A
common feature of these services is that they have stringent latency requirements. Real-
time vehicular services are among them, for which mobility is an issue that may affect the
service continuity. In this thesis, the focus is on the design of transport networks for these
mission-critical services, especially for real-time vehicular services.
In this chapter, the QoS requirements of vehicular services are first introduced. Then,
the definitions of the metrics (e.g., end-to-end latency) used in this thesis are discussed.
Finally, the three tailored simulators that have been developed to investigate the
performance of the proposed schemes are presented.

2.1 Use case of connected vehicles


Connectivity is becoming an important feature for vehicles, providing an information-rich
travel environment and thus improving road safety and traffic efficiency [20]. Meanwhile, it
also brings challenges in collecting, storing, and processing the large amount of data
generated by the vehicles. To effectively support vehicular services, vehicular
communication systems should be enhanced by integrating cellular networks with cloud
computing technologies [8, 21]. These cloud-based vehicular services can be divided into
three categories according to their latency and reliability requirements, as shown in
Table 2.1. For example, the vehicular services used for advanced driving (e.g., remote
driving, autonomous driving) are time-critical and impose a stringent communication
latency requirement on cellular networks.


Table 2.1: Cloud based vehicular services. © 2019 IEEE (Paper C).

Application         Latency        Reliability      Examples
Advanced driving    1 ms - 20 ms   >99.999%         Autonomous driving; remote driving
Efficient driving   <100 ms        90% - 99.999%    Road sign notification; automated parking; real-time navigation
Infotainment        <100 ms        Not a concern    Augmented reality games; local advertisement

2.2 Definition of metrics


The main metrics evaluated in this thesis are defined as follows.
End-to-end latency is the time from the moment when a packet is transmitted by the
source to the moment when it is successfully received at the destination [1].
Average end-to-end latency is the average value of the end-to-end latencies over all the
received packets.
Jitter is the variance of the end-to-end latencies over all the received packets.
Reliability is the probability that the end-to-end latency is below the maximum allowable
latency level [22]. Here, it is derived from the Cumulative Distribution Function (CDF) of
the end-to-end latency.
Migration delay refers to the time from the moment that a migration is initiated until the
affected service is successfully transferred to the target fog node; it consists of the
communication delay between the source and the target fog nodes and the processing time
at the fog nodes.
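For illustration, the metrics defined above can be computed directly from a set of per-packet latency samples; the following minimal sketch uses invented sample values and follows the definitions given in this section.

```python
# Minimal sketch (sample values are invented): computing the metrics defined
# above from per-packet end-to-end latency samples, in milliseconds.
import statistics

def average_latency(samples):
    return statistics.mean(samples)

def jitter(samples):
    # Defined in this section as the variance of the end-to-end latencies.
    return statistics.pvariance(samples)

def reliability(samples, max_allowed):
    # Empirical CDF of the latency, evaluated at the maximum allowed level.
    return sum(1 for s in samples if s <= max_allowed) / len(samples)

latencies = [0.8, 1.1, 0.9, 1.4, 2.3, 0.7, 1.0, 1.2]
print(average_latency(latencies))      # 1.175
print(reliability(latencies, 2.0))     # 0.875 (7 of 8 packets within 2 ms)
```

With the Table 2.1 targets in mind, a service such as advanced driving would require the reliability value at its latency bound to exceed 0.99999 rather than the 0.875 of this toy sample.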
It should be noted that the exact definition of end-to-end latency used in each scenario
is further specified in Table 2.2.

Table 2.2: Definitions of end-to-end latency in different chapters.

Chapter 3: The total time experienced by a packet from the moment when it is sent from
the source BS to the moment when it is received by the target BS.

Chapter 4: The total time experienced by a packet from the moment when it is sent from
the User Equipment (UE) to the moment when it is finally received by the fog node that
hosts the offered application.

Chapter 6: The total time experienced by a packet from the moment when it is sent from
the ONU to the moment when it is accepted by the OLT.


2.3 Simulators
As communication systems are usually large-scale and complex, it is difficult to
evaluate their performance by implementation on physical equipment, and simulation has
therefore become one of the common research methodologies. In communication system
design research, discrete event-driven simulation (DES) is an effective methodology to model
the operation of a system as a discrete sequence of events. Each event occurs at a particular
instant in time and marks a change of state in the system. In this thesis, three DES-based
simulators are developed to evaluate the performance of the proposed schemes.
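For illustration, the core of such a discrete event-driven simulator can be reduced to a time-ordered event queue and a loop that pops the earliest event; the sketch below shows this principle with names of our own choosing, not the structure of the three thesis simulators.

```python
# Generic DES core (a sketch of the principle, not the thesis simulators):
# events are processed in timestamp order, and each event handler may change
# state and schedule further events in the future.
import heapq
import itertools

class Simulator:
    def __init__(self):
        self._queue = []                   # min-heap ordered by event time
        self._ids = itertools.count()      # tie-breaker for equal timestamps
        self.now = 0.0

    def schedule(self, delay, handler):
        heapq.heappush(self._queue, (self.now + delay, next(self._ids), handler))

    def run(self, until):
        while self._queue and self._queue[0][0] <= until:
            self.now, _, handler = heapq.heappop(self._queue)
            handler()

# Example: a packet arrival that schedules its own departure 0.5 ms later.
sim = Simulator()
log = []
def departure(): log.append(("departure", sim.now))
def arrival():
    log.append(("arrival", sim.now))
    sim.schedule(0.5, departure)
sim.schedule(1.0, arrival)
sim.run(until=10.0)
print(log)   # [('arrival', 1.0), ('departure', 1.5)]
```

All three simulators described below follow this pattern, differing in the state kept by the modules (CCM/DCM) and in the events they exchange.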
Simulator I: Low-latency mobile backhaul architecture
To realize ultra-low latency communication among BSs, a novel PON based mobile
backhaul architecture with a tailored communication protocol is proposed, as introduced in
Chapter 3. Based on the Ethernet PON (EPON) protocols, a simulator is developed using
MATLAB.
As shown in Fig. 2.1, the simulator consists of a central control module (CCM) and
distributed control modules (DCMs), where the DCMs can communicate with each other
directly, without going through the CCM, according to the distributed bandwidth allocation
algorithm (BAA) implemented in the DCMs.

Figure 2.1: Illustration of simulator for low-latency mobile backhaul architecture.

Simulator II: Service migration strategy and fog computing resource management
In order to evaluate the proposed service migration strategy and the fog computing
resource allocation schemes introduced in Chapter 4 and Chapter 5, a simulator is
developed using Python. As shown in Fig. 2.2, this simulator consists of a central control
module (CCM) and several distributed control modules (DCMs). A DCM includes a traffic
generation module (TGM) and a fog computing resource management module (FRM). In
the TGM, the vehicular traffic is generated by Simulation of Urban Mobility (SUMO) [23],

while the background traffic is generated by a module written in Python according to the
service traffic characteristics, e.g., a Poisson traffic pattern. In the FRM, the proposed fog
computing resource management algorithms are implemented. Each DCM works in a
distributed fashion and can communicate with the other DCMs or with the central control
module (CCM). The backhaul resource allocation algorithm and the service migration
strategies implemented at the CCM will be introduced in the next subsection.
SUMO is an open-source, microscopic road traffic simulator designed to handle
large road networks [23]. By using this professional road traffic simulation tool, a
simulation environment with realistic vehicular traces in the country of Luxembourg can be
established. First, the road network is abstracted from a real map of Luxembourg that can
be downloaded from OpenStreetMap [23]. Then, various vehicular traffic flows can be
generated according to real traffic data of Luxembourg. Note that the openly available
traffic pattern of Luxembourg has been used as an example in this thesis. The detailed
information of each vehicle (e.g., speed, location) is obtained from SUMO and used as
input for the other modules.

Figure 2.2: Illustration of the simulator for service migration strategy and resource management.

Simulator III: Bandwidth slicing mechanism in mobile backhaul networks


To support service migration, a PON based mobile backhaul network is applied in this
thesis. For such PON based mobile backhauls, a novel bandwidth slicing mechanism is
proposed to effectively transmit the migration traffic and the non-migration traffic, as
introduced in Chapter 6. Based on the EPON protocols, a simulator is developed using
MATLAB.
As shown in Fig. 2.3, the simulator consists of a central control module (CCM) and
several distributed control modules (DCMs). The proposed bandwidth slicing algorithm is
implemented at the CCM, where the bandwidth is divided into two slices, for migration
traffic and non-migration traffic, respectively. Furthermore, the bandwidth can be allocated
to the different types of traffic according to the bandwidth allocation algorithm (BAA)
implemented at each DCM. In the simulation, both the migration traffic and the
non-migration traffic are generated by a module hosted at the DCMs according to the
service traffic characteristics, i.e., a Poisson traffic pattern.

Figure 2.3: Illustration of the simulator for backhaul resource allocation.
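The Poisson traffic pattern used by the traffic generation modules corresponds to exponentially distributed packet inter-arrival times; the following minimal sketch illustrates the principle (the function name and parameter values are our own, not taken from the simulator code).

```python
# Sketch of Poisson traffic generation (parameters are illustrative): the
# inter-arrival times of a Poisson process with rate lambda are exponentially
# distributed with mean 1/lambda.
import random

def poisson_arrivals(rate_per_ms: float, duration_ms: float, seed: int = 1):
    """Return arrival timestamps (ms) of a Poisson process over a window."""
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    while True:
        t += rng.expovariate(rate_per_ms)   # exponential inter-arrival gap
        if t >= duration_ms:
            return arrivals
        arrivals.append(t)

arrivals = poisson_arrivals(rate_per_ms=2.0, duration_ms=1000.0)
# With a rate of 2 packets/ms over 1000 ms we expect about 2000 arrivals.
print(len(arrivals))
```

Feeding such timestamps into the DES event queue reproduces the memoryless load under which the DBA and slicing algorithms are evaluated.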

Chapter 3

Low-latency PON based mobile backhaul architecture
In this chapter, a novel passive optical network (PON) based mobile backhaul architecture
enabling ultra-low communication latency among adjacent BSs is introduced, which
can be used to support fast handover for end users with high mobility.

3.1 Background and motivation


In this section, a brief introduction to PON and to the conventional PON based mobile
backhaul networks is given, followed by a discussion of the existing works.
3.1.1 PON
PON is a point-to-multipoint system, which typically consists of an optical line terminal
(OLT) at the central office (CO), an optical distribution network (ODN) with passive power
or wavelength splitters, and optical network units (ONUs) at the subscribers' side. PON is
widely used for access networks due to its simplicity, high capacity and energy efficiency
[24]. It can be categorized into several subclasses according to the multiple access technique
used and the end-to-end transmission, i.e., time division multiplexed PON (TDM-PON),
wavelength division multiplexed PON (WDM-PON), and time and wavelength division
multiplexed PON (TWDM-PON) [25, 26].
• TDM-PON
TDM-PON utilizes TDM access techniques. In the upstream direction (from the ONU to
the OLT), each ONU burst is transmitted in an assigned time slot, and the signals sent from
all ONUs are multiplexed in the time domain. In the downstream direction (from the OLT
to the ONU), packets are broadcast to all ONUs through one or more power splitters at the
remote node. Here, the remote node can consist of passive power or wavelength splitters, or
a wavelength multiplexer/demultiplexer (e.g., an arrayed waveguide grating).
• WDM-PON
WDM-PON utilizes WDM access techniques, in which the wavelength selectivity is
performed by the remote node. In WDM-PON, each ONU is assigned a specific wavelength;
thus, this architecture can provide higher bandwidth per user, and the ONUs can work at
their individual data rates rather than the aggregated one.
• TWDM-PON
TWDM-PON stacks multiple PONs and combines TDM and WDM capabilities. In a
TWDM-PON, multiple wavelengths are multiplexed together to share a single feeder fiber,
and each wavelength can be shared by multiple users through TDM. TWDM-PON has been
regarded as the most promising candidate for next generation passive optical network
stage 2 (NG-PON2) [26].
3.1.2 PON based mobile backhaul network
PON is a promising solution for mobile backhaul [18, 27]. The conventional PON based
mobile backhaul architecture is depicted in Fig. 3.1. Each ONU is associated with one BS
and connected to the OLT. The OLT distributes/aggregates the traffic from/to the mobile
core network.

Figure 3.1: High level view of the conventional PON based mobile backhaul architecture.
© 2017 IEEE (Paper B).


As introduced in Chapter 1, two interfaces, S1 and X2, are defined at the BS in mobile
backhaul networks. S1 is used for the communications between the BS and the central
aggregation switch in the core network. X2 is a logical interface for direct inter-BS
information exchange, which supports functions such as CoMP and handover [28]. In
PON based mobile backhaul networks, both S1 and X2 traffic are transmitted from the
ONUs to the OLT. If the BSs are associated with ONUs within the same PON, the X2
traffic can be handled at the CO where the OLT is located (see the green dotted line in
Fig. 3.1) with one-hop forwarding. In such a case, the latency may remain low. However,
the fiber section between the splitter and the OLT can be up to 100 km in a long-reach
PON, and thus the latency can be significantly increased. As a result, the stringent
end-to-end latency requirement may not be satisfied. In addition, when the BSs are
associated with ONUs that belong to different PONs, the X2 traffic has to pass through
several active nodes and segments, especially in the case where the OLTs are located in
different COs (see the red dashed line in Fig. 3.1).


Figure 3.2: High-level view of PON based backhaul architecture with modified remote nodes.
© 2017 IEEE (Paper B).

In order to solve the above latency issue, enhanced PON based mobile backhaul networks
have been studied. The key idea is to enhance the connectivity between neighboring BSs to
support inter-BS communications. As shown in Fig. 3.2, for BSs that are associated with
ONUs within the same PON, the X2 traffic can be exchanged directly through the passive
components (see the green dotted line) without being handled by the OLT or by the active
nodes located farther away. Here, the passive components can be a splitter-box containing
several passive combiners and diplexers, or a multi-port-in, multi-port-out arrayed
waveguide grating (AWG) [29-31]. However, if the BSs are associated with ONUs that
belong to different PONs, the X2 traffic still has to be first sent to the OLT, and possibly
even farther to the mobile core network, for processing. Therefore, the existing solutions
cannot support low-latency inter-BS communication, especially in the case where the OLTs
are located in different COs. On the other hand, enhancing the connectivity in PON may
lead to an increased risk of signal conflicts, which brings challenges for the control plane
design. Nevertheless, the existing media access control (MAC) protocols for PON (e.g., the
multipoint control protocol (MPCP) [32, 33]) are far from sufficient to handle this
control-plane problem, and very few studies have focused on it.
In this chapter, we propose a novel PON based mobile backhaul architecture that realizes
enhanced interconnections among neighboring remote nodes, as an extension of the
existing PON based backhaul architectures. Fig. 3.3 shows a high-level view of the
proposed architecture, in which the connectivity among BSs associated with the same PON
(named intra-PON) as well as among BSs associated with neighboring PONs (named
inter-PON) can be enhanced. The physical layer design of the proposed architecture and its
tailored communication protocol are introduced in the following sections.


Figure 3.3: High-level view of the proposed PON based architecture with interconnections among
neighboring remote nodes. © 2017 IEEE (Paper B).

3.2 Physical layer architecture design


Figure 3.4 presents the detailed physical-layer design of the proposed PON based mobile
backhaul architecture. As shown in the figure, PON1 is connected with the other (M-1)
PONs. Here, PON1 is referred to as the serving PON, and its connected PONs are the target
PONs. A serving PON and its target PONs form a virtual communication group (VCG).
Note that any PON can become a serving PON and form its own VCG.
In a VCG, each PON is connected with the OLT and N ONUs through an (M+2)×N
splitter. At the side towards the OLT, (M-1) ports of each splitter are reserved to connect its
adjacent splitters for inter-PON communications, while another two ports are connected to
each other through an isolator, enabling the upstream data to be broadcast to the ONUs
within the PON. Thereby, inter-PON and intra-PON communications can be achieved
directly without passing through any active nodes (see the green dotted line and the red
dashed line in Fig. 3.4). When the number of supported ONUs in one PON is large, the
power attenuation induced by the multiport splitters may become intolerable. In this regard,
optical amplifiers can be deployed between the splitter and the isolator, or between
neighboring splitters, to increase the link power budget.
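For a rough sense of why amplifiers may be needed, recall that an ideal 1:K power split attenuates the signal by 10·log10(K) dB, so the extra inter-PON ports of the enlarged splitter quickly consume the link budget. The sketch below uses generic, assumed numbers (a 29 dB budget and 0.3 dB/km fiber loss), not values from this thesis.

```python
# Illustrative power budget check (assumed, generic numbers): the ideal
# splitting loss of a 1:K power split is 10*log10(K) dB; adding ports for
# inter-PON links enlarges K and eats into the optical link budget.
import math

def split_loss_db(ports: int) -> float:
    return 10 * math.log10(ports)      # ideal split loss; excess loss ignored

def budget_left_db(budget_db: float, ports: int, fiber_km: float,
                   fiber_db_per_km: float = 0.3) -> float:
    return budget_db - split_loss_db(ports) - fiber_db_per_km * fiber_km

# A 32-way split over a 20 km feeder against a 29 dB budget:
print(round(budget_left_db(29.0, 32, 20.0), 2))          # 7.95
# Doubling the port count costs another ~3 dB:
print(round(split_loss_db(64) - split_loss_db(32), 2))   # 3.01
```

Every doubling of the splitter size therefore costs about 3 dB, which is why amplification between the splitter and the isolator (or between neighboring splitters) becomes attractive as M and N grow.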


Figure 3.4: PON based mobile backhaul network architecture for interconnection among adjacent remote
nodes. © 2017 IEEE (Paper B).

As shown in Fig. 3.4, each ONU is equipped with two tunable transceivers (TRx). One
is used for the transmission of S1 traffic between the ONU and the associated OLT, while
the other one is responsible for the inter-BS communications in both intra-PON and inter-
PON cases (i.e., X2 traffic).
In addition, in order to avoid signal conflicts and to enable fast inter-BS
communications, we design a wavelength assignment scheme tailored for the proposed
architecture. In detail, a pair of wavelengths {λu, λd} is used for the upstream and
downstream communication between the OLT and the ONUs in one PON, respectively. In a
VCG, each PON is assigned one unique wavelength (λβi) for inter-BS
communication within PONi (i = 1, 2, ..., M), while M-1 ONUs in PONi (i.e., ONUi,h,
h = 1, 2, ..., M-1) are designated as relay ONUs, each of which is assigned another
wavelength belonging to the wavelength subset Pi for inter-BS communication with its
neighboring ONUs associated with PONk (k = 1, 2, ..., M, k ≠ i) in the same VCG. The detailed
explanations can be found in Paper B.
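The wavelength plan above can be sketched in a few lines. The sketch below is our own illustration, not the exact plan of Paper B: wavelength identifiers are symbolic strings, and the per-target naming of the relay wavelengths in the subset Pi is an assumption.

```python
def wavelength_plan(M):
    """Build an illustrative wavelength plan for one VCG of M PONs.

    Each PON i gets one unique intra-PON wavelength lambda_beta_i, and its
    M-1 relay ONUs each get one wavelength from subset P_i for reaching
    each of the other PONs. All identifiers here are symbolic.
    """
    plan = {}
    for i in range(1, M + 1):
        intra = f"lambda_beta_{i}"  # unique intra-PON wavelength
        # one relay wavelength per target PON k != i (subset P_i)
        inter = {k: f"lambda_{i}_{k}" for k in range(1, M + 1) if k != i}
        plan[i] = {"intra": intra, "inter": inter}
    return plan

plan = wavelength_plan(3)
# PON1 uses lambda_beta_1 internally and relays towards PON2 and PON3
assert plan[1]["intra"] == "lambda_beta_1"
assert set(plan[1]["inter"]) == {2, 3}
```

Note that no wavelength string is reused across PONs, which mirrors the conflict-avoidance goal of the assignment scheme.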

3.3 Communication protocol


To explain the designed MAC protocol for the proposed architecture, we consider a VCG
consisting of three PONs, each of which has two ONUs for simplicity, as shown in Fig.3.5.
In such VCG, PON1 is the serving PON, and PON2 and PON3 are the target PONs. The data
for inter-BS communications are transmitted periodically within a fixed cycle time (Tc),
which can be divided into two periods (Tc1 and Tc2). Tc1 is used for the intra-PON
communications, while Tc2 is used for the inter-PON communications. At the beginning of
Tc , each ONU in one PON is assigned with a specified time slot to report the size of data to
be sent by broadcasting a request message (see the squares in the beginning of Tc1 in Fig.
3.5). Specifically, the ONUs need to notify other ONUs of their absence, even if they have
no data to be transmitted. After the request messages are received, the polling tables related
to all ONUs will be updated, which contains the information, such as the time stamp, the
source and destination addresses. The detailed descriptions of the polling tables are given in
Paper B.

Legend of Fig. 3.5 — SN: serial number of the source ONU; LD: length of data;
RTT: round-trip time (ONU-RN); TN: serial number of the target ONU; T: polling table;
Tg: guard time.
Figure 3.5: Medium access control protocol.


With the information contained in the polling table, each ONU can schedule the
bandwidth according to a specific bandwidth allocation algorithm and calculate the starting
time of each time slot for data transmission. At the end of the data transmission, a new
request containing the information about the bandwidth needed in the next round of Tc1 is
sent. Once the current period Tc1 is finished, all relay ONUs tune from the wavelength for intra-
PON communication to the wavelengths for inter-PON communications. Similarly, the
relay ONUs need to report their data sizes and update the polling tables in Tc2. If the
destination is a relay ONU in another PON, the data are received in Tc2. Otherwise, the
data need to be forwarded to their final destination in Tc1 of the next cycle, as shown in Fig.
3.5. The details of the designed protocol are introduced in Paper B.

3.4 Bandwidth allocation algorithm


This section presents the proposed DBA algorithm. Each ONU can request as many
transmission time slots (Re) as it needs, as long as the request does not exceed the maximum
available transmission time slots (Gmax), which means that the granted transmission time slot is

Ge = min(Gmax, Re). (3.1)

To prevent the channel from being monopolized by a single ONU, Gmax should satisfy the
following constraint:

Gmax ≤ (Tc1 − Treq)/N − Tg, (3.2)

where Treq is the time for transmitting the request messages of the N ONUs in one PON, and Tg
represents the guard time between two consecutive time slots of different ONUs. To avoid
collisions at the splitters, the starting time of the (i+1)-th time slot is carefully calculated as

Ts(i+1) = Ts(i) + RTT(i)/2 − RTT(i+1)/2 + Te(i) + Tg, (3.3)

where Ts(i) and Te(i) are the starting time and the duration of the i-th time slot, respectively, and RTT(i) is the round-trip
time (RTT) for propagation between the source ONU and the splitter in the i-th time slot.
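Equations (3.1)-(3.3) can be combined into a short scheduling routine. The following sketch is ours (function names and the example values in microseconds are illustrative, not from Paper B):

```python
def grant(requested, g_max_val):
    """Eq. (3.1): grant the requested slots, capped at G_max."""
    return min(g_max_val, requested)

def g_max(t_c1, t_req, t_g, n_onus):
    """Upper bound on G_max from Eq. (3.2)."""
    return (t_c1 - t_req) / n_onus - t_g

def next_start(t_s_i, t_e_i, rtt_i, rtt_next, t_g):
    """Eq. (3.3): starting time of slot i+1, compensating for the RTT
    difference between consecutive source ONUs so that their bursts do
    not collide at the splitter."""
    return t_s_i + rtt_i / 2 - rtt_next / 2 + t_e_i + t_g

# Example: N = 5 ONUs, all times in microseconds
cap = g_max(t_c1=105.0, t_req=5.0, t_g=1.0, n_onus=5)   # 19.0
ge = grant(requested=25.0, g_max_val=cap)               # capped at 19.0
# Second ONU is closer to the splitter (smaller RTT), so it starts later
t2 = next_start(t_s_i=0.0, t_e_i=ge, rtt_i=10.0, rtt_next=6.0, t_g=1.0)
assert cap == 19.0 and ge == 19.0
assert t2 == 22.0  # 0 + 10/2 - 6/2 + 19 + 1
```

The RTT terms in `next_start` shift each slot so that bursts from ONUs at different distances arrive back-to-back at the splitter, separated only by the guard time.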

3.5 Performance evaluation


The performance of the proposed scheme in terms of average end-to-end (E2E) packet
latency and jitter is investigated by numerical simulation. E2E packet latency consists of
propagation delay, processing delay for switching implemented at the OLT, and queueing
delay. The average E2E latency is the average value of the E2E latencies for both inter-
PON and intra-PON communications. Some assumptions are further explained in Paper B.
The simulator and definitions of the metrics used have been introduced in Chapter 2, and
Table 3.1 shows the parameters used in the simulation.


Table 3.1: Simulation parameters.

Parameter Value

Number of PONs in one VCG (M) 6
Number of ONUs in one PON (N) 5
Distance between two adjacent splitters 2 km
Propagation delay within the optical links 5 μs/km
Guard time 1 µs
Wavelength tuning time 1 µs [34]
Link rate in both upstream and downstream 10 Gbps
Buffer size for each ONU 50 Mbytes
Processing time at active nodes 0.2 ms [29]
Distance between ONU and splitter 1 km
Size of the VCG (considering that one BS may need connectivity to around 20-30 neighboring cells) 20-30 [35]
Confidence level 95%

As discussed, each polling cycle can be divided into two periods (Tc1 and Tc2) for intra-
PON and inter-PON communications, respectively. We define the ratio of Tc1 over Tc2 as Rc.
In one VCG, all UEs are assumed to have the same probability of communicating with each
other. Therefore, the ratio of the amount of data for inter-PON communications over that
for intra-PON communications (Rd) can be calculated as

Rd = (N×(M-1))/(N×M-N-1). (3.4)

Here, the number of wavelengths used for intra-PON communication in each PON is 1,
while that for inter-PON communication is M-1. To maximize the bandwidth utilization,
the amounts of data to be sent during Tc2 and Tc1 should be proportional to the
average amounts of actual data generated for the inter-PON and intra-PON communications.
According to this principle, the value of Rc can be calculated as

Rc = (Rd/(M-1))/1 = N/(N×M-N-1). (3.5)

In the simulation, when N = 5 and M = 6, Rc is 0.21. As shown in Fig. 3.6, the average
E2E latency can be kept below 1 ms when the X2 traffic load is lower than 0.6. With respect to the
latency requirement of less than 1 ms, the performance is best when Rc = 0.21,
which further verifies our analysis.
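The values of Rd and Rc from Eqs. (3.4)-(3.5) for the simulated configuration can be reproduced directly (a minimal check; the function names are ours):

```python
def ratio_rd(n, m):
    """Eq. (3.4): inter-PON over intra-PON data volume ratio
    for N ONUs per PON and M PONs per VCG."""
    return (n * (m - 1)) / (n * m - n - 1)

def ratio_rc(n, m):
    """Eq. (3.5): cycle ratio Rc = (Rd / (M-1)) / 1, i.e., the
    per-wavelength inter-PON load relative to the intra-PON load."""
    return ratio_rd(n, m) / (m - 1)

# Simulation setting: N = 5 ONUs per PON, M = 6 PONs per VCG
rc = ratio_rc(5, 6)
assert abs(rc - 5 / 24) < 1e-12   # Rc ≈ 0.21, the best case in Fig. 3.6
```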


[Plot: average E2E latency (ms, log scale) versus X2 traffic load, for Rc = 0.18, 0.21 and 0.24; the 1 ms requirement is marked.]

Figure 3.6: Average latency versus X2 traffic load for Tc = 0.5 ms. © 2017 IEEE (Paper A).

[Plot: average E2E latency (ms, log scale) versus X2 traffic load, for Tc = 0.5, 1 and 1.5 ms.]

Figure 3.7: Average latency versus X2 traffic load with Rc = 0.21. © 2017 IEEE (Paper A).

We further compare the performance of the proposed scheme with two benchmarks,
both based on the conventional PON based mobile backhaul networks (see Fig. 3.1).
In Benchmark 1, the same transmission channel is shared by X2 traffic and S1 traffic, while
in Benchmark 2, X2 traffic and S1 traffic are transmitted on separate dedicated channels.
It can be seen in Fig. 3.8 that, compared to the two benchmarks, the
proposed scheme performs the best in terms of average E2E latency. The average E2E
latency for most packets is less than 0.5 ms even when the fluctuation of the
latency is taken into account, which is the lowest among the three schemes.

[Plot: latency (ms, log scale) versus load for the proposed scheme and Benchmarks 1 and 2; levels of roughly 0.3 ms, 0.5 ms and 1.2 ms are marked.]

Figure 3.8: Latency vs. total traffic (where X2 traffic is 10% S1 traffic). Rc= 0.21, Tc =0.5 ms, L=20 km.
© 2017 IEEE (Paper A).

In Paper B, the effects of the S1 traffic load and the distance between the OLT and the splitters
(L) on the performance of the proposed scheme are further studied. The simulation results
show that the average E2E latency remains below 1 ms regardless of the S1 traffic
load and L.

In addition, the impact of user mobility, in terms of UE speed and density, on the
performance of the proposed scheme is studied. A UE in a given cell is assumed to move to
each of its neighboring cells with equal probability, and the handover request arrivals follow a
Poisson process. Once a handover occurs, both control-plane traffic and user-plane traffic
are exchanged through the X2 interface. Here, the control-plane traffic includes the Handover
Request, Handover Request Acknowledge, Status Transfer and
Release Resource messages, while the user-plane traffic is the forwarded user
data. The handover frequency in one cell is related to the density of active UEs (U), the
average speed of the UEs (S, km/h) and the diameter of a cell (D, km). In each handover
procedure, four control messages are exchanged [34], so the control-plane traffic rate can be expressed as

Cx2 = 4·m·U·S·D/3600 (bit/s). (3.6)

In the simulation, the average length of a control message (m) is set to 120 bytes. U =
ρ·p, with ρ denoting the number of UEs per km2 (referred to as the UE density) and p
representing the probability that a UE is in an active session. Similarly, the user traffic that is
transmitted between the source and the target ONU can be expressed as

Ux2 = U·S·D·T·F/3600 (bit/s), (3.7)

where T, typically 0.05 s, denotes the duration of the handover, and F represents
the traffic flow per UE (bit/s), which includes multiple services (e.g., voice, web browsing,
file and video download) and follows a uniform distribution in the range of [0, 100] Mbit/s [36].
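Eqs. (3.6) and (3.7) can be evaluated as below. The sketch is ours: the UE density term and the choice of F = 50 Mbit/s (the mean of the uniform range) are illustrative assumptions, not values from Paper B.

```python
def control_plane_rate(m_bits, u, s, d):
    """Eq. (3.6): X2 control-plane rate in bit/s.
    m_bits: average control message size in bits (four messages per handover),
    u: density of active UEs, s: average speed (km/h), d: cell diameter (km)."""
    return 4 * m_bits * u * s * d / 3600.0

def user_plane_rate(u, s, d, t, f):
    """Eq. (3.7): forwarded user-plane rate in bit/s.
    t: handover duration (s), f: traffic flow per UE (bit/s)."""
    return u * s * d * t * f / 3600.0

m_bits = 120 * 8              # 120-byte control message
u = 1000                      # active UEs per km^2 (illustrative)
s, d = 40.0, 1.0              # 40 km/h, 1 km cell diameter
c_x2 = control_plane_rate(m_bits, u, s, d)
u_x2 = user_plane_rate(u, s, d, t=0.05, f=50e6)
assert round(c_x2) == 42667   # ≈ 42.7 kbit/s of control traffic
assert u_x2 == 1000 * 40.0 * 1.0 * 0.05 * 50e6 / 3600.0
```

As expected, the forwarded user-plane traffic dominates the control-plane traffic by several orders of magnitude at these settings.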
Figure 3.9 shows that, under the latency requirement of 1 ms for mobile
backhaul networks, the maximum U can be up to 67000 per km2 when S = 40 km/h. The
maximum U decreases to about 5000 per km2 as S increases to 500 km/h. It can be seen that
the proposed scheme supports large groups of users and high-speed mobility well, satisfying
typical requirements in 5G scenarios (i.e., 2000 per km2 for S < 500 km/h) [37].


Figure 3.9: Percentage of packets (latency<1 ms) as a function of the density of active UE.
© 2017 IEEE (Paper B).

3.6 Summary
In this chapter, we present a PON based mobile backhaul architecture, in which the
connectivity among adjacent BSs is enhanced by adding fibers between splitters. In
addition, a tailored communication protocol and a dynamic bandwidth allocation algorithm
are proposed. Simulation results show that a communication latency of less than 1 ms can be
achieved with the proposed scheme. The architecture is also compatible with legacy
TDM-PONs and with TWDM-PON, the primary option for next generation PON stage 2
(NG-PON2).

On the other hand, deploying new fibers between splitters belonging to different PONs
introduces an additional cost. Thus, how to minimize the cost induced by such fiber
connections needs to be further investigated.

Chapter 4

Service migration strategy in FeCN


In this chapter, the concept of fog computing enabled cellular networks is presented. To
deal with the user mobility issue in such FeCNs, a QoS aware scheme is proposed, in which
service migration decisions are made based on QoS metrics.

4.1 Background and motivation


A fog node can be a terminal or a standalone node, and can be co-located with existing
cellular network infrastructure, such as routers, gateways, aggregation points and BSs, as
shown in Fig. 4.1 [38, 39]. Among them, the BS (e.g., the LTE evolved NodeB) is a promising
segment to be integrated with fog nodes, forming a BS-Fog and giving rise to the new
concept of fog computing enabled cellular networks (FeCNs). Such an FeCN is a promising
candidate to support real-time services (e.g., real-time vehicular services), due to the
ubiquitous access to the RAN infrastructure as well as the low communication delay enabled by
fog computing.

In an FeCN, the BSs are responsible for providing network functions (e.g., handovers),
whereas computational resources (e.g., computing and storage capability) can be provided
locally by the fog nodes. One BS-Fog can cooperate with other BS-Fogs or with the cloud to
allocate tasks dynamically. As introduced in Chapter 1, two interfaces (X2 and S1) are
defined at each BS and can be used to support this cooperation. In detail,
S1, which is assigned for the BS to communicate with the central aggregation switch in the
mobile core network, can be used to realize the communications between the BS-Fog and the
cloud, while X2, the logical interface for direct information exchange between BSs, can
be used for the communications among the BS-Fogs. Although this architecture has many
advantages, several challenges still prevent it from becoming a reality, e.g., user
mobility.


Figure 4.1: High-level view of fog computing enabled cellular networks.

We take connected vehicles as a use case to investigate the above issue of user mobility.
In the context of an FeCN, consider a vehicle that is traveling while accessing a fog node. To maintain
service continuity, the vehicle needs to keep accessing the fog node and the provisioned
running services through the backhaul network, as shown in Fig. 4.2. When the vehicle travels
away from the serving BS-Fog (BS-Fog 1 in Fig. 4.2), the end-to-end latency will increase,
especially when the vehicle does not have a fixed route.

Figure 4.2: Illustration of no service migration. © 2019 IEEE (Paper C).

In order to keep the vehicle always accessing the fog services in one hop, the ongoing
service can be migrated following the vehicle's trace, as shown in Fig. 4.3. Such migration
can be performed in combination with the handover procedure. On the one hand, one-hop
access means that a vehicle can directly access the services at its associated BS-Fog. On the
other hand, service migration cannot always be completed immediately, which may lead to
situations where users experience loss of service access. Therefore, frequent service
migration is not always a good choice.

Figure 4.3: Illustration of service migration triggered by handover. © 2019 IEEE (Paper C).

Several existing works handle the mobility issue in mobile edge
computing (MEC) or fog computing [40-45]. Here, MEC is similar in spirit to fog
computing, in that computing and storage are delivered at the network edge. In [40], the
authors designed a set of general architectural components and a tailored interaction
principle among these components to support VM migration in the fog computing
paradigm. In [41], the authors proposed a general layered framework, in which the
migrated applications are decomposed into three layers. In this framework, only the
layers that are missing at the target fog node need to be transmitted from the source fog
node; thus, the amount of data that needs to be transmitted during the service migration can
be reduced significantly. The authors in [42] proposed a VM handoff mechanism for
service migration, in which the migration files are compressed adaptively
according to the current network and computation conditions, thereby reducing the total
migration time. In [43, 44], a cost model using Markov decision process based approaches
is developed to make service migration decisions optimally by considering user mobility,
balancing the migration cost and the user experience. In [45], a time window
based service migration scheme is proposed to optimize the service placement sequence in
order to reduce the migration cost. However, most of the existing schemes are based on
abstract models, in which parameters are optimized statically. In these cases, it usually
takes a long time to obtain an optimized solution, whereas users with high mobility (e.g.,
vehicles) usually face dynamic situations. For example, in wireless networks, users
need to make service decisions quickly and frequently.

4.2 QoS aware service migration strategy


In this section, we present a QoS aware service migration strategy based on the existing
handover procedure. The proposed scheme combines the two strategies presented
above, as illustrated in Fig. 4.4. The key idea is to minimize the migration overhead
while maintaining the E2E latency at an acceptable level under given QoS requirements.
E2E latency is a key QoS metric for real-time vehicular communication: when the E2E
latency is at an unacceptable level, the performance in terms of other metrics (e.g., reliability and
packet drop) also degrades. Therefore, we use the E2E latency metric to explain the
proposed QoS aware service migration strategy. The proposed scheme is, however, not
limited to the E2E latency; following a similar principle, other QoS metrics can also be
considered.


Figure 4.4: Illustration of QoS-aware service migration. © 2019 IEEE (Paper C).

Figure 4.5 shows the communication protocol for the proposed QoS aware service
migration scheme. Once the QoS requirements cannot be satisfied, the source BS-Fog node
triggers the service migration procedure and sends a Migration Request message, which
contains the information about the QoS requirements of the affected services, to the target
BS-Fog. After receiving the Migration Request message, the target BS-Fog first decides
whether or not to accept the request, and then sends back a Migration Request ACK to
inform the source BS-Fog of its decision. If the request is accepted, the source BS-Fog
starts the migration.

In the proposed scheme, service migration can be achieved by the pre-copy technique,
which is widely used for live virtual machine (VM) migration, as presented in [41, 42]. The
migration is performed in two phases. First, memory pages are transferred to the target
BS-Fog iteratively without suspending the VM. When sufficient memory pages
have been transferred, the source BS-Fog suspends the VM and finishes transferring the
remaining memory pages to the target BS-Fog. The services cannot be properly accessed
while the VM is suspended; this period is denoted as the downtime (Dt).
When the service migration has been triggered but the service has not yet been migrated, the UE
still accesses the source BS-Fog (see the blue dashed line in Fig. 4.5). Moreover, during the
downtime, when the handover is completed while the service migration is not, the UE still
has to access the services that run in the target BS-Fog, as shown in Fig. 4.5. After the migration is
completed, the UE can directly access the service in one hop (see the red dashed line in Fig.
4.5). The details are introduced in Paper C.
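The two-phase pre-copy behavior described above can be approximated with a standard iterative model. The sketch below is our own simplification (fixed page-dirtying rate, stop condition on the remaining data volume), not the exact model of Paper C or [41, 42]; it converges when the dirtying rate is below the link bandwidth.

```python
def precopy_times(vm_size_bits, dirty_rate_bps, bw_bps, stop_threshold_bits):
    """Estimate total migration time and downtime for pre-copy migration.

    Round 1 copies the whole memory image; each later round copies the
    pages dirtied during the previous round. When the remaining data
    drops to stop_threshold_bits or below, the VM is suspended and the
    rest is sent in one stop-and-copy round, whose duration is Dt.
    """
    remaining = vm_size_bits
    total = 0.0
    while remaining > stop_threshold_bits:
        t = remaining / bw_bps           # duration of this copy round
        total += t
        remaining = dirty_rate_bps * t   # pages dirtied meanwhile
    downtime = remaining / bw_bps        # final stop-and-copy round (Dt)
    return total + downtime, downtime

# 100 Mbit VM image, 10 Mbit/s dirtying, 100 Mbit/s link, stop below 1 Mbit
total, dt = precopy_times(100e6, 10e6, 100e6, 1e6)
assert dt < total
assert dt <= 1e6 / 100e6   # Dt bounded by threshold / bandwidth
```

The model makes the tradeoff visible: a lower stop threshold shrinks the downtime Dt at the price of more pre-copy rounds and a longer total migration time Dm.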

[Message sequence among UE, BS-Fog1 (source), BS-Fog3 (target) and EPC: migration triggering; Migration Request; decision on the migration request at the target; Migration Request ACK; migration over the migration time (Dm), including the downtime; Path Switch Report; Resource Release; Path Switch Report ACK.]

Figure 4.5: The communication protocol to support QoS aware service migration. © 2019 IEEE (Paper C).

4.3 Performance evaluation


In this section, the performance of the proposed migration strategy is evaluated by using the
simulator that is described in Chapter 2. Here, the fog computing resources allocated for
migrated services are assumed to be sufficient. This can be realized by designing efficient
fog computing resource management schemes, which will be introduced in Chapter 5. We
consider a case study for a small service area by using the realistic mobility pattern for the
country of Luxembourg. Fig. 4.6 shows the Luxembourg scenario topology, where BS-Fog
entities are evenly distributed over the city. The parameters used in this simulation are
described in Table 4.1.



Figure 4.6: The scenario topology of Luxembourg. (Paper E).

Table 4.1: Simulation parameters. © 2019 IEEE (Paper C)

Parameter Value

Coverage of the country of Luxembourg 155 km2 [23]
Total number of vehicles 5500 [23]
Vehicle density 35.5 per km2
Bitrate of traffic generated by the vehicles [2 kbps, 10 Mbps] [5,6,47]
Size of the applications encapsulated in VMs [10, 100] Mbits
Link speed in upstream and downstream in PONs 10 Gbps
Number of simulation runs 10
Simulation time 1000 s
Coverage of a single BS-Fog 1 km2
Handover interruption time 20 ms [9]
Wireless delay 0.5 ms
Vehicle speed [1, 45] m/s [23]
Processing time at active node 0.2 ms [29]
Number of ONUs in each PON 16
Number of PONs 10
Confidence level 95%


The performance of the proposed scheme (named Scheme 3) is evaluated in comparison
with two benchmarks: no service migration (named Scheme 1) and service migration
triggered by handover (named Scheme 2). As discussed in Paper C, the handover
interruption time and the wireless delay cannot be ignored, but they are not affected by the
migration strategies. Therefore, it is assumed that the uplink delay in the wireless segment
is within 0.5 ms and that the handover interruption time is constant.

The average E2E latency for the three schemes is shown in Fig. 4.7. Here, the E2E latency
consists of the wireless access delay, the interruption time during the handover, the migration
time, the backhaul delay, and the processing and queuing delays at the BS-Fogs, as explained in
Chapter 2 and Paper C. It can be seen that the E2E latency in all three schemes decreases
as the backhaul transmission capacity (B) increases, especially for Scheme 1. This is because a
higher bitrate leads to a shorter packet transmission time; the packet queueing delay is thus
reduced, resulting in a smaller access latency. When B is high enough (e.g., B = 240 Mbps in Fig. 4.7), the
queueing delay becomes negligible. Meanwhile, in Scheme 2, the E2E latency is
mainly affected by the downtime (Dt), during which the ongoing services need to be
suspended.


Figure 4.7: Transmission capacity (B) in the backhaul versus average end-to-end latency.
© 2019 IEEE (Paper C).

In Scheme 3, the service migration is triggered once the latency exceeds the threshold
(e.g., 10 ms in Fig. 4.7). Scheme 3 can be considered a tradeoff between Scheme 1 and
Scheme 2. When B is low, Scheme 3 behaves similarly to Scheme 2, that is, frequent
migrations are performed in both schemes (see Fig. 4.8). As B increases, fewer and fewer
migrations are triggered in Scheme 3, so the E2E latency is less and less affected
by the downtime. Therefore, Scheme 3 performs similarly to Scheme 1 when B is high,
and performs better than Scheme 2 when the downtime is large (e.g., 0.3 s), as shown in Fig.
4.8.
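The three strategies differ only in when they migrate, which can be captured by a simple per-vehicle decision rule. The sketch and its function names are ours; the actual decision logic is detailed in Paper C.

```python
def should_migrate(e2e_latency_ms, threshold_ms=10.0):
    """Scheme 3: trigger a service migration only when the measured
    end-to-end latency violates the QoS threshold."""
    return e2e_latency_ms > threshold_ms

def decide(scheme, handover_occurred, e2e_latency_ms):
    """Migration decision under each of the three compared strategies."""
    if scheme == 1:                  # Scheme 1: never migrate
        return False
    if scheme == 2:                  # Scheme 2: migrate on every handover
        return handover_occurred
    return should_migrate(e2e_latency_ms)   # Scheme 3: QoS aware

# A handover with acceptable latency: only Scheme 2 migrates
assert decide(1, True, 6.0) is False
assert decide(2, True, 6.0) is True
assert decide(3, True, 6.0) is False
# Latency violation: Scheme 3 migrates as well
assert decide(3, False, 12.0) is True
```

This is exactly why Scheme 3 tracks Scheme 2 at low B (frequent threshold violations) and Scheme 1 at high B (few violations, few migrations).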


Figure 4.8: The average number of migration for a vehicle as a function of transmission capacity.
© 2019 IEEE (Paper C).

We also investigated the impact of reliability on the performance of the proposed
migration schemes. Reliability is defined as the probability that the E2E latency stays below a
maximum allowable latency level, which can be derived from the cumulative distribution
function (CDF) of the E2E latency. In the simulation, a packet is dropped if its E2E latency
exceeds the predefined maximum limit; therefore, the reliability is to some extent reflected
by the packet drop ratio. As shown in Fig. 4.9, among the three schemes,
Scheme 3 has the highest CDF when the maximum allowable latency is low (e.g., 5 ms),
while Scheme 1 achieves the lowest. However, when the maximum allowable
latency increases to 10 ms, Scheme 1 achieves the best performance in terms of CDF, while
Scheme 2 performs the worst; Scheme 3 is a tradeoff between these two schemes.
Furthermore, the CDF of all three schemes can be increased by increasing B and reducing the
downtime. The detailed explanations are given in Paper C.
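The reliability metric defined above is simply the empirical CDF of the E2E latency evaluated at the allowed maximum; a minimal sketch with illustrative latency samples:

```python
def reliability(latencies_ms, max_allowed_ms):
    """Fraction of packets whose E2E latency stays within the allowed
    maximum, i.e., the empirical CDF evaluated at max_allowed_ms.
    Packets above the limit would be dropped in the simulation."""
    if not latencies_ms:
        return 0.0
    ok = sum(1 for x in latencies_ms if x <= max_allowed_ms)
    return ok / len(latencies_ms)

samples = [0.8, 2.1, 4.9, 6.3, 9.7, 14.2]   # illustrative E2E latencies (ms)
assert reliability(samples, 5.0) == 0.5      # 3 of 6 packets within 5 ms
assert reliability(samples, 10.0) == 5 / 6   # one packet exceeds 10 ms
```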
In summary, the performance of the migration strategies mainly depends on the backhaul
delay and the downtime. To maintain the backhaul delay at the required level, PON based mobile
backhaul networks are further investigated in Chapter 6, where a bandwidth slicing mechanism is
proposed accordingly.



Figure 4.9: (a) Cumulative distribution function versus E2E latency with B = 200 Mbps, (b) cumulative
distribution function versus end-to-end latency with Dt = 0.1 s. © 2019 IEEE (Paper C).

4.4 Summary
In the context of FeCN, mobility is one of the most critical challenges for users to maintain
service continuity while satisfying stringent service requirements. Service
migration has been regarded as a promising solution to this mobility problem.
However, since service migration cannot be completed immediately, users may experience
loss of service access. To alleviate this issue, we propose a QoS-aware service migration
strategy based on the existing handover procedures to balance the benefits and costs of
migration. To investigate the performance of the proposed scheme, a case study is
implemented through simulation, where a realistic vehicle mobility pattern for the Luxembourg
scenario is used to reflect a real-world environment. The results reveal that, with the proposed
scheme, low E2E latency (e.g., 10 ms) for vehicular communication can be achieved, while
the total number of migrations for each user during its whole journey is decreased
significantly.
Our key idea is to minimize the migration frequency under the QoS requirements of the
affected services. In fact, when the decision on service migration is made, the expected QoS
requirements as well as other factors (e.g., network and computation conditions) also need to
be considered. Thus, smart service migration strategies, including the optimal selection of
target fog nodes and migration paths as well as the migration triggering mechanism, should be
investigated using advanced machine learning technologies (e.g., deep reinforcement
learning) in future work. On the other hand, after the migration decision is made, the
technique of live VM migration is used for transferring memory pages, during which the
services cannot be properly accessed. Thus, advanced live VM migration techniques also
need to be investigated to further reduce the negative effects on the migrated services.

Chapter 5

Distributed fog computing resource


management in FeCN
In this chapter, a distributed fog computing resource management scheme is presented in
order to guarantee sufficient computation resources for the migrated vehicular services, thereby
reducing the service migration blocking caused by the lack of computing resources.

5.1 Background and motivation


As mentioned in Chapter 4, fog computing can be integrated with the widely deployed
cellular networks, forming the concept of fog computing enabled cellular networks. Such an
architecture brings computational resources as close as possible to the end users, which
has great potential to satisfy the stringent latency requirements of real-time vehicular
services. However, several challenges remain before making it a reality, such as
user mobility and the limited coverage of a single BS-Fog. To solve these issues, real-time
services are expected to be migrated following the mobility of the vehicles, so that they can always
be accessed with a one-hop access delay. Here, one-hop access means that a vehicle can directly
access services at its associated BS-Fog.

During the service migration procedure, sufficient computation resources are needed to
host the migrated services. Moreover, provisioning resources for the migrated real-time services
needs to be completed as soon as possible to minimize the service interruption; otherwise,
the services have to stay at the source fog node, which may increase the access delay. On the
other hand, a single fog node has only a limited amount of computation resources, and
its load is highly bursty due to the mobility of the vehicles. Thus, it is important to employ an
efficient resource management scheme, especially in scenarios with fast mobility and
high load.
Paper D presents two distributed fog computing resource management schemes, namely
fog resource reservation (FRR) and fog resource reallocation (FRL). In FRR scheme, a
certain amount of computation resources for vehicular services in each fog node are

39
Chapter 5 Distributed fog computing resource management in FeCN

reserved based on the predicted vehicular traffic load. The performance of this scheme
depends on the traffic flow prediction method. Overestimation leads to low resource
utilization, while underestimation significantly decreases the one-hop access probability for
high-priority (HP) vehicular services (e.g., remote driving, pre-crash sensing warning). The
key idea of FRL is to release part of the fog resources used for low-priority (LP) services (e.g.,
online gaming, navigation, sign notification) and reallocate them to HP services. However, in
such a scheme, the one-hop access probability for LP services may be affected, especially
when the traffic load is high. In fact, not all LP services (e.g., online gaming) have a
one-hop latency requirement or need local awareness. Therefore, such services can be placed in
neighboring fog nodes with low load.
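The contrast between FRR and FRL can be captured by their admission rules. The sketch below is our own simplified illustration under a single-unit-type capacity model; the function names and arguments are assumptions, not the actual schemes from Paper D.

```python
def frr_admit(free_units, reserved_for_hp, demand, is_hp):
    """FRR sketch: LP services may only use the non-reserved part of the capacity,
    while HP services may also draw on the units reserved for them."""
    usable = free_units if is_hp else free_units - reserved_for_hp
    return usable >= demand

def frl_admit(free_units, lp_in_use, demand, is_hp):
    """FRL sketch: an HP service may additionally reclaim units currently held
    by LP services; LP services see only the free capacity."""
    usable = free_units + (lp_in_use if is_hp else 0)
    return usable >= demand
```

Under FRR, a poor traffic prediction directly shifts the `reserved_for_hp` value and hence the LP/HP admission balance; under FRL, HP admission instead depends on how many LP units can be reclaimed at that moment.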
Several existing works address cooperation among a cluster of fog or cloudlet
nodes to satisfy the QoS requirements of services [48-50]. In [48], the authors
proposed a new paradigm for vehicular networks to support data-heavy vehicular
applications, in which computation resource utilization is improved through inter-cooperation
between cloudlets. In [49], the authors proposed a load balancing algorithm for
cloudlets within wireless metropolitan area networks, in which tasks running in heavily
loaded cloudlets are dynamically reallocated to lightly loaded ones, thereby shortening the
average response time. In [50], the authors investigated a fog node cooperation strategy, in
which the workload allocation is optimized to maximize the user's quality-of-experience
under a given power efficiency.
However, the computation complexity or communication overhead required to achieve optimal
solutions in the above schemes is usually high, which leads to long processing times,
especially in a large network. Thus, these schemes are usually implemented in an offline
fashion. In addition, scalability is a further challenge for centralized offline
resource sharing schemes, particularly for vehicles that traverse a large area [51]. In this
regard, we propose a distributed fog computing resource management algorithm. Once a fog
node becomes overloaded and lacks sufficient resources for the incoming HP services, LP
services are migrated to its neighboring fog nodes and their resources are released for the
HP services. To minimize the negative effects on the affected LP services, two tailored
algorithms are investigated, namely migrating LP service selection and neighboring fog node
selection. As a case study, the realistic mobility pattern of the city of Luxembourg is
considered.

5.2 BS-Fog architecture


The architecture of a BS-Fog is shown in Fig. 5.1, in which the BS is responsible for
communication functions; for example, the handover mechanism manages user mobility across
the limited coverage areas of the BSs. Based on the existing handover procedure, a service
migration mechanism is also introduced, as mentioned in Chapter 4. The fog node provides
computational resources (e.g., computing and storage). In such an architecture, a BS-Fog can
cooperate with other BS-Fogs and the cloud to allocate tasks dynamically.

[Figure: BS-Fogs interconnected via the mobile backhaul; each BS-Fog comprises a management module, a fog server hosting HP services, and radio resources.]
Figure 5.1: BS-Fog architecture. (Paper E).

5.3 Distributed computing resource management


When a vehicle moves from one BS-Fog's coverage area to another, both the handover and the
migration of ongoing services need to be handled. As shown in Fig. 5.2, once the handover
is triggered, the source BS-Fog sends a Migration Request message to the target BS-Fog,
which includes the information on the requested resources. After receiving this message, the
target BS-Fog decides whether to accept the request according to the resource
management strategy.
In this chapter, we consider two scenarios for the resource management strategy. In
the first scenario, there are sufficient available resources in the target BS-Fog; the request is
approved and the corresponding resources are assigned to the migrated HP services (see
Scenario ① in the red dotted box in Fig. 5.2).
In the second scenario, the target BS-Fog does not have enough available resources. In
such a case, the target BS-Fog selects ongoing LP services to be migrated to its neighboring
BS-Fogs, and the resources of the selected LP services are released for HP services (see
Scenario ② in the blue dashed box in Fig. 5.2). This procedure can be divided into two
phases. In the first phase, the LP services to be migrated are selected according to service
selection strategies. The purpose of the second phase is to select one of the neighboring
BS-Fogs to host the migrated LP services: the target BS-Fog broadcasts a Migration Request
message to the neighboring BS-Fogs that have direct communication with it via the X2
interface. Once the neighboring BS-Fogs receive the Migration Request messages, they check
their available resources and then send a Migration Request ACK to the target BS-Fog. The
target BS-Fog then immediately decides which neighboring BS-Fog is selected. When the
selected LP services complete the migration and release their resources, the target BS-Fog
sends a Migration Request ACK to the source BS-Fog. If there are no suitable LP services or
neighboring BS-Fogs, the to-be-migrated HP service has to keep running at the source BS-Fog
and the vehicle has to access the service with more than one hop (i.e., the hop(s) between
two neighboring BS-Fogs have to be counted for access), which results in a higher access
delay. The selection strategies for LP services and neighboring BS-Fogs are discussed in the
following sections.
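The two scenarios can be condensed into a short decision routine at the target BS-Fog. The sketch below is our own illustration; the function name, the data layout, and the largest-first ordering of LP candidates are assumptions (Paper E's actual selection policies are the subject of Sections 5.3.1 and 5.3.2).

```python
def handle_migration_request(free_units, lp_services, neighbors, hp_demand):
    """Decide how a target BS-Fog handles an incoming HP migration request.

    free_units : unallocated computing units at the target BS-Fog
    lp_services: list of (service_id, units) for ongoing LP services
    neighbors  : neighboring BS-Fogs able to host displaced LP services
    hp_demand  : computing units requested by the migrating HP service
    """
    # Scenario 1: enough spare capacity, accept immediately.
    if free_units >= hp_demand:
        return "accept", []
    # Scenario 2: try to free capacity by pushing LP services to a neighbor.
    if not neighbors:
        return "reject", []  # HP service stays at the source BS-Fog
    freed, chosen = free_units, []
    for sid, units in sorted(lp_services, key=lambda s: -s[1]):
        if freed >= hp_demand:
            break
        chosen.append(sid)
        freed += units
    if freed >= hp_demand:
        return "accept", chosen
    return "reject", []
```

A "reject" outcome corresponds to the case in the text where the vehicle keeps accessing the HP service at the source BS-Fog with more than one hop.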

[Figure: service migration message sequence among the vehicle, source BS-Fog, target BS-Fog, and neighboring BS-Fogs, covering handover triggering, Migration Request/ACK exchanges, resource checking and assignment (Scenario ①), and LP service selection and migration to a neighboring BS-Fog (Scenario ②), after which the HP service is migrated and accessed at the target BS-Fog with one hop.]
Figure 5.2: Illustration of resource management in service migration procedure. (Paper E).

5.3.1 Low-priority service selection algorithm


In the proposed scheme, the selected LP services are migrated from the target BS-Fog to
the selected neighboring BS-Fog, where they are accessed via the backhaul network. In such a
procedure, a certain amount of backhaul bandwidth is needed for migrating and running the
selected LP services. To minimize the required bandwidth, we propose an LP service selection
algorithm that minimizes the communication cost. The amount of bandwidth required for each
LP service is first calculated. Then, the LP services whose allocated computing resources are
larger than the requested amount are identified, and among them the one with the lowest
communication cost is selected. If no single service satisfies the requirement alone, more
than one service can be selected. To avoid the ping-pong effect in LP service migration, LP
services are only allowed to be migrated once. The details are explained in Paper E.
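The selection steps above can be sketched as follows. This is a hedged reconstruction from the description in this section, not the exact algorithm of Paper E; the dictionary layout and cheapest-first combination order are our assumptions.

```python
def select_lp_services(lp_services, needed_units):
    """Pick LP services to migrate so that at least needed_units are freed.

    lp_services: list of dicts with keys id, units, comm_cost, migrated.
    """
    candidates = [s for s in lp_services if not s["migrated"]]  # migrate-once rule
    # A single service covering the demand: take the cheapest communication cost.
    single = [s for s in candidates if s["units"] >= needed_units]
    if single:
        return [min(single, key=lambda s: s["comm_cost"])["id"]]
    # Otherwise combine several services, cheapest communication cost first.
    chosen, freed = [], 0
    for s in sorted(candidates, key=lambda s: s["comm_cost"]):
        chosen.append(s["id"])
        freed += s["units"]
        if freed >= needed_units:
            return chosen
    return []  # cannot free enough resources
```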


5.3.2 Neighboring BS-Fog selection algorithm


Once the LP services to be migrated are decided, the neighboring BS-Fogs that will host them
need to be selected. After an LP service is migrated, it is hosted at the selected neighboring
BS-Fog and accessed within two hops, which incurs extra backhaul delay. As LP services are
only allowed to be migrated once, the access delay for a migrated LP service consists of the
radio access delay and the backhaul delay. From the delay requirement of the selected LP
service, a backhaul delay budget can be calculated: the transmission delay between the target
BS-Fog and its neighboring BS-Fog should be smaller than this budget. The key idea of the
proposed algorithm is to select the neighboring BS-Fog with the most available resources that
satisfies the backhaul delay requirement of the to-be-migrated LP service. The details are
explained in Paper E.
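The core of this selection reduces to a filter-then-maximize step, sketched below; the data layout is our own illustrative assumption.

```python
def select_neighbor(neighbors, delay_budget_ms):
    """Return the id of the feasible neighbor with the most free units, else None.

    neighbors: list of dicts with keys id, delay_ms (backhaul delay between the
    target BS-Fog and this neighbor) and free_units.
    """
    feasible = [n for n in neighbors if n["delay_ms"] <= delay_budget_ms]
    if not feasible:
        return None  # no neighbor satisfies the LP service's delay budget
    return max(feasible, key=lambda n: n["free_units"])["id"]
```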

5.4 Performance evaluation


In this section, the performance of the proposed scheme is investigated through the simulation
framework described in Chapter 2. The realistic mobility pattern of the city of Luxembourg is
used; the corresponding topology can be found in Fig. 4.6 in Chapter 4. Fig. 5.3 shows the
vehicular traffic profile in Luxembourg, which varies over the course of a day. The vehicular
traffic is also spatially diverse: inset (a) in Fig. 5.3 shows the number of vehicles in the
coverage area of each BS-Fog at 8:00 am, while inset (b) shows the numbers at 12:00 pm. The
Y-axis shows the number of vehicles, and the X-axis the index of the BS-Fog.

Figure 5.3: Vehicular traffic profile over a day. (Paper E).


Each vehicle is assumed to require exactly one HP service (i.e., a safety-related service). The
data traffic distribution is proportional to the vehicular traffic. Without loss of generality,
we assume that the total service request arrival rate for the BS-Fog network ranges from 20
to 100 requests per second. The arriving requests consist of 30% HP and 70% LP services. The
HP service arrival rate is distributed among the BS-Fogs according to the traffic profile at
8:00 am (see Fig. 5.3(a)), while the LP service arrival rate is distributed among the BS-Fogs
evenly. Both HP and LP services arrive according to a Poisson process. The parameters are
shown in Table 5.1.

Table 5.1: Simulation parameters. (Paper E).

Parameter Value

Total number of computing units in one fog 400
Number of BS-Fogs in the network 100
Number of computing units for each HP service 3
Number of computing units for each LP service [2,6]
Average serving time in one fog for HP service (second) 90
Standard deviation of the serving time in one fog for HP service (second) 10
Average serving time in one fog for LP service (second) 120
Budget of backhaul delay for LP services (millisecond) [5,10]
Data rate generated by end users (bps) [2K, 10M] [52]
Amount of data encapsulated in application VMs (Mbits) [10,100]
Downtime in live VM migration (millisecond) 20
Confidence level 95%

In this section, the performance of the proposed scheme is investigated in terms of the
one-hop access probability for HP services and the service unavailability for LP services.
Here, the one-hop access probability is defined as the ratio of the one-hop service access
duration to the total holding time. Similarly, the service unavailability is defined as the
ratio of the time during which a service is not available to the total holding time. Services
may be unavailable due to the lack of resources in the current BS-Fog and its neighbors, as
well as due to the interruption during service migration.
As discussed, to increase the one-hop access probability while reducing the service
unavailability, the migration time for both HP and LP services should be minimized, which is
related to the transmission time in the mobile backhaul network. Transmission capacity
becomes the main factor affecting the delay performance, as pointed out in Paper G. Here,
the backhaul capacity (B) refers to the bandwidth allocated to the X2 interface between two
BS-Fogs.
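Both metrics defined above are simple time ratios. The sketch below shows how they could be accumulated from simulation traces; representing the traces as interval lists is our own illustrative choice.

```python
def one_hop_access_probability(one_hop_intervals, holding_time):
    """Fraction of the holding time during which the service was one-hop accessible.

    one_hop_intervals: list of (start, end) times of one-hop access periods.
    """
    return sum(end - start for start, end in one_hop_intervals) / holding_time

def service_unavailability(unavailable_intervals, holding_time):
    """Fraction of the holding time during which the service was not available,
    e.g. due to lack of resources or interruption during migration."""
    return sum(end - start for start, end in unavailable_intervals) / holding_time
```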

[Figure: (a) one-hop access probability for HP services and (b) service unavailability for LP services versus service arrival rate, for B = 100/200 Mbps and N = 10/20.]

Figure 5.4: One-hop access probability and service unavailability versus service arrival rate.
(Paper E).

Figure 5.4(a) shows the one-hop access probability for HP services versus the service
arrival rate. As expected, a larger transmission capacity (B) leads to a higher one-hop access
probability, because the migration delay for both HP and LP services can be reduced by
enlarging the backhaul capacity; the one-hop access probability for HP services benefits from
this reduction in migration time. On the other hand, when B is large (e.g., B = 200 Mbps), the
one-hop access probability also benefits from a higher number of neighbors (N). However, when
B is small, increasing N has little impact on the one-hop access probability for HP services,
because the backhaul delay between the target and the neighboring BS-Fogs grows as the
transmission capacity decreases.

[Figure: (a) one-hop access probability for HP services and (b) service unavailability for LP services versus service arrival rate, comparing FCFS, FRR, and ORM with B = 100 Mbps and B = 200 Mbps.]

Figure 5.5: One-hop access probability and service unavailability versus service arrival rate.
(Paper E).

Consequently, the number of neighboring BS-Fogs that satisfy the latency requirement
decreases, even when N is high. Fig. 5.4(b) shows the LP service unavailability as a function
of the service arrival rate. Similarly, increasing the backhaul capacity leads to a lower
migration delay, and thus the service unavailability is also reduced.
We further compare the performance of the proposed scheme (named ORM) with two
benchmarks. The first benchmark is based on the first come, first served (FCFS) principle, in
which HP and LP services are treated equally. In the second benchmark, named FRR, a certain
amount of resources is reserved for HP services. Fig. 5.5(a) shows that the one-hop access
probability of HP services for ORM is higher than for the two benchmarks (FCFS and FRR) when
B is large (e.g., B = 200 Mbps). With B = 100 Mbps, the one-hop access probability for ORM is
higher than that for FCFS and FRR when the service arrival rate is below 60 arrivals per
second, but falls below that for FRR once the arrival rate exceeds 60 arrivals per second. The
reason is that more LP services need to be migrated to the neighboring BS-Fogs as the service
arrival rate increases, which results in high migration traffic and thus increases the
migration delay.
Figure 5.5(b) shows that the service unavailability increases with the service arrival rate.
Compared with the two benchmarks, the service unavailability for ORM is the lowest when
B = 200 Mbps, because a large B results in a shorter time for migrating LP services. However,
when B decreases to 100 Mbps, the service unavailability for ORM becomes higher: it is similar
to that for FCFS until the service arrival rate reaches 60 arrivals per second, but becomes
lower than FCFS as the arrival rate increases further. The reason is that, in FCFS, the
migration delay is mainly introduced by the waiting time for available resources, and a larger
arrival rate leads to a longer waiting time, thus degrading the performance of FCFS. Overall,
the migration delay is an important factor affecting the performance of the proposed ORM in
terms of one-hop access probability and service unavailability.

5.5 Summary
This chapter presents a distributed fog computing resource management scheme to support
service migration, in which real-time services are allocated resources with high priority,
especially when the traffic load is high. In this scheme, a communication-efficient LP service
selection algorithm and a delay-aware neighboring BS-Fog selection algorithm are proposed to
reduce the negative effects on the migrated LP services. Simulation results show that the
one-hop access probability for both HP and LP services can be significantly improved.
The principle of the proposed LP service selection algorithm is to minimize the
communication overhead caused by migration. On the other hand, some LP services are also
time-sensitive. As future work, another LP service selection algorithm will be designed that
considers both the communication cost and the QoS requirements of LP services.

Chapter 6

Bandwidth slicing in mobile backhaul networks

In this chapter, a novel bandwidth slicing mechanism in a PON based backhaul is presented
to balance the performance of transmitting migration traffic and non-migration traffic.

6.1 Background and motivation


In the context of FeCN, a service migration strategy and a fog computing resource
management scheme have been investigated in the previous chapters to support real-time
vehicular services. As discussed, the mobile backhaul capacity is a main factor affecting
the performance of the service migration schemes, and a PON based mobile backhaul network
can be considered to support FeCN due to its high capacity, as introduced in Chapter 3.
In a PON based mobile backhaul network supporting FeCN, the BS-Fogs are integrated
with ONUs through high-speed Ethernet interfaces, as shown in Fig. 6.1. The traffic generated
by service migration, named migration traffic, is transmitted together with the non-migration
traffic. On one hand, the size of the data generated by a service migration can be
up to hundreds of MBytes [52]; such migration traffic is fragmented into Ethernet frames at
the ONUs and should be carefully handled by the OLT located at the central office. On the
other hand, service migration is usually deadline-driven. We define the migration delay as
the time from the moment a migration is initiated until the affected service is successfully
transferred to the target fog node. To minimize the service interruption, the migration delay
should be lower than a pre-defined time threshold, which is usually in the magnitude of
seconds [53-55]. The deadline depends on the QoS requirements; for instance, the deadline for
safety-related vehicular applications is shorter than that for online video games. The
non-migration traffic includes data generated by multiple types of applications, which
usually have different QoS requirements in terms of, e.g., latency and packet loss ratio.
These different types of non-migration traffic can be queued independently and scheduled
with different priorities according to the medium access

[Figure: the OLT at the central office connects to the core network and cloud; via a splitter, it serves ONUs, each integrated with a BS-Fog; migration data and non-migration data share the PON. BS: base station; OLT: optical line terminal; ONU: optical network unit.]
Figure 6.1: PON based mobile backhaul for FeRAN. © 2019 IEEE (Paper G).

control (MAC) protocol in EPON [32, 33]. Likewise, the migration traffic can also be
queued according to its deadline requirements.
A PON based mobile backhaul can be enabled by TDM-PON, WDM-PON, or TWDM-PON,
as introduced in Chapter 3. Among them, TWDM-PON has been regarded as a
promising candidate for next generation PON 2 (NG-PON 2), in which dynamic bandwidth
allocation (DBA) mechanisms are performed on each wavelength for efficient channel
sharing [56]. In a classical DBA algorithm, migration traffic and non-migration traffic are
scheduled with no distinction. Each ONU reports to the OLT the amount of data that needs
to be transmitted in the next cycle and then receives a grant message. According to the
information contained in the grant message, including the allocated time slots and the
polling cycle for transmission, each ONU transmits data on an FCFS basis without
considering traffic priorities. As shown in Fig. 6.2, once a service migration occurs, a
large volume of migration data (M1,1) arrives and more than one polling cycle may be
needed for its transmission (see TM1,1 in Fig. 6.2). In such a case, the non-migration data
(N1,2) that arrives after the migration data has to wait before being transmitted, thus
experiencing a long queuing delay, leading to high latency and jitter (see TN1,2 in Fig. 6.2).

[Figure: report-grant timeline of the conventional DBA over several polling cycles; the large migration data M1,1 occupies parts of many cycles (duration TM1,1), so the non-migration data N1,2 is delayed until the last cycle (duration TN1,2).]

Figure 6.2: Illustration of conventional DBA. © 2019 IEEE (Paper G).

To address this problem, the non-migration traffic, including that arriving after the
migration traffic, can be assigned a higher priority based on the existing QoS management
mechanism in Ethernet PON, in which different traffic types have different priorities. In
such a case, the delay for the non-migration traffic (see TN1,2 in Fig. 6.3) can be reduced
significantly, whereas a short delay for transmitting the migration traffic may not be
guaranteed, particularly when the load of non-migration traffic is high (see TM1,1 in
Fig. 6.3).

[Figure: report-grant timeline of a DBA that prioritizes non-migration traffic; N1,2 is transmitted early (short TN1,2) while the migration data M1,1 is deferred over many polling cycles (long TM1,1).]

Figure 6.3: Illustration of DBA prioritizing non-migration traffic. © 2019 IEEE (Paper G).

6.2 Bandwidth slicing mechanism


In this section, we introduce the proposed bandwidth slicing scheme for service migration
in PON based mobile backhaul networks, which balances the performance of transmitting
migration traffic and non-migration traffic. In the proposed scheme, the cycle time is
dynamically cut into several slices, which are provisioned to different kinds of traffic
(i.e., migration traffic and non-migration traffic with different priorities). The mechanism
builds on the report-grant mechanism introduced in the previous section. In each polling
cycle, request messages are first sent from each ONU to the OLT, containing information
about their data sizes and delay requirements. Based on this information, the polling tables
for both non-migration and migration data are updated, as explained in Paper G. Once a
service migration occurs, the lengths of the slices for both migration and non-migration
traffic are calculated by the resource management controller located at the OLT, using the
bandwidth allocation algorithms and the information contained in the polling table.
As shown in Fig. 6.4, according to the considered DBA algorithms, the time slots in
Slice 1 and Slice 2 are allocated to the non-migration traffic and the migration data,
respectively. When there is no migration data to be transmitted, the proposed mechanism
behaves in the same way as the classical DBA mechanism. Note that the same principle
applies to both the upstream and the downstream.

[Figure: each polling cycle is divided into Slice 1 for non-migration data and Slice 2 for migration data. R: report message; G: grant message; Nj,k: k-th non-migration data in the j-th ONU; Mj,k: k-th migration data in the j-th ONU.]

Figure 6.4: Illustration of the proposed bandwidth slicing mechanism. © 2019 IEEE (Paper G).

6.3 Delay-aware resource allocation algorithm


In this section, a tailored delay-aware resource allocation algorithm is proposed to support
the bandwidth slicing mechanism. The algorithm is executed at the end of Slice 1 in each
polling cycle; its flow chart is shown in Fig. 6.5. First, the OLT acquires, from the Report
messages, the amount of non-migration traffic that the ONUs need to send in the next polling
cycle and its priorities. If new migrations occur, these messages also contain the sizes and
the migration deadlines of the migration data that the ONUs need to send. Then, the length of
the slice for the migration traffic (S^m) is calculated by the OLT. If there is migration
traffic to be transmitted in the next polling cycle (i.e., S^m > 0), the bandwidth slicing
mechanism is triggered and the polling cycle is divided into two slices, for non-migration
traffic (Slice 1) and migration traffic (Slice 2), respectively. As shown in Fig. 6.5,
Procedure 1 describes the resource allocation algorithm for Slice 1, while the resource
allocation algorithm for Slice 2 is depicted in Procedure 2. The two procedures are explained
as follows.
• Procedure 1: the time slots in Slice 1 are allocated to the ONUs for non-migration traffic
according to their requests, granted in order of priority level.

• Procedure 2: the time slots in Slice 2 are allocated to the migration traffic in ascending
order of the migration deadlines.
Note that if there is no migration traffic to be transmitted in the next polling cycle, only
Procedure 1 is performed, without the bandwidth slicing mechanism.

Figure 6.5: Flow chart of resource allocation algorithm.
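Procedures 1 and 2 differ essentially in their ordering rule (priority level versus deadline). A minimal sketch, with an illustrative data layout of our own choosing:

```python
def procedure1(nonmig_requests, slice1_len):
    """Procedure 1 sketch: grant Slice 1 to non-migration traffic in priority
    order (0 = highest priority), truncating at the slice boundary."""
    granted, used = [], 0
    for req in sorted(nonmig_requests, key=lambda r: r["priority"]):
        take = min(req["slots"], slice1_len - used)
        if take <= 0:
            break
        granted.append((req["onu"], take))
        used += take
    return granted

def procedure2(mig_requests, slice2_len):
    """Procedure 2 sketch: grant Slice 2 to migration traffic in ascending
    order of the migration deadlines."""
    granted, used = [], 0
    for req in sorted(mig_requests, key=lambda r: r["deadline"]):
        take = min(req["slots"], slice2_len - used)
        if take <= 0:
            break
        granted.append((req["onu"], take))
        used += take
    return granted
```

A migration request truncated at the slice boundary simply re-enters the polling table and continues in the following cycle, which is how the slicing spreads a large migration over multiple cycles.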

In the proposed algorithm, the calculation of the slice lengths in each polling cycle
plays a very important role. The key idea is to cut the large migration traffic into
small pieces, each consisting of multiple Ethernet frames, and to transmit them before the
required deadline. The symbols used in this section are explained in Table 6.1. In each
polling cycle (e.g., the i-th polling cycle), the required time slots G_i^{j,k} for the k-th
migration traffic from ONU_j can be calculated as

    G_i^{j,k} = R_i^{j,k} / ⌈ T_i^{j,k} / W_max ⌉ .    (6.1)

Table 6.1: Explanation of symbols.

Symbol      Explanation

R_i^{j,k}   Required time slots for the remaining part of the k-th migration traffic from ONU_j in the i-th polling cycle
D^{j,k}     Deadline for the k-th migration traffic sent by ONU_j
T_i^{j,k}   Remaining time for transmitting the k-th migration traffic sent by ONU_j, from the beginning of the i-th polling cycle to its deadline
K_j         Number of migration tasks in ONU_j
B^R         Transmission time for sending each report and grant message
B^G         Guard time (a fixed value) for the slice of the migration traffic in the i-th polling cycle
B_i^{j,l}   Length of the requested time slots for the non-migration traffic with the l-th priority in the i-th polling cycle
H_j         Number of priority levels for the non-migration traffic at ONU_j

In the proposed resource allocation algorithm, the length of the polling cycle (W) varies
dynamically with the traffic load. Thus, when calculating the required time slots in the
current polling cycle, the maximum polling cycle length (W_max) is used to guarantee that the
transmission of the whole migration traffic can be finished before the deadline. Here, the
time unit (µs) is used to represent the lengths of the time slots and polling cycles. The
total length of the time slots required for the migration traffic (TG_i) can then be
calculated as

    TG_i = ∑_{j=1}^{N} ∑_{k=1}^{K_j} G_i^{j,k} .    (6.2)

To guarantee fairness between the migration and non-migration traffic, the length of the
granted time slots cannot exceed the maximum allowed length of the time slots in this
polling cycle, calculated as

    R_i^m = (W_max − N × (B^R + B^G)) × θ ,    (6.3)

where the threshold θ ∈ [0,1] sets the maximum length of the time slots allocated to the
slice for the migration traffic. Thus, the time slots granted to the migration traffic in the
i-th polling cycle are

    TG_i = TG_i ,   if TG_i < R_i^m ;
           R_i^m ,  if TG_i ≥ R_i^m .    (6.4)

In the i-th polling cycle, the length of the slice for the migration traffic (S_i^m) equals
the total length of the granted time slots (TG_i). Then, for the non-migration traffic with
different priorities, the total length of the requested time slots (C_i^t) can be
calculated as

    C_i^t = ∑_{j=1}^{N} ∑_{l=1}^{H_j} B_i^{j,l} .    (6.5)

For the non-migration traffic, the maximum available time slots (C_i^a) are

    C_i^a = W_max − S_i^m − N × (B^R + B^G) .    (6.6)

The granted time slots are then

    C_i^t = C_i^t ,   if C_i^t < C_i^a ;
            C_i^a ,   if C_i^t ≥ C_i^a .    (6.7)

Similarly, in the i-th polling cycle, the length of the slice for the non-migration traffic
(S_i^n) equals the total length of the granted time slots (C_i^t).
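Eqs. (6.1)-(6.7) can be transcribed into a short routine. The sketch below is a minimal Python rendering under our reading of the equations; variable names mirror the symbols in Table 6.1, all lengths are in µs, and per-ONU bookkeeping is simplified to flat lists.

```python
import math

def migration_grant(R, T, W_max):
    """Eq. (6.1): slots granted per cycle to one migration task, so that the
    remaining R slots finish within the ceil(T / W_max) cycles left before
    its deadline."""
    return R / math.ceil(T / W_max)

def allocate_slices(mig_tasks, nonmig_requests, N, W_max, B_R, B_G, theta):
    """Return the slice lengths (S_m, S_n) for one polling cycle.

    mig_tasks       : list of (R, T) pairs, one per migration task
    nonmig_requests : requested slot lengths B_i^{j,l} over all ONUs/priorities
    """
    TG = sum(migration_grant(R, T, W_max) for R, T in mig_tasks)  # Eq. (6.2)
    R_m = (W_max - N * (B_R + B_G)) * theta                       # Eq. (6.3)
    S_m = min(TG, R_m)                                            # Eq. (6.4)
    C_t = sum(nonmig_requests)                                    # Eq. (6.5)
    C_a = W_max - S_m - N * (B_R + B_G)                           # Eq. (6.6)
    S_n = min(C_t, C_a)                                           # Eq. (6.7)
    return S_m, S_n
```

Note how the cap in Eq. (6.4) is what prevents the migration slice from monopolizing the cycle, which is exactly the effect of the threshold θ discussed in Section 6.4.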

6.4 Performance evaluation


In this section, the performance of the proposed algorithm is investigated with the simulator
described in Chapter 2; the parameters are listed in Table 6.2. The performance of the
proposed scheme is further compared with two benchmarks based on conventional DBA
algorithms [57]. In Benchmark 1, the migration and non-migration traffic follow FCFS, while
in Benchmark 2 a higher priority is given to the non-migration traffic. In both benchmarks,
the non-migration traffic is assumed to have two priority levels (i.e., low and high).

Table 6.2: Simulation parameters.

Parameter Value

Number of ONUs in a PON 8
Propagation delay in the optical links 5 µs/km
Packet size of Ethernet frame (bytes) [64,1518]
Guard time between two consecutive time slots 1 µs
Buffer size (Mbytes) 100
Amount of data encapsulated in application VMs (Mbytes) [10,50]
Deadline for the migration data (second) [1,5]
Confidence level 95%

55
Chapter 6 Bandwidth slicing in mobile backhaul networks

Simulation results in Paper G show that time slots may be monopolized by the
migration traffic in consecutive polling cycles when traffic load is high, if the threshold of
the time slots for migration traffic (θ) is not well set. For example, if θ is set to 1, 85% of
the time slots in the overall polling cycle are allocated to the migration traffic at load = 0.9.
As shown in Fig. 6.6, the migration success probability (MSP) decreases with traffic load for the two benchmarks and the proposed scheme (named DBS) with different thresholds. Here, MSP is defined as the ratio of the number of services migrated before the required deadline to the total number of migrated services. It can be seen that the MSP of all three schemes is very high at low traffic load (e.g., less than 0.4) and decreases with increasing load once the traffic load becomes high (e.g., higher than 0.4). Among the three schemes, Benchmark 1 performs the best. This is because, following the adopted FCFS principle, large-size migration traffic can be fully transmitted within several cycles once the migration starts. For example, its MSP is almost 1 when the traffic load is 0.9. In Benchmark 2, the transmission of non-migration traffic is prioritized; therefore, when the traffic load is high, the MSP for the migration traffic decreases sharply. For example, MSP is as low as 0.1 when the traffic load is 0.9. In the proposed scheme, all migration tasks can be completed within the time constraints when the traffic load is below 0.5. When the traffic load is higher than 0.6, the MSP performance is mainly determined by the threshold on the time slots that can be used for the migration traffic, and increases with the threshold. For example, with the threshold set to 0.5, MSP can be up to 0.98 at a load of 0.7.
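The role of the threshold θ can be illustrated with a minimal sketch: in each polling cycle, the slots granted to migration traffic are capped at a θ fraction of the cycle, and the remainder serves non-migration traffic. The function name and slot granularity below are illustrative assumptions, not the exact dynamic bandwidth allocation of Paper G.

```python
def allocate_slots(total_slots, migration_demand, theta):
    """Cap the share of time slots granted to migration traffic in one
    polling cycle at the threshold theta (0 < theta <= 1).

    Hypothetical helper illustrating the role of theta in the DBS scheme;
    names and units are simplified assumptions.
    """
    cap = int(total_slots * theta)            # upper bound for migration slots
    migration_slots = min(migration_demand, cap)
    non_migration_slots = total_slots - migration_slots
    return migration_slots, non_migration_slots

# With theta = 1 a heavy migration demand can monopolize the cycle
# (90 of 100 slots below); with theta = 0.5 at most half the slots
# go to migration data, leaving the rest for non-migration traffic.
print(allocate_slots(100, 90, 1.0))   # -> (90, 10)
print(allocate_slots(100, 90, 0.5))   # -> (50, 50)
```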

[Figure: migration success probability (y-axis, 0-1) versus traffic load (x-axis, 0.2-0.9) for Benchmark 1, Benchmark 2, and DBS with thresholds 0.3, 0.5, and 0.7.]

Figure 6.6: The migration success probability versus traffic load © 2019 IEEE (Paper G).

When the traffic load is low, the MSP of the three schemes is the same, while their migration delays may differ, as shown in Fig. 6.7. Here, the migration delay consists of the communication delay between the source and the target fog nodes and the processing time at the fog nodes. In our proposed scheme, the data processing at the fog nodes is not affected. In this regard, only the communication delay is considered in this simulation, which includes the upstream and downstream delays as well as the switching and buffering time at the OLT in the PON backhaul network. As shown in Fig. 6.7, Benchmark 1 achieves the lowest average migration delay among the three schemes under all loads. Compared to Benchmark 1, the average migration delay in Benchmark 2 remains slightly higher at low load and increases dramatically when the load is high. For DBS, the average migration delay is constant at 3 seconds, which equals the average value of the deadline requirements. These results further verify the effectiveness of the proposed scheme: the impact of migration on the latency of non-migration traffic can be reduced by partitioning the large-size migration traffic into smaller pieces with deadline constraints.
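The deadline-constrained partitioning idea can be sketched as follows: with k polling cycles left before a migration deadline, it suffices to send roughly a 1/k share of the remaining data in each cycle, which keeps every cycle open for non-migration traffic. The function name and interface below are hypothetical simplifications, not the notation of Paper G.

```python
import math

def per_cycle_quota(remaining_bytes, cycles_left):
    """Smallest amount of migration data that must be sent in the next
    polling cycle so that the remainder still fits before the deadline.

    Illustrative sketch of deadline-constrained partitioning; the exact
    scheduler in the proposed scheme may differ.
    """
    if cycles_left <= 0:
        return remaining_bytes                 # deadline reached: flush everything
    return math.ceil(remaining_bytes / cycles_left)

# 40 Mbytes with 4 cycles left -> at least 10 Mbytes per cycle, so the
# transfer meets its deadline without blocking an entire cycle.
print(per_cycle_quota(40, 4))   # -> 10
```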

[Figure: average migration delay in seconds (y-axis) versus traffic load (x-axis, 0.2-0.9) for Benchmark 1, Benchmark 2, and DBS with thresholds 0.3, 0.5, and 0.7.]

Figure 6.7: The average migration delay versus traffic load © 2019 IEEE (Paper G).

The average E2E latency for the non-migration traffic with high priority is shown in Fig. 6.8(a) as a function of load. It can be seen that the proposed scheme and the two benchmarks follow a similar trend: the average latency increases with traffic load. Among the three schemes, Benchmark 1 always has the highest average E2E latency, which can be up to 100 ms when the traffic load is 0.9. Such a large latency may not be acceptable for time-critical services (e.g., interactive voice). Compared with Benchmark 1, the average E2E latency in Benchmark 2 increases more slowly even when the traffic load is high. The reason is that the non-migration traffic in Benchmark 2 has high priority for transmission. Compared with Benchmark 1 and Benchmark 2, the average E2E latency for DBS is much lower: it is less than 1 ms when the traffic load is below 0.5, and remains below 10 ms even at high traffic load. This is due to the fact that the migration traffic can be transmitted over multiple cycles by partitioning large-size data into multiple smaller pieces, so that the non-migration traffic arriving before or during the transmission of migration traffic does not need to wait long for transmission. Furthermore, when the threshold increases, the number of time slots allocated to the non-migration traffic decreases, and thus the average E2E latency increases. As shown in Fig. 6.8(b), the jitter for the high-priority non-migration data follows a similar trend as the average E2E latency.

[Figure, two panels: (a) average E2E latency and (b) jitter for high-priority non-migration traffic in ms (y-axis, log scale) versus traffic load (x-axis, 0.2-0.9) for Benchmark 1, Benchmark 2, and DBS with thresholds 0.3, 0.5, and 0.7.]

Figure 6.8: (a) The average latency and (b) jitter for the non-migration data with high priority.
© 2019 IEEE (Paper G).

The average E2E latency of the low-priority non-migration data under different traffic loads is shown in Fig. 6.9(a). Benchmark 1 still has the highest average E2E latency for the low-priority non-migration data among the three schemes. Compared with Benchmark 1, the average E2E latency in Benchmark 2 is much lower. Meanwhile, when the traffic load is low (e.g., less than 0.6), the average latency of the low-priority non-migration traffic in DBS is the lowest, remaining below 2 ms. It increases quickly with larger thresholds and exceeds the level observed for Benchmark 2 when the traffic load is high. The reason is that the time slots are prioritized for the high-priority non-migration traffic and the migration traffic, so the low-priority non-migration traffic has to wait. When the traffic load is low, all traffic types can be assigned sufficient time slots. When the traffic load becomes larger, the average E2E latency for the low-priority non-migration traffic increases sharply because of its large queueing delay.
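The strict-priority ordering described above can be summarized in a small sketch: high-priority non-migration traffic is granted first, migration traffic next (subject to the threshold), and low-priority non-migration traffic receives only the leftover slots. This is a simplified model with assumed names, not the exact algorithm of Paper G.

```python
def grant_order(slots, hp_demand, mig_demand, lp_demand, theta):
    """Grant time slots in strict priority order: high-priority (HP)
    non-migration traffic first, then migration traffic capped at a
    theta fraction of the cycle, then low-priority (LP) traffic.

    Simplified sketch of the queueing behaviour discussed above.
    """
    hp = min(hp_demand, slots)                     # HP served first
    left = slots - hp
    mig = min(mig_demand, int(slots * theta), left)  # migration, capped
    left -= mig
    lp = min(lp_demand, left)                      # LP gets the leftover
    return hp, mig, lp

# At high load the LP queue receives only the remaining slots (none
# here), which explains its sharply growing queueing delay.
print(grant_order(100, 60, 50, 30, 0.5))   # -> (60, 40, 0)
```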

[Figure, two panels: (a) average E2E latency and (b) jitter for low-priority non-migration traffic in ms (y-axis, log scale) versus traffic load (x-axis, 0.2-0.9) for Benchmark 1, Benchmark 2, and DBS with thresholds 0.3, 0.5, and 0.7.]

Figure 6.9: The average (a) latency and (b) jitter for the non-migration data with low priority.
© 2019 IEEE (Paper G).


6.5 Summary
In this chapter, we present a novel bandwidth slicing mechanism for PON based backhaul networks, in which large-size migration data can be partitioned into small pieces and transmitted over multiple polling cycles. Simulation results indicate that the proposed scheme can handle the transmission of both migration and non-migration traffic while satisfying their QoS requirements.
As also noted, in the proposed bandwidth slicing mechanism for service migration, the latency of low-priority non-migration traffic may increase sharply when the traffic load is high. To solve this problem, joint bandwidth and wavelength allocation algorithms should be investigated to further reduce the latency for the low-priority non-migration traffic, which forms part of the future research.

Chapter 7

Conclusions and future work


This chapter summarizes the main contributions of this thesis and draws some concluding remarks. Then, we highlight a few directions for future work.

7.1 Conclusions
This thesis investigates how to realize ultra-low latency communications in 5G transport
networks to support time-critical services, especially for real-time vehicular services.
First, a novel PON based mobile backhaul architecture and its tailored communication protocols are designed to enhance the connectivity among adjacent BSs. In the proposed architecture, BSs belonging not only to the same PON but also to different PONs can communicate directly without the involvement of the central office. Simulation results show that an extremely low latency (less than 1 ms packet delay) for inter-BS communications can be achieved, which can be used to support fast handover for users with high mobility (e.g., vehicles).
In the context of FeCN, high user mobility is one of the most critical challenges for maintaining service continuity under stringent service requirements. Service migration has been regarded as a promising solution to the mobility problem. However, service migration cannot be completed instantaneously, which may lead to situations where users experience loss of service access.
To solve this issue, a QoS-aware service migration strategy based on the existing handover procedures is proposed to balance the benefits and costs of migration. A case study using a realistic vehicle mobility pattern for the Luxembourg scenario is carried out through simulation to evaluate the performance of the proposed schemes. Results show that a low end-to-end latency (e.g., 10 ms) for vehicular communication can be achieved, while the total number of migrations for each user over the whole journey is decreased significantly.


To support the migrated services, a distributed fog computing resource management scheme based on the LP service selection and neighboring BS-Fog selection algorithms is proposed, in which real-time services are allocated resources with high priority, especially when the traffic load is high. Meanwhile, the LP services can be migrated to other fog nodes with low load. Simulation results show that the one-hop access probability for HP services increases significantly, while the service unavailability for LP services can also be considerably reduced.
During service migration, both the traffic generated by migration and other traffic (e.g., control information, video) are transmitted via the mobile backhaul network. To balance the performance of the two kinds of traffic, we propose a delay-aware bandwidth slicing mechanism for PON based mobile backhaul networks. Simulation results show that migration data can be transmitted successfully within the required time threshold, while the latency and jitter requirements for non-migration traffic with different priorities can be well satisfied.

7.2 Future work


The proposed PON based mobile backhaul architecture enables ultra-low latency communications among BSs by connecting the splitters belonging to different PONs with fibers. On the other hand, deploying extra fibers among the splitters may increase the cost. In future work, we will investigate how to interconnect the splitters in a cost-efficient way under the latency requirements.
As discussed, service migration is a promising solution to keep the communication latency low for real-time services. Meanwhile, it may induce a short service interruption during the migration procedure as well as communication and computation overhead. That means there is a tradeoff between the benefit and the cost of migration. To balance this tradeoff, a QoS-aware service migration strategy based on the handover procedure is proposed, in which service migration is triggered once the QoS (e.g., latency) requirements cannot be satisfied. The key idea of this strategy is to minimize the migration frequency under the QoS requirements of the affected services. In fact, when the decision on service migration is made, the expected QoS requirements and other factors (e.g., network and computation conditions) also need to be considered. Thus, smart service migration strategies, including the optimal selection of target fog nodes and migration paths as well as the migration triggering mechanism, will be investigated, especially using advanced machine learning technologies (e.g., deep reinforcement learning). On the other hand, once the migration decision is made, the technique of live VM migration is used, during which the services cannot be properly accessed. Thus, advanced live VM migration techniques will be investigated to further reduce the negative effects on the migrated services.
In the proposed fog computing resource management scheme, computing resources are allocated to the real-time vehicular services with the highest priority. When the traffic load of the target BS-Fog is high, LP services have to be migrated to its neighboring BS-Fogs. In this thesis, a selection algorithm for LP services and a selection algorithm for the neighboring BS-Fogs to host the migrated LP services are presented. The principle of the proposed LP service selection algorithm is to minimize the communication overhead caused by migration. However, some LP services may also be time-sensitive. Designing another LP service selection algorithm that considers both the communication costs and the QoS requirements of LP services may be a target for a future study.
In the proposed bandwidth slicing mechanism for service migration, the latency of low-priority non-migration traffic may increase sharply when the traffic load is high. To solve this problem, joint bandwidth and wavelength allocation algorithms will be investigated to further reduce the latency for the low-priority non-migration traffic.

Summary of the original works

Paper A: J. Li, J. Chen, “Optical transport network architecture enabling ultra-low latency for communications among base stations,” IEEE/OSA Optical Fiber Communication Conference (OFC), Mar. 19-23, 2017.
In this paper, a novel optical transport network architecture is designed, which enables ultra-low latency (less than 1 ms) communications among base stations.
Contribution of the author: design of the network architecture, development of the simulator, analysis and presentation of results, preparation of the first draft and the updated version.

Paper B: J. Li, J. Chen, “Passive optical network based mobile backhaul enabling ultra-low latency for communications among base stations,” IEEE/OSA Journal of Optical Communications and Networking, vol. 9, issue 10, pp. 855-863, Oct. 2017.
This paper is the extension of Paper A, in which the designed network architecture, its tailored communication protocol and resource allocation algorithms are explained thoroughly. In addition, more detailed performance analysis results are included.
Contribution of the author: design of the network architecture, the communication protocol and the resource allocation algorithm, development of the simulator, analysis and presentation of results, preparation of the first draft and the updated version.

Paper C: J. Li, X. Shen, L. Chen, D. Van, J. Ou, L. Wosinska, and J. Chen, “Service
migration schemes in fog computing enabled cellular network to support
real-time vehicular services,” IEEE Access, vol. 7, pp. 13704-13714, Feb.,
2019.
This paper proposes a QoS-aware migration scheme based on existing handover procedures to support real-time vehicular services. A case study based on a realistic vehicle mobility pattern for Luxembourg is carried out, where three migration schemes are compared by analyzing the end-to-end latency profile and reliability.
Contribution of the author: design of the migration strategy, development of the simulator, analysis and presentation of results, preparation of the first draft and the updated version.

Paper D: J. Li, C. Natalino, D. Van, L. Wosinska, and J. Chen, “Resource management in fog enhanced radio access network to support real-time vehicular services,” IEEE International Conference on Fog and Edge Computing, Madrid, May, 2017.
This paper proposes two distributed fog computing resource management schemes to support the migration of real-time services.
Contribution of the author: design of the resource management algorithms, development of the simulator, analysis and presentation of results, preparation of the first draft and the updated version.

Paper E: J. Li, C. Natalino, X. Shen, L. Chen, J. Ou, L. Wosinska, and J. Chen, “Online resource management in fog enhanced cellular network for real-time vehicular services,” Applied Sciences. (Manuscript)
This paper proposes a third fog computing resource management scheme, which improves on the performance of the two schemes proposed in Paper D. Simulation results show that the one-hop access probability for both high-priority and low-priority services can be improved.
Contribution of the author: design of the resource management algorithms, development of the simulator, analysis and presentation of results, preparation of the first draft and the updated version.

Paper F: J. Li, L. Wosinska, and J. Chen, “Dynamic bandwidth slicing for service migration in passive optical network based mobile backhaul networks,” IEEE/OSA Asia Communications and Photonics Conference, Nov. 2017.
This paper proposes a bandwidth slicing mechanism in PON based backhaul to support service migration. Simulation results show that the performance of migration traffic and non-migration traffic can be balanced.
Contribution of the author: design of the bandwidth slicing mechanism, development of the simulator, analysis and presentation of results, preparation of the first draft and the updated version.

Paper G: J. Li, X. Shen, L. Chen, J. Ou, L. Wosinska, and J. Chen, “Delay-aware bandwidth slicing for service migration in mobile backhaul networks,” IEEE/OSA Journal of Optical Communications and Networking, vol. 11, issue 4, pp. B1-B9, Apr., 2019.
This paper is the extension of Paper F, in which the bandwidth slicing mechanism and its tailored resource allocation are explained thoroughly. In addition, more detailed performance analysis results are included.
Contribution of the author: design of the bandwidth slicing mechanism and the resource allocation algorithm, development of the simulator, analysis and presentation of results, preparation of the first draft and the updated version.

Bibliography
[1] 3GPP TS 22.261, “Service requirements for next generation new services and
markets,” V 15.5.0, Mar., 2019.
[2] I. Parvez, A. Rahmati, I. Guvenc, A. I. Sarwat, and H. Dai, “A Survey on Low
Latency Towards 5G: RAN, Core Network and Caching Solutions,” IEEE
Communications Surveys & Tutorials, vol. 20, no. 4, pp. 3098 – 3130, May, 2018.
[3] Nokia, “5G for Mission Critical Communication,” White Paper, 2016.
[4] M. Chiang, and T. Zhang, “Fog and IoT: An Overview of Research Opportunities,”
IEEE Internet of Things Journal, vol. 3, no. 6, pp. 854-864, Dec., 2016.
[5] 3GPP, TR 22.885, “Study on LTE Support for V2X Services”; V14.0.0, Dec. 2015.
[6] 3GPP, TR 22.886, “Study on Enhancement of 3GPP Support for 5G V2X”; v16.2.0,
Dec., 2018.
[7] A. Ashok, P. Steenkiste, and F. Bai, “Adaptive Cloud Offloading for Vehicular
Applications,” IEEE Vehicular Networking Conference (VNC), Columbus, OH, pp. 1-
8, 2016.
[8] M. Whaiduzzaman, M. Sookhak, A. Gani, and R. Buyya, “A Survey on Vehicular
Cloud Computing,” Journal of Network and Computer Application, vol. 40, pp. 325-
344, Apr., 2014.
[9] D. Han, S. Shin, H. Cho, J. Chung, D. Ok, and I. Hwang, “Measurement and
stochastic modeling of handover delay and interruption time of smartphone real-time
applications on LTE Networks,” IEEE Communications Magazine, vol. 53, no. 3, pp.
173-181, Mar., 2015.
[10] P. Iovanna, G. Bottari, F. Ponzini, and L. M. Contreras, “Latency-driven transport for
5G, ” IEEE/OSA Journal of Optical Communications and Networking, vol. 10, no. 8,
pp. 695-702, Aug., 2018.
[11] C. L. I, H. Li, J. Korhonen, J. Huang, and L. Han, “RAN Revolution With NGFI
(xhaul) for 5G," IEEE/OSA Journal of Lightwave Technology, vol. 36, no. 2, pp. 541-
550, Jan.15, 2018.
[12] G. Brown, “New transport network architecture for 5G RAN,” White paper, Sep.,
2018.
[13] J. Bartelt, N. Vucic, D. Camps-Mur, E. Garcia-Villegas, I. Demirkol, A. Fehske, M.
Grieger, A. Tzanakaki, J. Gutiérrez, E. Grass, G. Lyberopoulos, and G. Fettweis, “5G
transport network requirements for the next generation fronthaul interface,” EURASIP
Journal on Wireless Communication and Networking, Dec., 2017


[14] 3GPP, TR 25.912, “Feasibility Study for Evolved Universal Terrestrial Radio Access
(UTRA) and Universal Terrestrial Radio Access Network (UTRAN),” V 15.0.0, Jul.,
2018.
[15] D. Lee, H. Seo, B. Clerckx, E. Hardouin, D. Mazzarese, S. Nagata, and K. Sayana,
“Coordinated multipoint transmission and reception in LTE-advanced: deployment
scenarios and operational challenges,” IEEE Communications Magazine, vol. 50, no.
2, pp. 148-155, Feb., 2012.
[16] J. S. Wey, and J. Zhang, “Passive Optical Networks for 5G Transport: Technology
and Standards,” IEEE/OSA Journal of Lightwave Technology, Jul., 2018.
[17] M. Fiorani, P. Monti, B. Skubic, J. Martensson, L. Valcarenghi, P. Castoldi, and L.
Wosinska, “Challenges for 5G transport networks,” IEEE International Conference
on Advanced Networks and Telecommunications Systems (ANTS), 2014.
[18] T. Orphanoudakis, E. Kosmatos, J. Angelopoulos, and A. Stavdas, “Exploiting PONs
for Mobile Backhaul,” IEEE Communications Magazine, vol. 51, no. 2, pp. 27-34,
Feb., 2013.
[19] S. Wang, J. Xu, N. Zhang, and Y. Liu, “A Survey on Service Migration in Mobile
Edge Computing, ” IEEE Access, vol. 6, pp. 23511-23528, 2018.
[20] N. Lu, N. Cheng, N. Zhang, X. Shen, and J. W. Mark, “Connected Vehicles: Solutions
and Challenges,” IEEE Internet of Things Journal, vol. 1, no. 4, pp. 289-299, Aug.,
2014.
[21] B. Campolo, A. Molinaro, A. Iera, and F. Menichella, “5G Network Slicing for
Vehicle-to-Everything Services,” IEEE Wireless Communications, vol. 24, no. 6, pp.
38-45, Dec., 2017.
[22] P. Popovski et al., “Wireless Access for Ultra-Reliable Low-Latency Communication:
Principles and Building Blocks,” IEEE Network, vol. 32, no. 2, pp. 16-23, March-
April, 2018.
[23] L. Codeca, R. Frank, and T. Engel, “Luxembourg SUMO Traffic (LuST) Scenario: 24
hours of mobility for vehicular networking research,” IEEE Vehicular Networking
Conference (VNC), Kyoto, 2015, pp. 1-8.
[24] F. Effenberger, D. Cleary, O. Haran, G. Kramer, R. D. Li, M. Oron, and T. Pfeiffer,
“An introduction to PON technologies,” IEEE Communications Magazine, vol. 45,
no. 3, pp. S17-S25, Mar., 2007.
[25] K. Grobe, and J. Elbers, “PON in adolescence: from TDMA to WDM-PON,” IEEE
Communications Magazine, vol. 46, no. 1, pp. 26-34, Jan. 2008.
[26] Y. Luo, X. Zhou, F. Effenberger, X. Yan, G. Peng, Y. Qian, and Y. Ma, “Time- and
wavelength-division multiplexed passive optical network (TWDM-PON) for next-
generation PON stage 2 (NG-PON2),” IEEE Journal of Lightwave Technology, vol.
31, no. 4, pp. 587-593, Feb., 2013.
[27] E. Mallette, “Mobile backhaul for PON: a case study,” IEEE/OSA Optical Fiber
Communications Conference and Exhibition (OFC), Mar. 2012.
[28] 3GPP, TS 36.423 v9.4.0, “X2 application protocol (X2AP),” Sep. 2010.
[29] X. Hu, C. Ye, and K. Zhang, “Converged mobile fronthaul and passive optical
network based on hybrid analog-digital transmission scheme,” IEEE/OSA Optical
Fiber Communications Conference and Exhibition (OFC), Mar. 2016.
[30] T. Pfeiffer, “Converged heterogeneous optical metro-access networks,” IEEE/OSA
European Conference on Optical communication (ECOC), Sep. 2010.
[31] C. Choi, Q. Wei, T. Biermann, and L. Scalia, “Mobile WDM backhaul access
network with physical inter-base-station links for coordinated multipoint
transmission/reception system,” IEEE Global Telecommunications Conference
(GLOBECOM), Dec. 2011.
[32] G. Kramer, Ethernet Passive Optical Networks. New York: McGraw-Hill, 2005.
[33] “10 Gb/s Ethernet passive optical network,” IEEE P802.3av Task Force [Online].
Available: http://www.ieee802.org/3/av/index.html, 2009.
[34] K. Taguchi, K. Asaka, M. Fujiwara, S. Kaneko, T. Yoshida, Y. Fujita, H. Iwamura, M.
Kashima, S. Furusawa, M. Sarashina, H. Tamai, A. Suzuki, T. Mukojima, S. Kimura,
K. Suzuki, and A. Otaka, “Field trial of long-reach and high-splitting λ- tunable
TWDM-PON,” IEEE Journal of Lightwave Technology, vol. 34, no. 1, pp. 213-221,
Jan. 2016.
[35] I. Widjaja and H. L. Roche, “Sizing X2 bandwidth for inter-connected eNBs,” VTC,
Anchorage, AK, Sep. 2009.
[36] ICT-317669 METIS Project, “Updated scenarios, requirements and KPIs for 5G
mobile and wireless system with recommendations for future investigations,” Del.
D1.5, Apr. 2015.
[37] NGMN Alliance, “5G white paper,” White Paper, Feb. 2015. Available:
https://www.ngmn.org/fileadmin/ngmn/content/images/news/ngmn_news/NGMN_5
G_White_Paper_V1_0.pdf
[38] Y. Y. Shih, W. H. Chung, A. C. Pang, T. C. Chiu, and H. Y. Wei, “Enabling Low-
Latency Applications in Fog-Radio Access Networks,” IEEE Network, vol. 31, no. 1,
pp. 52-58, Jan., 2017.
[39] Y. Ku et al., “5G Radio Access Network Design with the Fog Paradigm: Confluence
of Communications and Computing,” IEEE Communications Magazine, vol. 55, no.
4, pp. 46-52, Apr., 2017.


[40] Y. Yu, T. Chiu, A. Pang, M. Chen, and J. Liu, “Virtual machine placement for
backhaul traffic minimization in fog radio access networks,” IEEE International
Conference on Communications (ICC), Paris, 2017, pp. 1-7.
[41] L. F. Bittencourt, M. M. Lopes, I. Petri, and O. F. Rana, “Towards Virtual Machine
Migration in Fog Computing,” IEEE International Conference on P2P, Parallel,
Grid, Cloud and Internet Computing (3PGCIC), pp. 1-8, Krakow, 2015.
[42] A. Machen, S. Wang, K. K. Leung, B. J. Ko, and T. Salonidis, “Live Service
Migration in Mobile Edge Clouds,” IEEE Wireless Communications, vol. 25, no. 1,
pp. 140-147, Feb., 2018.
[43] K. Ha, Y. Abe, Z. Chen, W. Hu, B. Amos, and P. Pillai., “Adaptive VM handoff
across cloudlets,” School Comput. Sci., Carnegie Mellon Univ., Pittsburgh, PA,
USA, Tech. Rep. CMU-CS15-113, 2015.
[44] T. Taleb, A. Ksentini, and P. Frangoudis, “Follow-Me Cloud: When Cloud Services
Follow Mobile Users,” IEEE Transactions on Cloud Computing, Feb. 2016.
[45] R. Urgaonkar, S. Wang, T. He, M. Zafer, K. Chan, and K. K. Leung, “Dynamic
Service Migration and Workload Scheduling in Edge-clouds,” ACM Journal of
Performance Evaluation, Sep., 2015.
[46] S. Wang, R. Urgaonkar, T. He, K. Chan, M. Zafer, and K. K. Leung, “Dynamic
Service Placement for Mobile Micro-Clouds with Predicted Future Costs,” IEEE
Transactions on Parallel and Distributed Systems, vol. 28, no. 4, pp. 1002-1016, 1
Apr., 2017.
[47] 3GPP, TR 23.786, “Study on Architecture Enhancement for EPS and 5G System to
Support for Advanced V2X Services,” V. 16.0.0, Mar., 2019.
[48] R. Yu, J. Ding, X. Huang, M. Zhou, S. Gjessing, and Y. Zhang, “Optimal resource
sharing in 5G-enabled vehicular network: A matrix game approach,” IEEE
Transaction on Vehicular Technology, vol. 65, no. 10, pp. 7844-7856, Oct., 2016.
[49] M. Jia, W. Liang, Z. Xu, and M. Huang, “Cloudlet load balance in wireless
metropolitan area Network,” IEEE Conference on Computer Communications
(INFOCOM), Apr., 2016.
[50] Y. Xiao, and M. Krunz, “QoS and power efficiency tradeoff for fog computing
networks with fog node cooperation,” IEEE Conference on Computer
Communications (INFOCOM), Apr., 2017.
[51] W. Zhang, Z. Zhang, and H. Chao, “Cooperative fog computing for dealing big data
in the internet of vehicles: architecture and hierarchical resource management,” IEEE
Communication Magazine, vol. 55, no. 12, pp.60-67, Dec., 2017.


[52] R. Morabito, V. Cozzolino, A. Y. Ding, N. Beijar, and J. Ott, “Consolidate IoT Edge
Computing with Lightweight Virtualization, ”IEEE Network, vol. 32, no. 1, pp. 102-
111, Feb., 2018.
[53] N. Yamanaka, S. Okamoto, M. Hirono, Y. Imakiire, W. Muro, T. Sato, E. Oki, A.
Fumagalli, and M. Veeraraghavan, “Application-Triggered Automatic Distributed
Cloud/Network Resource Coordination by Optically Networked Inter/Intra Data
Center,” IEEE/OSA Journal of Optical Communications and Networking, vol.10,
B15-B24, 2018.
[54] H. Zhang, K. Chen, W. Bai, D. Han, C. Tian, H. Wang, H. Guan, and M. Zhang,
“Guaranteeing Deadlines for Inter-Data Center Transfers,” IEEE/ACM Transactions
on Networking, vol. 25, no. 1, pp. 579-595, Feb., 2017.
[55] W. Kmiecik, and K. Walkowiak, “Dynamic Overlay Multicasting for Deadline-driven
Requests Provisioning in Elastic Optical Networks,” IEEE Conference on
Innovations in Clouds, Internet and Networks (ICIN), pp. 31-35, Paris, 2017.
[56] A. Dixit, B. Lannoo, D. Colle, M. Pickavet, and P. Demeester, “Energy Efficient
DBA Algorithms for TWDM-PONs,” IEEE International Conference on Transparent
Optical Networks (ICTON), pp. 1-5, Budapest, 2015.
[57] G. Kramer, B. Mukherjee, and G. Pesavento, “IPACT: a Dynamic Protocol for an
Ethernet PON (EPON),” IEEE Communications Magazine, vol. 40, no. 2, pp. 74-80,
Feb., 2002.

Paper A

Optical transport network architecture enabling ultra-low latency for communications among base stations
Jun Li, Jiajia Chen

Published in

IEEE/OSA Optical Fiber Communication Conference (OFC), March 19-23,


2017.

© 2017 IEEE
Paper B

Passive optical network based mobile backhaul enabling ultra-low latency for communications among base stations
Jun Li, Jiajia Chen

Published in

IEEE/OSA J. Opt. Commun. Netw., vol. 9, issue 10, pp. 855-863, Oct. 2017.

© 2017 IEEE
Paper C

Service migration schemes in fog computing enabled cellular network to support real-time vehicular services
Jun Li, Xiaoman Shen, Lei Chen, Dung Pham Van, Jiannan Ou,
Lena Wosinska, Jiajia Chen

Published in

IEEE Access, vol. 7, pp. 13704-13714, Jan., 2019.

© 2019 IEEE
Paper D

Resource management in fog enhanced radio access network to support real-time vehicular services
Jun Li, Carlos Natalino, Dung Pham Van, Lena Wosinska, Jiajia
Chen

Published in

IEEE International Conference on Fog and Edge Computing, Madrid, May, 2017

© 2017 IEEE
Paper E

Online resource management in fog enhanced radio access network to support real-time vehicular services

Jun Li, Carlos Natalino, Xiaoman Shen, Lei Chen, Jiannan Ou,
Lena Wosinska, Jiajia Chen

Applied Sciences (Manuscript), 2019


Paper F

Dynamic bandwidth slicing for service migration in passive optical network based mobile backhaul

Jun Li, Lena Wosinska, Jiajia Chen

Published in
IEEE/OSA Asia Communications and Photonics Conference, Nov. 2017

© 2017 IEEE
Paper G

Delay-aware bandwidth slicing for service migration in mobile backhaul networks

Jun Li, Xiaoman Shen, Lei Chen, Jiannan Ou, Lena Wosinska,
Jiajia Chen

Published in
IEEE/OSA J. Opt. Commun. Netw., vol. 11, issue 4, pp. B1-B9, Apr. 2019.

© 2019 IEEE
