Cloudlets Activation Scheme For Scalable Mobile
IEEE TRANSACTIONS ON COMPUTERS, VOL. ??, NO. ?, ? 201? 1
Abstract—Mobile devices have several restrictions due to design choices that guarantee their mobility. A way of surpassing such
limitations is to utilize cloud servers called cloudlets on the edge of the network through Mobile Edge Computing. However, as the
number of clients and devices grows, the service must also increase its scalability in order to guarantee a latency limit and quality
threshold. This can be achieved by deploying and activating more cloudlets, but this solution is expensive due to the cost of the
physical servers. The best choice is to optimize the resources of the cloudlets through an intelligent choice of configuration that lowers
delay and raises scalability. Thus, in this paper we propose an algorithm that utilizes Virtual Machine Migration and Transmission Power
Control, together with a mathematical model of delay in Mobile Edge Computing and a heuristic algorithm called Particle Swarm
Optimization, to balance the workload between cloudlets and consequently maximize cost-effectiveness. Our proposal is the first to simultaneously consider communication, computation, and migration at our assumed scale and, as a result, outperforms other conventional methods in terms of the number of serviced users.
Index Terms—Mobile Edge Computing, cloudlet, scalability, resource management, virtualization, Transmission Power Control,
Particle Swarm Optimization.
1 INTRODUCTION
0018-9340 (c) 2018 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/TC.2018.2818144, IEEE
Transactions on Computers
the Backhaul Delay comes into effect whenever cloudlets need to communicate amongst each other (e.g., when a user sends the job to a cloudlet that does not host the VM server but offers a better communication environment); this exchange takes place in a backhaul physical link that connects all cloudlets, and the delay is the measure of how long such communication takes to be executed. It is clear that Service Delay is the sum of these three classes of delay. Moreover, because of the need to maintain transparency, it is important for MEC to provide a guaranteed threshold of Service Delay, where users (or even service providers) can expect their jobs to finish under that time limit. This ensures not only a maximum latency but also a minimum Quality of Service (QoS). This is especially true for real-time applications [4].

The main obstacle to this requirement is the dynamic scenarios where MEC must work. For example, MEC is very attractive in situations where a high number of clients is expected, such as sports events or academic conferences [5]. With MEC, all attendees of such events would have access to powerful cloud computing resources on their mobile devices. However, this type of benefit is accompanied by a need for scalability, because if the cloudlets are overloaded with requests, Service Delay and QoS will break their required thresholds and service transparency will be lifted. Furthermore, not only does the total number of users have to be considered, it is also necessary to consider user movement and congregation. There may be moments where the majority of users move near a fraction of the cloudlets and consequently connect to them, thus overworking those servers while the remaining cloudlets have idle resources [6]. A straightforward solution to both absolute and local scalability is to activate additional cloudlets. However, there are clear problems with this in the form of the sheer economic cost that it entails and the time necessary to identify the problem and activate a new physical server. Thus, we conclude that the best option would be to reconfigure the existing cloudlets and balance the workload between them so that Service Delay can be controlled and minimized (or at the very least kept under the desired threshold) without the need of any additional server activation. Indeed, this solution is preferred in the literature [7].

Therefore, in this paper, we will utilize an analytical model of the Service Delay in MEC (which encompasses Transmission Delay, Processing Delay, and Backhaul Delay) together with Particle Swarm Optimization (PSO, an Artificial Intelligence algorithm [8]) to find the configuration that best controls all three elements of Service Delay, minimizes latency, and, most importantly, maximizes scalability in the form of how many users the system can handle without violating Service Delay/QoS restrictions. To realize and enable the configuration calculated by our PSO algorithm, we utilize Transmission Power Control [9] and VM Migration [10] to manage the workload and parameters of the cloudlets.

2 RELATED WORKS ON IMPROVING SCALABILITY

As mentioned before, Service Delay (the time between the user producing the task (input) and receiving the corresponding results (output)) is divided into Transmission Delay, Processing Delay, and Backhaul Delay. Because all three components occur in different independent mediums (i.e., the wireless medium between the user and the connected cloudlet, inside the cloudlet that hosts the VM server, and the backhaul link connecting all physical cloudlets), a simple course of action is to deal with them separately by focusing on a single component and trying to more intelligently utilize the resources related to it. This simultaneously lowers the respective delay and increases the respective scalability (i.e., the number of users that can be serviced under a determined latency threshold without activating new cloudlets). Indeed, this strategy of focusing on a single component of the latency can be seen in the literature.

Because all VM servers in the same cloudlet have to share the same resource pool, Processing Delay tends to grow along with the number of VMs contending for such resources (an effect called multi-tenancy [11], [12]). Thus, if efforts are made to avoid cloudlets hosting more VM servers than necessary, it should be effective in managing Processing Delay and improving computation scalability. As a result, balancing the amount of VM servers hosted by each cloudlet to intelligently use computation resources is often seen in the literature. Roy et al. [13], and Oueis et al. [14] balanced the workload by always setting up the VM servers of new incoming users in the cloudlet with the least amount of VMs; this guarantees that the difference between the amount of hosted VM servers among the cloudlets is always minimal. Gkatzikis and Koutsopoulos [15], and Mishra et al. [16] achieve the same goal by using VM Migration to move VMs between cloudlets. If the migration is done from overworked cloudlets to ones hosting few VMs, then this would improve the efficiency of cloudlet usage by avoiding situations where cloudlets are idle with wasted capacity. Jia et al. [17] offer a different insight on VM Migration. The authors point out that too many migrations might be detrimental to Service Delay, even if Processing Delay is lowered, because of the time it takes to perform the migrations themselves and the corresponding usage of the backhaul link (migrations often lead to users needing to send tasks to cloudlets they are not directly connected to). Farrugia [18], and Zhu et al. [19] propose a different approach to lowering Processing Delay. Each task would be divided into smaller, independent sub-tasks that would be individually assigned to the cloudlets. Such sub-task assignments take into consideration the estimated completion time before a cloudlet is selected to send the task to, therefore avoiding physical servers that are already too busy and would take longer to finish processing.

Analogous to the approaches focusing on improving computation resource efficiency, there are works in the literature that focus on more intelligently utilizing the resources related to the communication between the user and the connected cloudlet. Because such communication occurs in the wireless medium, these approaches turn to improving metrics related to this channel, such as Signal to Interference Plus Noise Ratio (SINR), Received Signal Strength (RSS) and throughput. Li and Wang [20], and Suto et al. [21] propose to always connect users to the cloudlet that is closest to them. In scenarios with homogeneous cloudlets (at least in the communication sense), this would mean that the closest cloudlet offers the highest RSS and
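The least-loaded placement rule attributed above to Roy et al. [13] and Oueis et al. [14] can be sketched as follows. This is an illustrative sketch, not code from the cited works; the list-of-counts representation and function name are our own assumptions.

```python
def place_vm_least_loaded(vm_counts):
    """Host a new user's VM server on the cloudlet that currently holds
    the fewest VM servers, so the spread between cloudlet loads stays
    minimal (at most one VM between the most and least loaded)."""
    target = min(range(len(vm_counts)), key=lambda i: vm_counts[i])
    vm_counts[target] += 1
    return target

# Example: three cloudlets hosting 3, 1 and 2 VM servers;
# the new VM lands on cloudlet 1, the least loaded.
counts = [3, 1, 2]
chosen = place_vm_least_loaded(counts)
```

Repeated application keeps the difference between the most and least loaded cloudlet at one VM or less, which is exactly the balancing invariant described above.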
mobile and can roam from these positions to any point inside of A. Furthermore, to symbolize a growing demand, we have that at timeslot t_k, α(X, t_k) new users appear in point X, where α(X, t_k) is our spawning function. We assume this infinitely increasing workload so we can test our proposal and other methods under extreme stress conditions; if they behave well in this scenario, they are able to handle simpler cases as well [38]. There are |C| cloudlets in the scenario, which all belong to set C; we denote them by c_i, with 0 ≤ i < |C|.

Users have a physical and a virtual association [15]. The physical association determines with which cloudlet they communicate through a direct wireless connection. The user will send their tasks to and receive output from this cloudlet. The virtual association determines which cloudlet will host the VM server associated with the user, which will actually execute the tasks created and produce the output. If a user is physically associated to cloudlet c_i and virtually associated to cloudlet c_j, then c_i and c_j will communicate through a backhaul link to exchange the task (received at c_i from the user) and the output (produced by c_j and then sent to the user by c_i). We assume there is a switch in the backhaul that connects all cloudlets, and each cloudlet has a physical link connected to this switch [39].

Physical association is decided by RSS: clients always associate with the cloudlet that currently offers them the highest RSS value. This means that area A will be divided following a Voronoi diagram between the cloudlets. However, unlike conventional Voronoi diagrams, the subareas are decided based on RSS instead of only Euclidean distance. Therefore, area A is divided into |C| subareas A_{c_i}, where all users inside A_{c_i} are associated with and will send tasks to cloudlet c_i. If a user leaves A_{c_i} and goes into A_{c_j}, then its physical association is changed from c_i to c_j. The virtual association is determined by the initial position of the user only, i.e., the user will virtually associate with the cloudlet that offers the highest RSS at the timeslot the user shows up in the system. Even if the user moves, its virtual association (the host of its VM server) will not change.

As mentioned, λ(X, t_k) defines how many total users are in X in timeslot t_k. All these users are physically associated to c_i if X ∈ A_{c_i}, but λ(X, t_k) tells us nothing about their virtual association. Therefore, let us define a new density function λ_{c_i}(X, t_k) that represents the total number of users virtually associated to cloudlet c_i that are located in X at timeslot t_k. Note how λ_{c_i}(X, t_k) ≤ λ(X, t_k). Fig. 1 illustrates the difference between those two values. Here, A_{c_α} is light gray and A_{c_β} is dark gray. In the light gray area, ∫∫ λ(X, t_k) dX is two because there are two users in that region, the red user and the yellow user, who are represented by circles. Also in the light gray area, ∫∫ λ_{c_α}(X, t_k) dX is one instead of two because, of the two users in that area, only one user (the yellow one) has its VM server (represented by the triangle) hosted by c_α. Analogously, in the dark gray area, ∫∫ λ_{c_α}(X, t_k) dX is two because two users in that area (the purple and orange ones) have their VM servers hosted by c_α; these users are not in the light gray area, i.e., not physically associated to c_α, so they will use the backhaul link and c_β as a relay to reach their VM servers.

If Service Delay increases too much (for example, because of an overload on the backhaul caused by mobility) and goes beyond the acceptable limit, the system enters a Configuration Phase. For example, in our proposal, this means the transmission power levels are changed to create a more efficient cloudlet configuration. Physical and virtual associations are reset to the new resulting RSS Voronoi Diagram, which may lead to VMs being migrated (our proposal is explained more in depth in the next section).

For this paper, we will denote the initial moment as t_0, which represents the initial moment of any cloudlet configuration. This can mean either the beginning of the service, or after Virtual Machines are migrated, or after a new cloudlet is activated. Basically, t_0 means the timeslot after a new configuration of transmission power levels is set (with the possibility of VMs having been migrated) and the virtual association of the users follows the RSS Voronoi Diagram. As a final note, it is important to say that some variables here are calculated at each timeslot while others are calculated only once, considering the initial user distribution. This was chosen for simplicity, because calculating every single variable for all timeslots would be too complex. The selection of which variables would be considered was based on how susceptible they are to changes in the number of users. For example, variables related to queues vary exponentially with the number of users and arrival rate. This variation is also relevant to the migration of users.

3.2 User Mobility

We assume users move as follows [40]: the time dimension is divided into timeslots with equal length T; at each timeslot t_k, each user chooses a random direction and moves towards it with speed dictated by a probability density function ζ() (maximum speed is v). Each direction has an equal chance of being selected, so the probability that a particular direction is picked is 1/(2π).

Therefore, the expected value of λ(X, t_k) (and λ_{c_i}(X, t_k)) is determined by how many users end in point X at the end of timeslot t_k. To calculate such a number, we utilize a double integral using the number of users at the previous timeslot, the odds of those users picking the direction of X, and the odds of the users picking the speed that will land exactly into X at the end of the timeslot, over the area dictated by Ω(X), which is a circle with radius Tv centered in X. Additionally, we must also consider the users that spawned in the point in this timeslot (α(X, t_k)). Here, r_X^Y is the Euclidean distance between X and Y, and α_{c_i}(X, t_k) is analogous to α(X, t_k), but considers the spawning of users that are virtually associated to cloudlet c_i only (α_{c_i}(X, t_k) is equal to α(X, t_k) if X is inside the association area of c_i, i.e., the area where c_i offers the highest RSS among all cloudlets, denoted by A_{c_i}, and it is zero otherwise).

E[λ(X, t_k)] = α(X, t_k) + ∫∫_{Ω(X)} (1/(2π)) ζ(r_X^Y / T) λ(Y, t_{k−1}) dY    (1)

E[λ_{c_i}(X, t_k)] = α_{c_i}(X, t_k) + ∫∫_{Ω(X)} (1/(2π)) ζ(r_X^Y / T) λ_{c_i}(Y, t_{k−1}) dY    (2)
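One timeslot of the mobility model above can be sketched as a simple Monte Carlo step. The function name and the uniform speed sampler are our own illustrative assumptions, not part of the model in [40]; any pdf ζ with maximum speed v could be plugged in.

```python
import math
import random

def mobility_step(positions, T, speed_sampler):
    """One mobility timeslot: each user independently draws a direction
    uniformly from [0, 2*pi) (density 1/(2*pi)) and a speed from the pdf
    zeta via `speed_sampler`, then moves speed * T along that direction.
    Consequently, every user ends inside the disk Omega of radius T*v
    around its previous position, matching the integration region Omega(X)."""
    moved = []
    for (x, y) in positions:
        theta = random.uniform(0.0, 2.0 * math.pi)
        speed = speed_sampler()
        moved.append((x + speed * T * math.cos(theta),
                      y + speed * T * math.sin(theta)))
    return moved

# Illustrative run: maximum speed v = 1.5, timeslot length T = 2.0,
# zeta taken (as an assumption) to be uniform on [0, v].
v_max = 1.5
users = [(0.0, 0.0), (3.0, -1.0)]
users = mobility_step(users, T=2.0,
                      speed_sampler=lambda: random.uniform(0.0, v_max))
```

Averaging many such runs over a density of users is the Monte Carlo counterpart of the expectation in Equations (1) and (2).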
as follows. We first calculate the chance that the user will finish in the current timeslot by dividing the length of the timeslot (τ) by the total time needed (Q^{up}_{c_i}). Because we want the opposite of that (the chance that the user does not finish), we subtract such value from 1; this gives us the rate of users that do not finish per timeslot, so we divide that by the length of the timeslot to obtain the value in users per second. It is noteworthy how this value is only applicable if a single timeslot is not sufficient to send a packet; otherwise, it is ignored. Finally, the formula for the arrival rate of packets at this queue for sending to cloudlet c_i at instant t_k is given by

φ_{c_i}(t_k) = Λ_1 U_{c_i}(t_k) + (1 − τ/Q^{up}_{c_i})/τ, if Q^{up}_{c_i} > τ;  Λ_1 U_{c_i}(t_k), otherwise.    (12)

From this value, the average wait time in the uplink queue for clients of cloudlet c_i at instant t_k follows immediately from queuing theory and is given by

ϖ_{c_i}(t_k) = φ_{c_i}(t_k) τ² / (2(1 − φ_{c_i}(t_k) τ))    (13)

Finally, to calculate the total time spent to send a packet in the uplink, we must first determine how many timeslots are necessary by using Q^{up}_{c_i} and τ. Because all timeslots involve a waiting time prior to obtaining the channel for the length of the timeslot, we multiply that latter number by the sum of the wait time and τ, which leads us to the formula below.

D^{up}_{c_i}(t_k) = ⌈Q^{up}_{c_i}/τ⌉ (ϖ_{c_i}(t_k) + τ)    (14)

For the downlink, we assume that each user is allocated a fraction of the total bandwidth through OFDM [46]. This allows us to concurrently transmit to all users of a single cloudlet without worrying about collision (because the frequency bands are different). We can also utilize this to implement fairness in the model by allocating wider bandwidths to users in less favorable locations (i.e., ones that receive a weaker RSS). To do so, we allocate bandwidth to each user that is proportional to the ratio of the inverse of the logarithm base 2 of the RSS it receives from its cloudlet when compared to the sum of the same value for all users of that cloudlet (this value is obtained through a double integral that involves the density function and the corresponding area). Because we use the inverse, users with higher RSS will get less bandwidth, and vice versa. Thus, assuming that X_0 ∈ A_{c_i} and B^{down} is the total downlink bandwidth (Hertz), the bandwidth allocated to a user in X_0 is

b^{down}_{X_0} = B^{down} (S^{c_i}_{X_0})^{−1} / ∫∫_{A_{c_i}} λ(X, 0) (S^{c_i}_X)^{−1} dX    (15)

Here, although transmissions to clients of the same cloudlet do not interfere with each other, there is still interference coming from the other cloudlets. This occurs because there is no coordination between cloudlets, so they do not allocate the same wavelengths to their users. Therefore, we calculate the interference at that previous user in X_0 associated to c_i by summing the RSS received from the other cloudlets.

I_{X_0} = Σ_{c_j ∈ C, c_j ≠ c_i} S^{c_j}_{X_0}    (16)

Finally, the average total time for transmission in the downlink is calculated in the same way as performed in (10), but using the average packet size for downlink (p^{down}), and the new values for bandwidth ((15)) and interference ((16)) instead.

D^{down}_{c_i} = [∫∫_{A_{c_i}} λ(X, 0) p^{down} / (b^{down}_X log_2(1 + S^{c_i}_X / (b^{down}_X N + I_X))) dX] / [∫∫_{A_{c_i}} λ(X, 0) dX]    (17)

These formulas allow us to calculate the average Transmission Delay at timeslot t_k for all users in the system. This is accomplished by utilizing the average for the users physically associated to each cloudlet, multiplying by the number of users physically associated to that cloudlet, and then dividing the sum of those numbers by the total amount of users for averaging.

T_{delay}(t_k) = Σ_{c_i ∈ C} U_{A_{c_i}}(t_k) Ṫ_{c_i}(t_k) / Σ_{c_i ∈ C} U_{A_{c_i}}(t_k)    (18)

3.5 Processing Delay

We assume that the access to the processors at each cloudlet follows an M/M/k queue [45], where k is the number of processors for each physical cloudlet, the individual processing time for the tasks comes from a Poisson process with average µ, and tasks arrive following yet another Poisson process. The rate for this latter process comes from the average task generation rate of a single user (Λ_1) multiplied by the total amount of VM servers hosted by cloudlet c_i at that timeslot (V_{c_i}(t_k)). The total arrival rate at timeslot t_k for cloudlet c_i is given by

Λ_{c_i}(t_k) = V_{c_i}(t_k) Λ_1    (19)

Queuing theory tells us that the average occupation rate for cloudlet c_i at timeslot t_k can then be derived as

ρ_{c_i}(t_k) = Λ_{c_i}(t_k) / (kµ)    (20)

From this, the chance of waiting in the processors' queue at cloudlet c_i at instant t_k also follows directly from queuing theory.

Ψ_{c_i}(t_k) = [(kρ_{c_i}(t_k))^k / k!] · [(1 − ρ_{c_i}(t_k)) Σ_{n=0}^{k−1} (kρ_{c_i}(t_k))^n / n! + (kρ_{c_i}(t_k))^k / k!]^{−1}    (21)
Fig. 2. Illustrative scenarios of how user mobility ((a) and (c)) and new users ((b) and (d)) can increase Service Delay and how cloudlet activation
((a) and (b)) and reconfiguration ((c) and (d)) can counteract this.
Algorithm 1 - NextViolation: estimate next time Service Delay will grow beyond threshold H
Input: configuration g of transmission power levels of cloudlets, current time t0
1: t̂k ← t0
2: while Sdelay(t̂k) < H do
3:   t̂k ← t̂k + 1
4: return t̂k

Algorithm 2 - PSO: algorithm for maximizing MEC scalability
Input: current time t0
1: for all m ∈ M do initialize qm as a random |C|-tuple
2: for all m ∈ M do initialize vm as a random |C|-tuple
3: while executed iterations < L do
4:   for all m ∈ M do
5:     t̂k ← NextViolation(qm, t0)
6:     if t̂k > NextViolation(hm, t0) then hm ← qm
7:     if t̂k > NextViolation(g, t0) then g ← qm
8:     ϕh ← randomInt(0, 1)
9:     ϕg ← randomInt(0, 1)
10:    vm ← ϑvm + ϕh %h (qm − hm) + ϕg %g (qm − g)
11:    qm ← qm + vm
12: return (g, NextViolation(g, t0))

threshold, we are increasing the number of users that can be served without a new reconfiguration/activation. Note that Service Delay is calculated in line 2 of Algorithm 1 through Equation 26, with input g being utilized in this calculation. We chose transmission power levels as the configuration because both physical and virtual associations are reset in the Configuration Phase to follow RSS levels, which are ultimately decided by the cloudlets' transmission power levels. Input g is a tuple with |C| values. However, the problem of finding the input that maximizes the output of Algorithm 1 has |C| decision variables, making it too complex to be solved by conventional methods (e.g., mathematically or through Linear Programming) for high amounts of cloudlets. Thus, because an optimal solution is unreachable in a feasible time, we opted to utilize the Artificial Intelligence algorithm PSO to find a near-optimal configuration.

PSO [8] is an algorithm where particles travel through the search space of all possible solutions looking to maximize the value of a chosen fitness function; each particle represents a possible solution to the problem. After each iteration, the particles update their positions (and therefore their corresponding solutions) based on their velocities, which are themselves updated at the end of iterations based on three components: the previous velocity, the best solution found so far locally (by that particle), and the best solu-
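Algorithm 2's loop can be sketched in Python as below. This is an illustrative stand-in, not the authors' implementation: the toy fitness function replaces NextViolation, the initialization bounds are arbitrary, and the velocity update uses the conventional PSO signs (personal/global best minus current position).

```python
import random

def pso_maximize(fitness, dim, particles=20, iterations=50,
                 inertia=0.7, acc_local=1.5, acc_global=1.5):
    """Particle Swarm Optimization over `dim`-tuples (one value per
    cloudlet), maximizing `fitness` (a stand-in for NextViolation:
    the later the next threshold violation, the better)."""
    swarm = [[random.uniform(0.0, 1.0) for _ in range(dim)]
             for _ in range(particles)]
    velocity = [[0.0] * dim for _ in range(particles)]
    best_local = [p[:] for p in swarm]            # h_m in Algorithm 2
    best_global = max(swarm, key=fitness)[:]      # g in Algorithm 2
    for _ in range(iterations):
        for m in range(particles):
            if fitness(swarm[m]) > fitness(best_local[m]):
                best_local[m] = swarm[m][:]
            if fitness(swarm[m]) > fitness(best_global):
                best_global = swarm[m][:]
            for d in range(dim):
                r_h, r_g = random.random(), random.random()
                velocity[m][d] = (inertia * velocity[m][d]
                                  + acc_local * r_h * (best_local[m][d] - swarm[m][d])
                                  + acc_global * r_g * (best_global[d] - swarm[m][d]))
                swarm[m][d] += velocity[m][d]
    return best_global

# Toy fitness: the closer the power tuple is to (0.5, ..., 0.5),
# the later the (hypothetical) next violation.
toy_fitness = lambda q: -sum((x - 0.5) ** 2 for x in q)
config = pso_maximize(toy_fitness, dim=5)
```

The global best is only ever replaced by a strictly better solution, so the returned configuration is at least as good as the best random initial particle.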
Fig. 3. Maximum number of users served by our proposal with five cloudlets under different values for PSO local inertia (ϑ).

Fig. 4. Maximum number of users served by our proposal with five cloudlets under different values for PSO local acceleration (%h).
one that offered the highest RSS at the previous Configuration Phase). With no migration or reconfiguration being executed, this is the simplest method and it is used as a benchmark (what can be achieved by doing nothing). The second method is the Minimum Processing method [15], where VM servers are migrated at all Configuration Phases and every time a user joins the service. This guarantees that the computation workload is perfectly balanced. In addition, it minimizes Processing Delay by using the processor resources of all cloudlets as efficiently as possible, but it also ignores Backhaul Delay (migrations are done with no regard to the cost of transmission between cloudlets) and Transmission Delay (cloudlets may perfectly share virtual associations, but no control is done on physical associations). Finally, we have the Minimum Flow method [17], where at each Configuration Phase, the cloudlets perform just enough migrations between themselves so that the decrease in Processing Delay is enough to keep the average Service Delay under the determined threshold. Because the minimum number of necessary migrations is performed, this minimizes the load in the backhaul link, but it still does nothing to control physical association and Transmission Delay. In addition, the Minimum Flow method only activates a new cloudlet if the computing workload is already perfectly balanced but latency is still too high. The performance evaluation of these three conventional methods and our proposal is done in scenarios that begin with one active cloudlet and allow up to ten active cloudlets. Additionally, we analyze the results with three different types of applications: A (which has no particular leaning towards either transmission or processing), B (which needs more processing in the form of a higher time needed to execute the tasks) and C (which has bigger packets to be sent both in downlink and uplink, therefore requiring more attention to transmission).

Fig. 7. Probability that latency is below the threshold given the number of users using Application B (heavier burden on processing).

Fig. 6 contains the performance of all methods in terms of the probability that the latency is under the threshold with 10 or fewer cloudlets activated, while utilizing Application A as a service. We can see here how the Minimum Processing method, despite minimizing Processing Delay, behaves almost as badly as the No Migration method because of the high load it introduces to the backhaul link with its many migrations. Meanwhile, the Minimum Flow method is capable of outperforming both due to its consideration of both Backhaul Delay and Processing Delay; however, the proposed method is the best at staying below the latency limit because it is the only method that considers communication and controls physical association with Transmission Power Control. Because our method focuses on using as few cloudlets as possible, it aims at delaying reconfigurations and cloudlet activations, which ends up increasing the number of users served per cloudlet.

As seen in Fig. 7, because of the higher burden on computation from Application B, the Minimum Processing
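The Minimum Flow policy described above can be illustrated with a short sketch. Everything here is a hypothetical stand-in, not the procedure of [17]: the caller-supplied `avg_delay` estimator replaces the paper's Service Delay model, and loads are reduced to plain VM counts.

```python
def minimum_flow_migrations(vm_counts, avg_delay, threshold):
    """Minimum-Flow-style rebalancing sketch: migrate one VM at a time
    from the most to the least loaded cloudlet, stopping as soon as the
    estimated average Service Delay drops below the threshold, so only
    the minimum number of migrations loads the backhaul link."""
    migrations = 0
    while avg_delay(vm_counts) > threshold:
        src = max(range(len(vm_counts)), key=lambda i: vm_counts[i])
        dst = min(range(len(vm_counts)), key=lambda i: vm_counts[i])
        if vm_counts[src] - vm_counts[dst] <= 1:
            break  # already balanced; only a new cloudlet could help
        vm_counts[src] -= 1
        vm_counts[dst] += 1
        migrations += 1
    return migrations

# Example with a toy delay estimator (delay = load of the busiest cloudlet):
counts = [6, 0, 0]
done = minimum_flow_migrations(counts, lambda c: max(c), threshold=3)
# done == 3, counts == [3, 2, 1]
```

The early `break` mirrors the activation rule above: a new cloudlet is only warranted once the workload is already balanced yet latency remains too high.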
Fig. 8. Probability that latency is below the threshold given the number of users using Application C (heavier burden on transmission).

Fig. 9. Maximum number of users for all methods under all applications.
method has better relative performance when compared to the Minimum Flow method because the former minimizes Processing Delay. However, it is still worse than the latter because it ignores Backhaul Delay. Analogously, both methods are still outperformed by our proposal because we are the only one to consider Transmission Delay, which affords us greater control over latency and lets us serve more users, even in scenarios where computation is more relevant than communication. In fact, Transmission Delay is very important even in this kind of scenario, which is why the proposal is capable of serving many more users, as evidenced by how early, in comparison, the curves for the conventional methods end.

Finally, we measure the performance under Application C, which requires the users and the cloudlets to send bigger packets, which demands more transmission time. This higher importance of the Transmission Delay and Backhaul Delay means the Minimum Processing method has its worst performance yet when compared to the Minimum Flow method, as evidenced by Fig. 8. This can be explained by the

maximizing scalability.

6 CONCLUSION AND FUTURE WORKS

MEC is a method for delivering powerful cloud resources to users on the edge of the network. For this kind of service, the delay must be kept under a threshold to maintain a high-quality experience and transparency. In high-workload scenarios, this can be achieved by activating new cloudlets or by reconfiguring and optimizing the already active ones, where the latter is preferred due to its economy and scalability.

In this paper, we presented a method for reconfiguring cloudlets with the goal of maximum scalability. Our proposal is the first to consider all aspects of the Service Delay in MEC (namely Processing Delay, Transmission Delay and Backhaul Delay) in a multi-user multi-cloudlet scenario. We have tuned PSO for our proposal through a hyper-parameter study that proved that, in the MEC scalability problem, exploitation is better than exploration on the search space of possible solutions when looking for the optimal configu-
ration. Additionally, we compared our proposal with other
high burden that the former introduces in the link between
methods that consider only a fraction of the elements of
cloudlets, whose effect is accentuated by the bigger packets
Service Delay and proved how we can serve more users
being sent in such link. Nonetheless, the proposal is still
given a latency limit by considering all elements simulta-
the one to offer the best chances of having the threshold
neously. Furthermore, the same high-quality results can be
respected because it is the only one to truly consider Trans-
achieved under different application profiles, proving how
mission Delay.
our proposal is capable of providing a scalable service in
The advantages brought by our proposal are more scenarios with different types of requirements.
clearly seen in Fig. 9, where it can be noted how we are For the future, we plan to study the cloudlet deploy-
able to serve up to three times more users when compared ment problem as well as the possible cooperation between
with the conventional methods. We can also see here how cloudlets and backbone cloud servers and the use of wire-
the difference between Minimum Processing and Minimum less communication in the backhaul link between cloudlets.
Flow is the smallest when computation is more relevant
(Application B) and the biggest when it is less relevant
(Application C). In addition, it is obvious how all methods ACKNOWLEDGMENTS
behave worse when communication is more significant, The research results present in this paper have been
which points to higher difficulties overall with dealing achieved by Research and Development on Intellectual ICT
with Transmission Delay (even for our proposal). This only System for Disaster Response and Recovery, the Commis-
strenghtens our point that considering Transmission Delay sioned Research of the National Institute of Information and
is essential both when lowering Service Delay and when Communications Technology (NICT), Japan.
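To make the role of each delay component concrete, the Service Delay decomposition discussed in this paper (Processing Delay plus Transmission Delay plus Backhaul Delay) and the latency-threshold metric reported in Figs. 6 and 8 can be sketched in a few lines of Python. This is an illustrative toy model, not the paper's actual formulation: the M/M/1 processing term, the rate of one task per second per user, the uniform transmission and backhaul distributions, and the 0.5 s threshold are all assumed values.

```python
import random

THRESHOLD = 0.5  # latency limit in seconds (assumed value)

def service_delay(proc_rate, num_users, tx_time, backhaul_time):
    """Total Service Delay = Processing + Transmission + Backhaul.

    Processing is modeled here as an M/M/1 sojourn time so that it
    grows with load, mirroring how more users per cloudlet raise delay.
    """
    arrival = num_users * 1.0          # tasks/s, assuming 1 task/s per user
    if arrival >= proc_rate:
        return float("inf")            # cloudlet saturated
    processing = 1.0 / (proc_rate - arrival)
    return processing + tx_time + backhaul_time

def prob_below_threshold(num_users, trials=10000):
    """Monte Carlo estimate of P(Service Delay < THRESHOLD)."""
    hits = 0
    for _ in range(trials):
        # Randomized per-task conditions (illustrative distributions).
        tx = random.uniform(0.01, 0.1)   # Transmission Delay
        bh = random.uniform(0.0, 0.05)   # Backhaul Delay
        if service_delay(proc_rate=500.0, num_users=num_users,
                         tx_time=tx, backhaul_time=bh) < THRESHOLD:
            hits += 1
    return hits / trials
```

Even in this crude sketch, the probability of meeting the threshold collapses as the number of users approaches the cloudlet's processing capacity, which is the qualitative behavior the curves in Figs. 6 and 8 exhibit.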
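As a rough illustration of the exploitation-versus-exploration trade-off mentioned in the conclusion, the sketch below implements a minimal PSO with a low inertia weight w and a social coefficient c2 larger than the cognitive coefficient c1, which biases the swarm toward exploiting the best-known positions rather than exploring. The hyper-parameter values here are assumptions for illustration, not the values from the paper's tuning study.

```python
import random

def pso(fitness, dim, iters=200, swarm=20, w=0.4, c1=1.0, c2=2.0):
    """Minimal PSO minimizer; low w and c2 > c1 favor exploitation."""
    pos = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]                 # per-particle best positions
    pbest_val = [fitness(p) for p in pos]
    g = min(range(swarm), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm-wide best
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity update: inertia + cognitive pull + social pull.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = fitness(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In the paper's setting, the fitness function would score a candidate cloudlet configuration by its resulting Service Delay; here any scalar objective, such as the sphere function, demonstrates the mechanics.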
Hiroki Nishiyama (SM'13) received the M.S. and Ph.D. degrees in information science from Tohoku University, Japan, in 2007 and 2008, respectively. He is an Associate Professor with the Graduate School of Information Sciences, Tohoku University. He has published over 160 peer-reviewed papers, including many high-quality publications in prestigious IEEE journals and conferences. His research interests cover a wide range of areas, including satellite communications, unmanned aircraft system networks, wireless and mobile networks, ad hoc and sensor networks, green networking, and network security. He was a recipient of Best Paper Awards from many international conferences, including IEEE's flagship events, such as the IEEE Global Communications Conference (GLOBECOM) in 2014, 2013, and 2010, the IEEE International Conference on Communications (ICC) in 2016, and the IEEE Wireless Communications and Networking Conference (WCNC) in 2014 and 2012, as well as the Special Award of the 29th Advanced Technology Award for Creativity in 2015, the IEEE ComSoc Asia-Pacific Board Outstanding Young Researcher Award 2013, the IEICE Communications Society Academic Encouragement Award 2011, and the 2009 FUNAI Foundation Research Incentive Award for Information Technology. He currently serves as an Associate Editor for the Springer Journal of Peer-to-Peer Networking and Applications, and as the Secretary of the IEEE ComSoc Sendai Chapter. One of his outstanding achievements is Relay-by-Smartphone, which makes it possible to share information among many people through device-to-device direct communication. He is a Senior Member of the IEICE.