A Novel Strategy To Achieve Bandwidth
This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/ACCESS.2020.3003263, IEEE Access
ABSTRACT
Recently, the Internet of Things (IoT) has become an attractive area of research for developing smart home and smart city environments. IoT sensors generate data streams continuously, and the majority of IoT-based applications are highly delay sensitive. Early cloud-based IoT services suffer from high delay and inefficient resource utilization. Fog computing addresses these problems by bringing cloud-like services closer to the edge in a small-scale, distributed manner. This work considers an integrated fog-cloud environment to minimize resource cost and reduce delay, supporting real-time applications at a lower operational cost. We first present a cooperative three-layer fog-cloud computing environment and propose a novel optimization model for it. The model has a composite objective function that minimizes bandwidth cost and provides load balancing. We balance load at both the links' bandwidth level and the servers' CPU processing capacity level. Simulation results show that our framework can minimize bandwidth cost and balance load by utilizing the cooperative environment effectively. We assign a weight factor to each objective of the composite objective function to set its level of priority. When minimizing bandwidth cost receives higher priority, the demand generated by the traffic-generating sensors is first satisfied by the regional capacity of layer-1 fog. If the demand of a region exceeds the capacity of that region, the remaining demand is served by other regions' layer-1 fog, then by layer-2 fog, and finally by the cloud. However, when load balancing is the priority, the demand is distributed among these resources to reduce delay. Link-level load balancing can reduce the queueing delay of links, while server-level load balancing can decrease the processing delay of servers in overloaded situations. We further analyze how the unit bandwidth cost, the average and maximum link utilization, the servers' resource utilization, and the average number of servers used vary with different levels of priority on different objectives. As a result, our optimization formulation allows trade-off analysis in the cooperative three-layer fog-cloud computing environment.
INDEX TERMS Fog computing, IoT, optimal resource management, load balancing, task offloading
This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/.
provide quality and prompt services in different areas; health care is one of them. This sector has benefited greatly from the advent of fog computing [14], [15]. However, transmission delay and processing delay have a significant impact on data transmission between user equipment and fog servers [16]. Different optimization models are proposed in [8], [16] to overcome these constraints of processing and transmission delay. A hybrid task offloading scheme combining cloud and fog is proposed in [17] to reduce the energy consumption of communication and computation while treating delay as a constraint.

A fog computing environment comprises devices of different characteristics. To maintain an uninterrupted connection among these devices, a proper integration and management scheme is necessary [25]. Fogernetes, a fog computing platform that enables management and deployment of fog applications, is presented in [26]. A simple and general model for fog computing infrastructures to continuously maintain the required QoS in multicomponent IoT applications is proposed in [27]. In particular, to connect a large number of heterogeneous nodes, the adoption of SDN and NFV techniques is effective [28]. In [29], the authors critically reviewed SDN- and NFV-based fog computing solutions to combat the main challenges of IoT.

A Hill Climbing load balancing algorithm for fog computing is proposed in [5]. However, the scope of that work is limited to the smart grid, where load balancing is done only at the VM/server level. Different approaches for server-level load balancing among fog servers in a fog computing environment are proposed in [18], [19]. However, several fog computing applications, such as augmented reality, surveillance, and smart cities, usually place heavy demands on both bandwidth and servers' resources. Thus, an optimization framework dealing jointly with link-level and server-level load balancing is required. Multilevel load balancing for fog computing is proposed in [20]. The problem of uneven load distribution under static load balancing is overcome there using historical resource selection. However, due to the transition of loads from one micro data center to another and to the cloud, link-level load balancing still needs to be analyzed.

A cooperative fog computing system capable of offloading computational tasks in a fair environment is introduced in [21]. In that work, the authors proposed a joint optimization of QoE (average response time) and energy (average energy consumption) in an integrated fog computing platform. A fog computing architecture, along with a framework to improve the QoS of IoT applications, is proposed in [30]. Their system is designed to support cooperation among fog nodes in a given location, allowing data to be processed in a shared mode while satisfying QoS and serving the largest possible number of service requests. However, that work lacks a feasibility study and simulation evidence.

A framework to optimize energy consumption by improving battery usage and reducing delay in a fog computing environment is shown in [22]. This framework is implemented by incorporating optimization libraries within the Robot Operating System (ROS) and is deployed on robotic sensors. A novel server-level load balancing strategy for the effective usage of server resources and the reduction of delay using SDN in a cooperative Internet of Vehicles (IoV) network is proposed in [6] and [7]. In [7], the fog concept is used in cluster computing with local and global load balancing performed by local and global (SDN) load balancers, respectively. A framework is developed in [23] to analyze the trade-off between power consumption and transmission delay in a fog-cloud computing system. That work addressed workload distribution with the overall power consumption as the objective function. Our work differs in two ways: we consider a three-layer fog-cloud framework, and we use a composite objective function that optimizes the trade-off between bandwidth cost in the edge-to-fog and fog-to-cloud components as well as network-level and server-level load balancing.

In [24], a dynamic load balancing mechanism based on graph repartitioning is proposed. Load balancing for new nodes joining the system and existing nodes leaving it is also studied. In [31], a load balancing scheme that jointly balances the load at both the network and the server level is presented. An optimization model minimizing a composite objective function, consisting of bandwidth and energy cost, is formulated and evaluated in [32]. However, the works of [31] and [32] address only a cloud computing environment and do not consider a joint fog-cloud computing architecture.

All the works discussed thus far considered a number of issues related to the cloud and fog computing environment. However, to our knowledge, no work has considered the joint optimization of bandwidth cost and load balancing, at both the network and the server level, in a cooperative three-layer fog-cloud computing environment. Secondly, we study the fog-cloud trade-off by varying the priorities of the different objectives. This is the key contribution of our work beyond the state of the art; our work therefore fills a significant gap in this area of research.

III. SYSTEM ASSUMPTION AND PROBLEM FORMULATION
In our framework, fog servers are positioned as intermediate compute nodes. The fog layer consisting of fog servers is independent, without dependency on specific devices. In emergency situations such as natural disasters or device failures, or due to the spatial and temporal diversity of demand, additional compute and network resources might be required. Thus, a cooperative fog computing scheme should be designed in which these additional resources can be borrowed from nearby regions of the same fog layer (layer-1 in the fog topology, as shown in Fig. 1), if idle resources are available. These resources can also be taken from the more powerful layer (layer-2), or even from the cloud. Therefore, we consider task offloading from fog to cloud, or from edge to core, on demand.

Now, a regional gateway is itself a fog device, cluster point,
| Reference | Link-level load balancing | Server-level load balancing | Bandwidth cost minimization | Delay/latency minimization | Energy consumption minimization | Number of fog layers | Cooperative fog computing | Is SDN based? |
|---|---|---|---|---|---|---|---|---|
| [5] Zahid et al. | No | Yes | No | Yes | No | One | No | Yes |
| [6] He et al. | No | Yes | No | Yes | No | One | Yes | Yes |
| [7] Kadhim et al. | No | Yes | No | Yes | No | One | Yes | Yes |
| [16] Chen et al. | No | No | No | Yes | No | One | No | No |
| [17] Meng et al. | No | No | No | No | Yes | One | Yes | No |
| [18] Khattak et al. | No | Yes | No | Yes | No | One | Yes | No |
| [19] Xu et al. | No | Yes | No | No | No | Two | Yes | No |
| [20] Mostafa et al. | No | Yes | No | Yes | No | One | Yes | No |
| [21] Dong et al. | No | No | No | Yes | Yes | One | Yes | No |
| [22] Mushunuri et al. | No | No | No | Yes | Yes | One | No | No |
| [23] Deng et al. | No | No | No | Yes | Yes | One | No | No |
| [24] Ningning et al. | Yes | Yes | No | No | No | One | Yes | No |
| Our proposed | Yes | Yes | Yes | No | No | Two | Yes | Yes |
and OpenFlow protocol supported switch. Again, the LAN ports of the gateway may be connected with independent servers or with fog devices (other LAN switches). These fog devices are other candidates for becoming fog servers. Fog devices can send their overall conditions, including current load, remaining capacity, and other required information, to the SDN controller over OpenFlow communication. Using this information, the SDN controller can formulate data forwarding rules, and the flow tables of the cluster points (CPs) are populated accordingly. Data flow and OpenFlow communication among fog devices, the cloud, and the SDN controller are shown in Fig. 1 for better understanding. We assume that the total servers' resource (computational capacity) demand of each CP is satisfied primarily in its own region, then by the server nodes of the other fog layer-1 regions, then by fog layer-2 server nodes; finally, any higher computational requirement is satisfied by the cloud servers. Hence, a path with sufficient bandwidth needs to be established from a CP to a server if that server is used to serve any portion of the requests generated from that CP.

In this work, we consider that the data sensed or collected by the sensors attached to the IoT devices need bandwidth and servers' resources to be processed. We denote the sensed data from each sensor node that require further processing to take an automated decision as a request. Thus, each request is a 2-tuple ⟨h, r⟩, where h is the bandwidth demand and r is the servers' resource demand. To formulate the mathematical model, we consider the aggregated requests generated from all sensor nodes within a region. The CP is the central access point of a region, connected with all the sensor nodes within that region. Thus, we consider the CP as the source of this aggregated request. This aggregated request also consists of two demands, i.e., H_i as the aggregated bandwidth demand and R_i as the aggregated servers' resource demand within a region, transferred through the CP to be processed. We do not consider prioritizing the aggregated requests generated from any one CP within a region over the aggregated requests of the CPs of other regions in this work.

We formulate an MILP optimization model to mathematically represent the system assumption of this work. Then, we consider that an SDN controller is responsible for solving the optimization model and allocating resources according to the solution, as in [6]. Here, the SDN controller is placed in fog layer-2 and has OpenFlow communication with the network devices and the cloud. The aggregated demand profile is sent from each region of fog layer-1 to the SDN controller, where the model is solved. Since the size of a demand profile is negligible compared to the size of the data to be transferred, we ignore the cost of transferring demand profiles in this work. Then, control signals are generated by the controller and sent over the control plane to allocate resources to the responsible devices based on the solution. The notation used to formulate the model is explained in Table 2.

A. CONSTRAINTS
In our formulation, H_i is the total amount of bandwidth demand generated from all sensor nodes within the region of cluster point i. Thus, H_i can be represented mathematically by:

H_i = \sum_{n \in N_i} h_{ni}, \quad i \in I \qquad (1)

Similarly, the total amount of resource demand generated from all sensor nodes under cluster point i is represented by R_i:

R_i = \sum_{n \in N_i} r_{ni}, \quad i \in I \qquad (2)
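As a small illustration of equations (1) and (2), the per-sensor 2-tuples ⟨h, r⟩ are simply summed per region; the CP names, sensor counts, and demand values below are our own toy numbers, not values from the paper:

```python
# Aggregate per-sensor requests <h, r> into regional demands H_i, R_i
# following Eq. (1) and (2).  The data is an illustrative assumption.
requests = {
    "cp1": [(5, 2), (3, 1), (4, 2)],   # (bandwidth h, resource r) per sensor
    "cp2": [(2, 1), (6, 3)],
}

def aggregate(reqs):
    """Return {cp: (H_i, R_i)}: the summed bandwidth and resource demand."""
    return {i: (sum(h for h, _ in t), sum(r for _, r in t))
            for i, t in reqs.items()}

print(aggregate(requests))  # {'cp1': (12, 5), 'cp2': (8, 4)}
```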
FIGURE 1. Cooperative three-layer fog-cloud architecture: IoT layer (regions 1-4) with wireless connections, access network (ONU, splitter, OLT, Ethernet switch), fog layer-1 gateways/CPs with servers and virtual machines, and the cloud layer (router, servers), showing data and OpenFlow communication paths.
Initially, to transfer the data generated from the CPs to each server of fog layer-1 (f^1), fog layer-2 (f^2), or the cloud (c) for necessary processing, sufficient bandwidth needs to be allocated from CP i to the responsible server or servers. Now, according to our assumption of cooperative fog computing, the bandwidth demand generated from CP i can be satisfied by the available bandwidth of its own region, another region's layer-1 fog, the layer-2 fog, or the cloud. Therefore, the total amount of bandwidth allocated can be considered as the sum of the bandwidth allocated to establish paths from the CPs to the servers of all these layers.
TABLE 2. Notations used in the formulation

Constants:
| Symbol | Definition |
|---|---|
| f^1, F^1 | Index, set of servers in fog layer-1 |
| f^2, F^2 | Index, set of servers in fog layer-2 |
| c, C | Index, set of servers in the cloud |
| s, S | Index, set of all available servers (the combination of the servers in fog layer-1, fog layer-2, and the cloud) |
| i, I | Index, set of cluster points (CPs) |
| n, N_i | Index, set of sensor nodes under cluster point i |
| l, L | Index, set of links |
| P_{ip}^s | Set of paths from CP i to server s |
| h_{ni} | Bandwidth demand generated by sensor node n at CP i |
| r_{ni} | Resource demand generated by sensor node n at CP i |
| H_i | Aggregated bandwidth demand generated by all sensor nodes of CP i |
| R_i | Aggregated resource demand generated by all sensor nodes of CP i |
| ε_{sl} | Per-unit cost of bandwidth consumed on link l for requests served by server s |
| ρ_l | Available capacity of link l |
| a_s | Capacity of server s |
| δ_{ipl}^s | Link-path indicator: 1 if path p, set up from CP i to server s to satisfy the demand of CP i, uses link l; 0 otherwise |
| α, β, γ | Weight parameters of the three optimization objectives |

Variables:
| Symbol | Definition |
|---|---|
| y_i^{f^1} | Bandwidth allocation from CP i to fog layer-1 server f^1 |
| y_i^{f^2} | Bandwidth allocation from CP i to fog layer-2 server f^2 |
| y_i^c | Bandwidth allocation from CP i to cloud server c |
| y_i^s | Bandwidth allocation for traffic from CP i to server s |
| x_{ip}^s | Bandwidth allocation on path p, if traffic from CP i to server s uses path p |
| z_{il}^s | Total bandwidth demand on link l for requests generated from CP i and served by server s |
| u | Maximum utilization over all links |
| g_i^{f^1} | Server resource allocation from fog layer-1 server f^1 for CP i |
| g_i^{f^2} | Server resource allocation from fog layer-2 server f^2 for CP i |
| g_i^c | Server resource allocation from cloud server c for CP i |
| g_i^s | Server resource allocation from server s for CP i |
| k | Maximum utilization over all servers |

These constraints are represented by (3) and (4):

\sum_{s \in S} y_i^s = \sum_{f^1 \in F^1} y_i^{f^1} + \sum_{f^2 \in F^2} y_i^{f^2} + \sum_{c \in C} y_i^c, \quad i \in I \qquad (3)

\sum_{s \in S} y_i^s = H_i, \quad i \in I \qquad (4)

The bandwidth allocated to a particular path from CP i to server s is given by the path flow variables x_{ip}^s:

\sum_{p \in P_{ip}^s} x_{ip}^s = y_i^s, \quad i \in I, s \in S \qquad (5)

If any bandwidth is allocated on a particular path p to satisfy a portion of the bandwidth demand H_i of CP i, then all the links associated with that path have to carry that portion of H_i. Therefore, we can determine the flow on each link l over all paths from i to s:

\sum_{p \in P_{ip}^s} \delta_{ipl}^s x_{ip}^s = z_{il}^s, \quad l \in L, i \in I, s \in S \qquad (6)

The total amount of bandwidth required on link l must not exceed the capacity of that link times the maximum link utilization variable. This constraint is required to ensure link-level load balancing:

\sum_{i \in I} \sum_{s \in S} z_{il}^s \le \rho_l u, \quad l \in L \qquad (7)

Note that the maximum utilization of any link cannot exceed 1 at any point:

0 \le u \le 1 \qquad (8)

Similar to (3), the resource demand generated from each region i can be satisfied by the available servers' resources (CPU processing capacity) within its own region and the other regions of layer-1 fog f^1, layer-2 fog f^2, or the cloud c. Therefore, the total amount of servers' resources available is the sum of the servers' resources of fog layer-1, fog layer-2, and the cloud:

\sum_{s \in S} g_i^s = \sum_{f^1 \in F^1} g_i^{f^1} + \sum_{f^2 \in F^2} g_i^{f^2} + \sum_{c \in C} g_i^c, \quad i \in I \qquad (9)

Now, the total resource demand can be split over all available servers:

\sum_{s \in S} g_i^s = R_i, \quad i \in I \qquad (10)

Constraint (11) ensures a proportional allocation between servers' resources and bandwidth: if more computational capacity of server s is used to satisfy the computational demand generated from CP i, then more bandwidth is allocated from that CP i to that server s.

H_i g_i^s = R_i y_i^s, \quad i \in I, s \in S \qquad (11)

The servers' resource requirement from all CPs i ∈ I on server s must not exceed the available resources of that server times the maximum server utilization variable. This constraint is used to balance the load among the different servers:

\sum_{i \in I} g_i^s \le a_s k, \quad s \in S \qquad (12)

Since the maximum utilization of server s cannot exceed 1, we have

0 \le k \le 1 \qquad (13)
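As an illustrative sketch (not the paper's AMPL model; every demand, capacity, and allocation below is an assumption of ours), the demand-splitting and utilization constraints can be checked on a toy single-CP allocation in a few lines:

```python
# Sketch of constraints (4), (10), (11), (7)-(8), and (12)-(13) for one CP i.
# All numbers are illustrative assumptions, not the paper's instances.
H_i, R_i = 12.0, 5.0                    # aggregated bandwidth/resource demand
y = {"f1": 8.0, "f2": 4.0, "c": 0.0}    # bandwidth allocation y_i^s
g = {"f1": 10/3, "f2": 5/3, "c": 0.0}   # resource allocation g_i^s
rho = {"l1": 10.0, "l2": 10.0}          # link capacities rho_l
z = {"l1": 8.0, "l2": 4.0}              # per-link flow z_il^s (toy paths)
a = {"f1": 5.0, "f2": 5.0, "c": 20.0}   # server capacities a_s

assert abs(sum(y.values()) - H_i) < 1e-9   # Eq. (4): bandwidth demand met
assert abs(sum(g.values()) - R_i) < 1e-9   # Eq. (10): resource demand met
for s in y:                                # Eq. (11): proportional allocation
    assert abs(H_i * g[s] - R_i * y[s]) < 1e-6
u = max(z[l] / rho[l] for l in rho)        # Eqs. (7)-(8): max link utilization
k = max(g[s] / a[s] for s in a)            # Eqs. (12)-(13): max server utilization
assert 0 <= u <= 1 and 0 <= k <= 1
print(u, k)
```

In the full MILP, u and k are decision variables pushed down by the objective; here they are simply evaluated for a fixed feasible allocation.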
B. OBJECTIVE FUNCTION
There are three goals in this work: (i) to minimize the bandwidth cost of routing, (ii) to minimize the maximum utilization of the network links, and (iii) to minimize the maximum server resource utilization. These goals are expressed in the composite objective function:

\min \; \alpha \sum_{s \in S} \sum_{i \in I} \sum_{l \in L} \varepsilon_{sl} z_{il}^s + \beta u + \gamma k \qquad (14)

Here, by using the three weight factors α, β, γ and varying their values for each of the three parts of the objective function, we can change the priority associated with each part. Secondly, we can change the unit cost ε_{sl} to properly reflect the bandwidth cost (and thus, indirectly, the delay cost) depending on the location of the link and server in the three-layer architecture. In summary, the goal of the optimization problem is to minimize (14) subject to constraints (3) to (13).

IV. SIMULATION STUDY SETUP AND RESULT ANALYSIS
To analyze this composite optimization problem, we used the fog computing architecture with layer-1 fog, layer-2 fog, and cloud nodes shown in Fig. 1. We consider that a portion of the cloud's computational resources is used to support this cooperative fog computing network. To emulate this setup, we assume that the maximum number of cloud servers used by the service provider of this cooperative fog computing network is fixed (due to a monetary constraint). However, since cloud services can be used on demand as a utility, the number of servers actually used varies with the requirement in order to reduce cost, but never exceeds this maximum. For simplicity, we also consider the capacity of all available servers in each layer to be the same. Thus, we vary the available computational capacity of each layer by varying the number of available servers.

The variation in the value of ε_{sl} is used to emulate the variable bandwidth cost of reaching servers in different layers. The number of links/hops required emulates the distance to reach the servers of a layer. The combination of ε_{sl} and the number of links […] by the fog service provider. In practice, the capacity of the cloud is very high compared to the fog layer.

TABLE 3. Topology Related Parameters

We used AMPL/CPLEX (v12.6.0.0) to solve the MILP program that minimizes (14) subject to constraints (3) to (13). For the cases we investigated, CPLEX required around 20 ms on average to solve the MILP model each time. We also used shell scripting for result analysis. We conducted the experiments on an HP ProBook 450 G4 notebook PC running Linux, with an Intel Core i5 2.5 GHz processor and 8 GB RAM.

We conducted several case studies. These help investigate how the amount of network and resource support provided by the links and servers of the different fog layers and the cloud varies with the change of priority between bandwidth cost minimization and load balancing. We also studied this problem considering both homogeneous and heterogeneous demand generated from the cluster points. All the cases studied in this paper are summarized in Table 4. In the homogeneous condition, the amount of demand generated from each CP is the same, while in the heterogeneous condition it can differ. The bandwidth and resource demand sets used for the homogeneous and heterogeneous conditions are presented in Tables 5 and 6, respectively.

A. HOMOGENEOUS DEMAND SETS

1) Case-1 (Changes in Bandwidth Cost)
Fig. 2 shows the changes of bandwidth cost with α, i.e., the priority on minimizing the bandwidth cost. The figure is drawn in logarithmic scale, with the amount of demand generated from each CP being the same (homogeneous demand). One of the key findings in this case is that when lower priority is given to bandwidth cost minimization (small values of α), the bandwidth demand is distributed among all the possible destinations, i.e., layer-1 fog (own or neighboring regions), layer-2 fog, and cloud nodes. This incurs the highest bandwidth […]
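Before the case studies, a minimal sketch may help make the role of the weights in (14) concrete: the same two candidate allocations are ranked differently depending on (α, β, γ). The candidates and their costs below are invented for illustration; the paper solves the full MILP with CPLEX rather than comparing a fixed candidate set.

```python
# Composite objective of Eq. (14): alpha * bandwidth_cost + beta * u + gamma * k.
# Two toy candidate solutions: "local" keeps traffic in-region (cheap, but one
# hot link/server); "spread" distributes it (costly, but balanced).
candidates = {
    "local":  {"bw_cost": 10.0, "u": 0.9, "k": 0.9},
    "spread": {"bw_cost": 30.0, "u": 0.3, "k": 0.3},
}

def objective(c, alpha, beta, gamma):
    return alpha * c["bw_cost"] + beta * c["u"] + gamma * c["k"]

def best(alpha, beta, gamma):
    """Name of the candidate with the smaller composite objective."""
    return min(candidates,
               key=lambda n: objective(candidates[n], alpha, beta, gamma))

print(best(1.0, 0.1, 0.1))     # cost-dominated weighting -> "local"
print(best(0.01, 10.0, 10.0))  # load-balancing weighting -> "spread"
```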
TABLE 4. Summary of the studied cases

Homogeneous:
- Case-1: How bandwidth cost changes as the priority on the bandwidth cost minimization weight factor (α) increases.
- Case-2: How maximum and average link utilization change as the priority on link-level load balancing (β) increases.
- Case-3: How maximum server resource utilization and the average number of servers used change as the priority on server-level load balancing (γ) increases.
- Case-4: Effect of bandwidth demand on bandwidth cost and average link utilization.
- Case-5: Effect of resource demand on average server resource utilization.

Heterogeneous:
- Case-1: How bandwidth cost changes as the priority on the bandwidth cost minimization weight factor (α) increases.
- Case-2: How maximum and average link utilization change as the priority on link-level load balancing (β) increases.
- Case-3: How maximum server resource utilization and the average number of servers used change as the priority on server-level load balancing (γ) increases.
TABLE 5. Homogeneous Case: Bandwidth and Resource Demand (partially recovered rows)
r2: 10 10 10 10
r3: 15 15 15 15
r4: 20 20 20 20

FIGURE 3. β versus maximum link utilization
FIGURE 4. β versus average link utilization

FIGURE 5. γ versus maximum server resource utilization
3) Case-3 (Maximum Server Resource Utilization and Average Number of Servers Used)
4) Case-4 (Effect of Increment in Bandwidth Demand)
Fig. 7 presents how the bandwidth cost changes with the increase in bandwidth demand for different levels of priority on the different components of the objective function. […] balancing. This is due to the fact that when priority is given to link-level load balancing, the allocation of bandwidth demand is done as evenly as possible among the different links.

Figure panel: Bandwidth Demand vs. Average Link Utilization (legend: minimizing bandwidth cost, link-level load balancing, server-level load balancing)

FIGURE 7. Bandwidth demand versus bandwidth cost

FIGURE 9. Resource demand versus average server resource utilization
FIGURE 10. α versus bandwidth cost for different demand sets (h1-h5)

FIGURE 11. β versus maximum link utilization for different demand sets
For link-level load balancing, the loads are distributed among all possible destinations of the fog layers and the cloud. Furthermore, since the resource capacity of the upper fog layer and the cloud is higher, there is less possibility of the servers being […] sets. At this condition, the demand is mostly met by layer-1 fog. Since layer-1 fog is the layer nearest to the CPs in the three-layer fog-cloud architecture, the homogeneous demand set results in the lowest bandwidth consumption. As the difference between the amounts of bandwidth increases […]

TABLE 6. Heterogeneous Case: Bandwidth and Resource Demand

FIGURE 13. γ versus maximum server resource utilization

FIGURE 14. γ versus average number of servers used

2) Case-2 (Maximum and Average Link Utilization)
Fig. 11 and Fig. 12 represent how the maximum link utiliza- Fig. 14 shows the changes of the average number of server
tion and the average link utilization vary with the increase used with γ. As γ is increased, the average number of server
of β, respectively, for different levels of heterogeneity in bandwidth demand sets. Initially, from these analyses, it can be stated that the maximum link utilization and the average link utilization decrease as we increase β, which is similar to our previous findings. However, from this study, we can also observe that as we increase the level of heterogeneity of the demand set, the variation in the value of both the maximum and the average link utilization reduces. For the highest level of heterogeneity in the demand set, this variation is the lowest.

For both maximum and average link utilization, h1 (highest level of heterogeneity) does not have room for large variations, whereas for h5 (homogeneous demand set), we notice the highest level of transition. h1 needs to use upper layers even for lower values of β. However, at lower levels of heterogeneity, the demand is initially satisfied by the demand's own region of layer-1 fog for lower values of β. Then, as we increase β, the load starts to be distributed among all fog layers and the cloud. Therefore, the larger variation in the value of the maximum and the average link utilization is observed for lower levels of heterogeneity.

3) Case-3 (Maximum Server Resource Utilization and Average Number of Servers Used)
Fig. 13 shows γ versus the maximum server resource utilization. As γ increases, the maximum server resource utilization decreases. When γ increases, i.e., priority is given to the minimization of maximum server utilization, the load is forced to be distributed among different layers, and thereby the maximum server resource utilization decreases. For resource demand set r1 (when all cluster points have the same resource demand), the maximum resource utilization reaches the lowest value at a lower value of γ as compared to resource demand set r5 (when one cluster point has a much larger resource demand than the other cluster points).

As γ increases, the average number of servers used increases as the load is distributed among different layers. For resource demand set r1, the average number of servers used reaches the maximum value at a lower value of γ as compared to resource demand set r5. This can be considered as just the opposite of the γ versus maximum resource utilization case.

C. KEY OBSERVATIONS
• When the weight factor associated with bandwidth cost minimization (α) is the highest, the demand is satisfied by the demand's own region only (assuming sufficient capacity is available). Thus, the lowest bandwidth cost is ensured. As we continue to decrease α compared to the other weight factors, the demand starts to be distributed. Initially, it goes to the nearby regions of layer-1 fog, then layer-2 fog, and finally, the cloud. Thus, with the lowest value of α, the demand is distributed to all possible destinations, which results in the highest bandwidth cost. This illustrates how the cooperative three-layer fog-cloud computing system works.
• When the weight factor to prioritize link level load balancing (β) is relatively the highest, the load is distributed; with the lowest value of β, the demand is satisfied in its own region.
• As we increase the value of β, the maximum link utilization decreases while the average link utilization increases. This indicates that, with the increase in β, the use of alternate routes increases.
• The maximum server resource utilization decreases and the average number of servers used increases with the increase in γ.
• For the heterogeneous resource demand: as the level of heterogeneity increases, the variation in the range of minimum and maximum values of the bandwidth cost with α, the maximum link utilization and the average
12 VOLUME 4, 2016
This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/.
link utilization with β decreases.
• For the heterogeneous resource demand, as the level of heterogeneity increases, the maximum server resource utilization reaches the lowest value and the average number of servers used reaches the highest value at a lower value of γ.
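The trade-offs summarized in these observations can be reproduced with a toy version of the composite objective. The sketch below is not the paper's actual MILP formulation; it is a minimal linear program with hypothetical costs and capacities (a single region's demand and three candidate destinations: own-region layer-1 fog, a remote fog, and the cloud), where u stands for the maximum link utilization:

```python
# Minimal sketch of the weighted composite objective (hypothetical numbers,
# not the paper's full MILP): route 10 units of demand over three
# destinations with per-unit bandwidth costs COSTS and link capacities CAPS,
# minimizing  alpha * (bandwidth cost) + beta * (maximum link utilization u).
from scipy.optimize import linprog

COSTS = [1.0, 2.0, 4.0]    # own-region fog, remote fog, cloud (cheap -> expensive)
CAPS = [10.0, 10.0, 10.0]  # link capacity toward each destination
DEMAND = 10.0

def route(alpha, beta):
    # Decision variables: x1, x2, x3 (flow to each destination), u (max utilization).
    obj = [alpha * c for c in COSTS] + [beta]
    # Utilization constraints: x_i / cap_i <= u  ->  x_i - cap_i * u <= 0
    a_ub = [[1, 0, 0, -CAPS[0]],
            [0, 1, 0, -CAPS[1]],
            [0, 0, 1, -CAPS[2]]]
    b_ub = [0, 0, 0]
    a_eq = [[1, 1, 1, 0]]  # all demand must be served
    b_eq = [DEMAND]
    bounds = [(0, None)] * 3 + [(0, 1)]
    res = linprog(obj, A_ub=a_ub, b_ub=b_ub, A_eq=a_eq, b_eq=b_eq, bounds=bounds)
    x1, x2, x3, u = res.x
    return [x1, x2, x3], u

flows_cost, _ = route(alpha=1.0, beta=0.01)      # cost-dominated: stay local
flows_bal, u_bal = route(alpha=0.01, beta=1.0)   # balance-dominated: spread out
```

With α dominant, the entire demand stays on the cheapest (own-region) link; with β dominant, the solver spreads the flow evenly across all destinations, mirroring the first and third observations above.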
V. CONCLUSION
In this paper, the minimization of bandwidth cost and efficient resource management in a cooperative three-layer fog-cloud computing environment is studied. We first presented a novel MILP optimization formulation for the three-layer fog-cloud computing environment. We then investigated several scenarios to evaluate how efficient utilization of network and server resources can be ensured in such an environment by leveraging SDN. Both homogeneous and heterogeneous network and server resource demand generation from CPs are considered. Then, the variation in performance is analyzed in terms of bandwidth cost, links' and servers' utilization, and the number of servers used. To the best of our knowledge, no other work has considered bandwidth cost minimization and link and server level load balancing jointly in a cooperative three-layer fog-cloud computing environment. Furthermore, by tuning the three weight factors associated with the components of the composite objective function, i.e., bandwidth cost, link level utilization, and server resource utilization, the priority level can be controlled. This allows the service provider to choose which component of the objective function should get more priority based on the situation in the network. Our designed framework also provides the opportunity to change this priority level from time to time due to the temporal and spatial variation in requirements. Therefore, this work can help fog service providers to allocate their limited resources effectively. Furthermore, our optimization model can be used as an important benchmark to compare the performance of any fast solution heuristic. Also, this model can be used to evaluate any fog computing topology.

In the future, we plan to extend this work in a number of ways. We plan to study this framework at large scale using a heuristic that can be used in real time. We also want to incorporate server and network consolidation using virtualization techniques in a dynamic traffic environment to achieve further improvement in resource provisioning. A priority-based request processing and resource allocation can also be studied. This can help to understand, if the requests generated from one CP get priority, how much improvement is achieved in the service provisioning of the prioritized CP and how other CPs are affected in this process. We also plan to investigate how QoS can be ensured through traffic classification; therefore, a study can be conducted considering throughput guarantee as a requirement for delay-sensitive services.