User Incentive Model and Its Optimization Scheme in User-Participatory Fog Computing
Computer Networks
journal homepage: www.elsevier.com/locate/comnet
Article info

Article history:
Received 12 September 2017
Revised 7 August 2018
Accepted 22 August 2018
Available online 24 August 2018

Keywords:
Internet of things
Fog computing
Software defined networking
Fog container placement
User incentive
Optimization

Abstract

Although fog computing is recognized as an alternative computing model to cloud computing for IoT, it is not yet widely used. The replacement of network equipment is inevitable to implement fog computing; however, the entity in charge of the replacement, which requires high cost, is unclear, as is the entity in charge of operating the infrastructure. To solve these feasibility problems, we propose an incentive-based, user-participatory fog computing architecture. In terms of inducing user participation in the proposed architecture, users are first classified into four categories according to their tendencies and conditions, and the types of incentives, which are paid as compensation for participation, the payment standard, and the operation model are presented in detail. From the perspective of fog service instance deployment, the instances should be deployed to reasonably minimize the incentives paid to the participating users of the proposed architecture, which is directly linked to maximizing the profitability of the infrastructure operator, while maintaining performance. The optimization problem for the instance placement that achieves the above design goal is formulated as a mixed-integer nonlinear program and then linearized. The proposed instance placement scheme is compared with several schemes through simulations based on actual service workloads and device power consumption.

© 2018 The Authors. Published by Elsevier B.V.
This is an open access article under the CC BY-NC-ND license.
(http://creativecommons.org/licenses/by-nc-nd/4.0/)
https://doi.org/10.1016/j.comnet.2018.08.011
W.-S. Kim, S.-H. Chung / Computer Networks 145 (2018) 76–88 77
sources in computing and networking. Nishio et al. [23] proposed a mathematical framework and architecture for heterogeneous resource sharing based on service-oriented utility functions, and optimized service delay. It is necessary to consider heterogeneous resources in the development of a fog container; however, the differential contributions of users and the provision of services as rewards for them were not considered. Hong et al. [24] proposed a simplified programming abstraction, called the PaaS programming model, that can orchestrate highly dynamic heterogeneous resources at different levels of the network hierarchy and support low latency and scalability.

In addition, there are studies on fog device selection schemes that can reduce carbon dioxide emissions on the green technology side, and studies that allocate resources so as to preferentially use eco-friendly energy-based data centers [25–27]. Yao et al. [28] proposed a technique to reduce the latency of vehicular cloud computing by using vehicles and roadside units in vehicular ad-hoc networks. In particular, in terms of VM migration, an optimization that ultimately minimizes network cost was suggested. Deng et al. [29] studied the tradeoffs between power consumption and delay in terms of the interplay and cooperation between cloud computing and fog computing. These studies have analyzed key elements of fog computing through an in-depth approach, but assumed that fog devices were already widely deployed and that all their resources could be controlled. In other words, the optimization of fog container placement was described entirely from a network point of view, which is not suitable for the user-driven fog infrastructure construction model presented in this paper.

There are many studies on fog computing operation architectures [30]. Among them, OpenFog aims at an open fog computing architecture and is developing its reference architecture to ensure a complete interoperability and security system [31]. Masip-Bruin et al. [32] suggested a hierarchical structure in which fog and cloud concepts are mixed. The mixed concept can be built through traditional clouds, intermediate cloud/fog systems, and edge fog devices. The different fogs and clouds are defined as layers of a hierarchical architecture in which the service running on the user's device can decide the best-suited fog/cloud resource on the fly. However, as described above, the difference between the existing studies and the proposed model lies in the construction form, with an emphasis on feasibility, which means increasing the possibility of actual implementation.

Participation inducement by user incentives is a policy used in a wide variety of areas. The incentive in named data networking works as efficiently as BitTorrent [33]. However, depending on the protocol specificity, attention should be paid to the incentive computation overhead rather than the positive effect. Li et al. [34] proposed a technique for introducing user incentives into a cellular offloading system. The incentive is paid to users in the congestion area to compensate for degraded experience, and it is also used to maximize the profit of the operator and to perform network scheduling. As these utilization methods show, incentives can be used for a variety of purposes, sometimes to maximize the profitability of the operator.

The main research area of fog computing is fog container placement [2,5,11]. Container placement comprehensively means placing a fog server instance, in the form of a Docker container or a VM, on a suitable fog device for purposes such as performance enhancement. Container placement optimization is a type of bin packing problem that places containers with individual characteristics into fog devices with limited resources to achieve specific goals, such as minimizing power consumption.

Among the studies related to container placement, there is a container placement and mapping optimization study that minimizes the power consumption of a hierarchical fog computing architecture for specific services such as crowd sensing [35]. The architecture consists of four layers, namely a data generator layer, fog layer, cloud layer, and data consumer layer. The data generator layer consists of various high-reliability, low-cost sensor nodes and mobile devices, which are widely distributed in public infrastructure to monitor state changes over time. In the fog layer, the fog devices connect to a local group of sensors to perform data analysis quickly. The devices collecting the data from the generators rapidly respond to the data consumers and transfer the data to the cloud layer for big data analysis. The cloud layer performs big data analysis and operations such as large-scale event detection, long-term pattern recognition, and relationship modeling. Finally, the data consumers consist of a wide range of entities, including individual users, actuators, companies, and research institutions, which request and receive specific categories of sensing data from the various layers.

The system in the aforementioned study groups the consumer, the fog access point providing access to the infrastructure, and the VM, which is the provider operating in all fog devices. A link bandwidth between them was allocated to process the data requests from the consumer. That is, the VM was placed to minimize the allocated link bandwidth between the consumer and the closest fog access point, and between the fog access point and the fog device on which the VM with the data desired by the consumer was located. However, this study considered only specific services, such as crowd sensing, and assumed that fog access points providing access to the infrastructure with the desired data were densely located throughout the city. Further, it has the limitation of not considering the main resources of fog computing, such as the workload and storage.

Fog container placement is similar to VM deployment in cloud computing from a resource utilization perspective. There are a number of studies on VM deployment schemes that reduce power consumption in the cloud data center. Gupta et al. [36] proposed a scheme to reduce resource waste, such as increased idle memory, when VMs with high CPU usage are deployed on single physical machines (PMs). This scheme considered the CPU and memory usage of each VM and the computing capacity of the PM. Moreover, it normalized the resource usage by grouping the VMs to be deployed, and minimized the number of PMs through the deployment of the VM groups. However, since this scheme assumed a cloud environment, it does not consider network usage or the various types of services and devices.

In this paper, we propose an incentive management method that works with the user-participatory fog computing architecture of our previous study [37]. Unlike the basic operations of the previous study, the participating users receive incentives based on the amount of shared resources, and the fog manager deploys containers so as to reasonably minimize the incentives. In the simulation, we compare the placement scheme that minimizes power consumption, the scheme that considers only the usage of the users, and the proposed optimization scheme, in terms of sharing incentives.

3. User incentives in user-participatory fog computing

3.1. User-participatory fog computing architecture

The realization of fog computing can be achieved through collaboration between cloud operators, IoT service operators, network infrastructure providers, and switch vendors. However, in this existing realization model, when a network service operator desires to provide a fog service in a specific area, the following parameters are unclear: the method of determining the specific area, the source of traffic usage information, the device of the specific vendor to be used, the network infrastructure of the specific provider to be used, and the entity responsible for routing traffic to the fog container.
Fig. 2. Conceptual diagram of the proposed incentive-based, user-participatory fog computing architecture.
In this paper, we propose the incentive-based, user-participatory fog computing architecture based on the fog portal to address these parameters. Fig. 2 shows the conceptual diagram of the proposed architecture. The fog portal is a server located on the internet, and it performs resource mediation between users and network services. The fog container placement is handled by the fog manager within the local network, and a fog device, such as a switch, hub, Wi-Fi AP, or IoT GW, is installed directly by a user who wants to benefit from the desired fog services. The user connects the device to the local network and registers the device information in the fog portal. By purchasing and registering a fog device, the user participates in the infrastructure construction and is called a participating user.

The fog manager is located within the local network and monitors and controls the computing resources of the fog devices in the network. That is, the fog manager tracks the resource usage of all containers in the network, maintains the available resource information of each device, and synchronizes this information with the portal. In other words, similar to the SDN controller, the fog manager is a local entity that has control over the computing resources in the network.

The fog portal analyzes and processes detailed statistical information gathered from the SDN controller, and provides such information to the appropriate network service operator. The service operator determines the necessity of the fog service based on the information received through the user interface provided by the fog portal. If it is determined that the fog service is required, the service developer implements the fog container in the form of a Docker image, and the implemented fog container is deployed by the fog portal. More detailed basic operations of this model have been presented in our previous work [37].

3.2. User incentives

The ideal goal of users participating in the fog infrastructure construction is to improve the performance of their main network services through fog computing. With this goal, a user purchases and installs a fog device to improve performance through the fog service, or to use a service that operates based on fog computing only. For example, a user may declare a specific smart home service to the fog portal by installing and registering a Wi-Fi AP as a fog device; in this case, the device operates as both the AP and the smart home hub. Similarly, a user at home can lease the computing resources of a fog device in the home to smoothly run smartphone 3D games that require high computing capacity. A fog device can also be operated as dedicated equipment for network services, such as healthcare, telemedicine, video security, and power measurement services.

Ideally, the user participates in expanding the fog infrastructure in order to use fog services, but this goal alone may be of limited help in increasing the likelihood of participation. Compared with the existing realization model, the proposed model has an additional operation whereby
the fog portal acquires the operating authority for the fog device in order to mediate resources. This could cause the user to believe that their device may experience disadvantages such as operation at unexpected points, exposure to security threats, and excessive power usage, which leads to a decrease in the participation rate. Therefore, appropriate incentives must be paid to induce the users to delegate the operating authority of their devices to the portal. In other words, the incentives are an indispensable component of user participation.

Fog computing is an architecture in which various business operators are intertwined; thus, the incentive payment relationships can become complex. Fig. 3 shows the kinds of incentives paid to the participating user. In this case, an incentive is a payment to a value provider from a beneficiary, made in various ways. Since the proposed architecture is user-participatory, it is necessary to describe the incentives from the perspective of the participating users. In other words, for incentives, the giving and receiving entities and the payment criteria need to be defined. In the proposed model, the incentives are paid to the participating users by the non-participating users, the participating users using excessive resources, ISPs, cloud operators, and the fog portal, based on respective criteria.

The types of incentive can be classified as the participation incentive (PI) and the sharing incentive (SI). The PI is the incentive paid in return for participation in fog infrastructure construction. The benefits of establishing fog infrastructure are not limited to users. Fog computing has the ability to effectively handle, within the local network, the traffic of network services that would otherwise be processed through the core network. This core feature can significantly reduce utilization of the cloud data center and the network infrastructure, leading to lower operating expenses (OPEX). However, it is very difficult to determine how much each participating user has contributed to the OPEX savings of the business operators. To determine the contribution, the following should be considered for each participating user: the performance of the fog device provided by the user, the resource usage of the user, the network traffic reduced by the user's device, and the characteristics of the network services used by the user. The container placement periodically performed by the fog manager further increases the difficulty of this determination. Therefore, the ideal incentive payment criterion for infrastructure operators is the even distribution of part of the OPEX savings to all participating users, regardless of device performance or usage.

The fog portal generates revenue from the network services in the form of monthly fees, and the participating users have a significant stake in this revenue, similar to crowdfunding. Therefore, the portal is required to pay part of the net income to the participating users as the PI. The participating users can install fog devices with various performance levels and determine policies regarding the level of computing resources of the device to be shared. The profitability of the portal differs considerably according to the resource sharing policy of the users. If a user sets the policy to share all resources, the portal can put more network services into the network and increase its profitability; in addition, flexibility and resilience are increased for operations such as container placement. In contrast, if the policy is set not to lease resources that exceed the usage of the services used by the user, there may be insufficient resources to put network services into the network, and the container placement flexibility becomes comparatively low. Thus, the PIs provided by the fog portal are paid in proportion to the amount of shared resources of the participating users, not the performance of the device.

In the proposed model, users are classified into four categories: the non-participating user (NU), the participating user using excessive resources (EPU), the private participating user (PPU), and the sharing participating user (SPU). NUs can be distinguished from participating users depending on whether or not a fog device is registered. Participating users are identified by whether they are using more resources than they have contributed: users who use excessive resources are EPUs; otherwise, they are general participating users. The general participating users are divided into PPUs and SPUs according to their own sharing policies or tendencies. The PPU is a user who does not want to provide surplus resources to other users, and the SPU is a user who wants to lease resources to receive SIs. In other words, PPUs do not want any additional power consumption above their own usage, and SPUs want to rent out resources and obtain SIs regardless of power consumption. The user must bear the power cost for the power consumed at his/her device, and, of course, the SI should be higher than the additional power fee resulting from the workload used by other users. Because electricity bills for the same power consumption vary according to local laws, it is assumed that the participating users of the proposed system will set the resource sharing policy to "do not share" if the additional rate is higher than the SI for excess workloads.

In addition to the PI, the SI is also essential for operational resilience and user engagement. In particular, it is essential to provide appropriate incentives to the participating users to support NUs and EPUs. When an NU wants to use the fog service, he/she pays a fee to the fog portal in proportion to the usage amount. Likewise, when an EPU uses the device resources of other participating users, he/she pays a fee to the portal. The portal pays the collected fees to participating users who have devices that provide extra resources; the paid SIs are minimized by the container placement optimization described later in the paper. In other words, the SIs are paid with the fees collected from the NUs and EPUs, based on the leased computing resources.

Table 1
Summary of user incentives.

Type | Paid by         | Payment criterion                         | NU           | EPU                | PPU                   | SPU
PI   | ISPs            | Participation                             | –            | O                  | O                     | O
PI   | Cloud operators | Participation                             | –            | O                  | O                     | O
PI   | Fog portal      | Resource provided with the infrastructure | –            | Available resource | User's resource usage | Available resource (total device resource)
SI   | Other users     | Leased resource                           | Pay the fees | Pay the fees       | –                     | Excess usage according to the container placement

Table 1 summarizes the incentive payment criteria for each user category. The NUs are not entitled to the PI and pay a fee to the fog portal as they use the service. The EPUs can receive the PIs, but they use a portion of the resources of the SPUs and pay a fee for the overuse. The PPUs receive only the PIs, in the form of a revenue distribution from the fog portal that is proportional to their average resource usage. The SPUs can receive all incentives; in the case of the SI, the incentives are given in proportion to the usage of other users when the fog containers for other users operate on the devices of the SPUs.

The fog portal does not charge a commission for resource rental between the users, and only takes a profit from the power consumption reduced by the fog manager's optimization of the container placement. That is, the portal can maximize its income by minimizing the total power consumption of the network.

An incentive typically takes a monetary form, such as a fee reduction or cash. In the case of the PI with a certain criterion, determining this amount does not present any significant problems: the sum of the PIs for the OPEX savings of the infrastructure operators cannot exceed the OPEX savings, and the profit distribution of the portal cannot exceed its profit. However, in the case of the SI, the proposed system only defines the resource usage and the SIs as proportional. That is, there is no mention of incentive payments for a particular unit of resource usage, which is directly related to the portal revenue and user participation. If the base incentive and fee are high, the number of NUs or EPUs and the profit of the SPUs decrease. If they are low, the expectation of incentives and the possibility of user participation decrease. As such, the determination of the base incentive for a particular unit of resource usage establishes a tradeoff relationship. Obtaining appropriate base incentives and fees for leased resource units is beyond the scope of this study.

Table 2
Parameters of the system model.

Symbol | Description
U      | The set of users in the network
S      | The set of services in the network
D      | The set of fog devices in the network
T      | The matrix of traffic related to services from users
β_usd  | The selection variable, which indicates whether a container of a service s for a user u is on a device d or not
τ_us   | The traffic generated by a user u for a service s in a unit time
W_d    | The workload capacity of a device d
w_a^s  | The workload per traffic of the application container of a service s
w_v    | The workload per traffic of a VNF container
N_v^s  | The number of VNFs of a service s
N_c^d  | The number of fog containers in a device d
ε_d^i  | The power efficiency of a device d for loading a container, including the idle state
ε_d^a  | The power efficiency of a device d for operating a container in the active state
ε_d^n  | The power efficiency of a device d for transmitting data
e_d    | The power consumption of a device d
ê_d    | The owner power consumption of a device d

4. Fog container placement to minimize sharing incentives

4.1. Considerations

There are several considerations in the placement of the fog container by the fog manager. The first is that, even if the container is placed anywhere in the local network, the effect on the quality of service (QoS) and user experience (UX) is not significant. The most important features of fog computing that enhance UX are consideration of the user location and the reduction in latency. A local network with a 16–24 bit subnet prefix is not geographically broad, unlike a WAN or the internet. This means that the approximate location of the user can be determined regardless of the container placement in the local network. The other consideration is that the round-trip time in the local network is not long. The latency of the internet ranges from a few tens of milliseconds to a few seconds, where the latter can be perceived by users, whereas the latency of the local network is only a few milliseconds. This means that even if the container is placed on the device farthest from the user in the local network topology, the user only needs to wait a few milliseconds to use the service, which is not critical. Therefore, we have limited the container placement to be performed within only one local network; interworking with neighboring networks will be covered in future work.

Second, the network services are configured to support scalability through flexible resource management within each local network. In the network, each service can operate across multiple devices, and flexible resource management is achieved through the execution of multiple identical containers. A container that provides a major function of a service is called an application container; that is, a service can consist of a plurality of application containers. In addition, a service can include network functions, such as a database, load balancer, and firewall, according to its operating policy. Each function is a virtualized network function (VNF) instead of separate hardware, and each VNF operates as a Docker container. Since the VNF containers are closely related to the application containers and require rapid responses, the VNFs of the corresponding services must be operated on all fog devices on which the application container of a specific service operates.

Third, it is assumed that the participating users seek the SIs first. Under this assumption, all SPUs want to operate the maximum number of containers on their devices to obtain SIs, while taking risks such as equipment operation cost and noise. The fog portal can maximize its revenues by optimizing the container placement to reduce overall power consumption while properly positioning the containers on the power-efficient devices. Therefore, the fog manager must deploy the fog containers to reduce total power consumption and to minimize the amount of SIs.
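This deployment goal, reducing total power while minimizing the SIs paid for leased resources, can be illustrated with a toy brute-force search over placements. The two-user, two-device setup and all constants below are our own invented example, not the paper's formulation:

```python
from itertools import product

# Toy setup: two users, each using one single-container service and owning one device.
# Placing user u's container on a device d != u means leasing d's resources (an SI is paid).
eff = [1.0, 2.0]   # energy per unit workload on each device (device 0 is more efficient)
load = [3.0, 3.0]  # workload generated by each user's service
SI_RATE = 0.4      # sharing incentive per unit of leased workload (invented rate)

def energy(p):
    # total power of a placement p, where p[u] is the device hosting user u's container
    return sum(eff[d] * load[u] for u, d in enumerate(p))

def si(p):
    # SIs paid for workload running on devices the user does not own
    return SI_RATE * sum(load[u] for u, d in enumerate(p) if d != u)

placements = list(product(range(2), repeat=2))
min_power = min(placements, key=lambda p: (energy(p), si(p)))  # power-only objective
min_si = min(placements, key=lambda p: (si(p), energy(p)))     # SI-first objective

print(min_power)  # consolidates both containers on the efficient device, at the cost of SIs
print(min_si)     # keeps each container on its owner's device, so no SIs are paid
```

A real instance replaces this brute force with the mixed-integer program of Section 4 and its linearization; the toy version only shows why the two objectives pick different placements.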
4.2. System model

The optimization design goal is to minimize the SIs. The minimization of the SIs is directly linked to the maximization of the revenue of the portal, which is achieved by minimizing the power consumption of the workload while maintaining network usage at a level at which congestion does not occur. The participating users connect various types of fog devices, and the power efficiency, storage capacity, and acceptable workload of each device differ.

The fog service operates in container form across multiple fog devices in the network. Each container consumes workload for processing each request from the users across several devices. A single request from one user does not have to be processed by multiple containers; thus, a single service request from a user is directed to a specific container on a certain device. In other words, a user can use several services, but the traffic for a single service of the user is processed in only one container. The selection variable, which indicates that the traffic for service s generated by user u is directed to device d, can be expressed as:

β_usd ∈ {0, 1}, ∀u ∈ U, s ∈ S, d ∈ D    (1)

If β_usd is set to 1, the traffic related to service s from user u should be sent to device d; if β_usd is zero, it should not be sent.

The workload refers to the average amount of CPU, memory, and I/O work that a particular container uses per unit time. In this paper, we express it as a normalized value to avoid excessive complexity; other studies represent the resources used by processes as workloads [18]. As described above, a service consists of the VNF containers used by the service and the application containers in one device. The workload of an application container differs according to the type of service. For example, if a service only stores data, the workload is low; if a service performs CPU-intensive real-time video processing, the workload will be quite high. Therefore, the workload of an application container is different for each service and can be expressed as a constant value w_a^s according to the network input data amount. In contrast, in the case of a VNF container, the workload for the input data amount can be represented by a constant value w_v, since there is no significant difference between the services.

The traffic element τ_us represents the average traffic per unit time that user u exchanges with the container of service s in the network. The selection variable should not be set to zero for a particular user who generates traffic for a particular service. In addition, a certain service, which communicates with the user, operates the application container on only one device; thus, the following constraints are defined:

τ_us / T ≤ Σ_{d∈D} β_usd, ∀u ∈ U, s ∈ S    (2)

Σ_{d∈D} β_usd ≤ 1, ∀u ∈ U, s ∈ S    (3)

work, considering the service usage by the users. The power consumption of each device can be defined as follows:

e_d = e_d^i + e_d^a + e_d^n, ∀d ∈ D    (5)

The power consumption per unit time of the device is the sum of the base power consumption of the containers, the power consumption of the containers in the active state, and the power consumption of networking. First, to formulate the power consumed by the idle-state containers in the device, we need to calculate the number of containers in each device, as follows:

N_c^d = Σ_{s∈S} min(1, Σ_{u∈U} β_usd) · N_v^s + Σ_{s∈S} Σ_{u∈U} β_usd, ∀d ∈ D    (6)

The number of application containers in the device is equivalent to the number of users using the services operated by the device. The VNF containers exist in the number of VNFs of each service operated in the corresponding device.

The power consumption of the idle-state containers can now be defined as follows:

e_d^i = ε_d^i · N_c^d, ∀d ∈ D    (7)

Note that the device type depends on the user selection, so the power efficiency of the installed device differs. Moreover, the number of application containers depends on the number of requests by the users. In addition, the power consumed by the active-mode containers and the power consumption for networking can be defined, respectively, as follows:

e_d^a = ε_d^a Σ_{s∈S} Σ_{u∈U} β_usd · τ_us · (N_v^s · w_v + w_a^s), ∀d ∈ D    (8)

e_d^n = ε_d^n Σ_{s∈S} Σ_{u∈U} β_usd · τ_us, ∀d ∈ D    (9)

If there is no control by the fog manager, the users run the containers of the services they use on their own devices. The owner power consumption of device d can be expressed as follows:

ê_d = ê_d^i + ê_d^a + ê_d^n, ∀d ∈ D    (10)

As in (5), this consists of three kinds of power consumption. However, the selection constant β̂_usd used to calculate the owner power consumption is set in advance so that the containers of the services used by the users operate within their own devices. Therefore, the components of (10) are defined as follows:

ê_d^i = ε_d^i Σ_{s∈S} Σ_{u∈U} β̂_usd · (N_v^s + 1), ∀d ∈ D    (11)

ê_d^a = ε_d^a Σ_{s∈S} Σ_{u∈U} β̂_usd · τ_us · (N_v^s · w_v + w_a^s), ∀d ∈ D    (12)

ê_d^n = ε_d^n Σ_{s∈S} Σ_{u∈U} β̂_usd · τ_us, ∀d ∈ D    (13)
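The per-device power model of Eqs. (5)–(9) can be checked numerically with a small script. The placement matrix β and all constants below are made-up toy values; the sums follow the formulas directly:

```python
# Toy instantiation of the per-device power model, Eqs. (5)-(9).
# beta[u][s][d] is the selection variable; all numbers are illustrative.
U, S, D = range(2), range(2), range(2)  # users, services, devices
beta = [[[1, 0], [0, 1]],               # user 0: service 0 -> dev 0, service 1 -> dev 1
        [[1, 0], [1, 0]]]               # user 1: both services -> dev 0

tau = [[2.0, 1.0], [1.0, 3.0]]          # tau[u][s]: traffic per unit time
Nv = [1, 2]                             # Nv[s]: number of VNFs of service s
wv, wa = 0.1, [0.5, 0.8]                # workload per traffic: VNF, application

eps_i, eps_a, eps_n = [0.2, 0.3], [1.0, 1.2], [0.05, 0.05]  # device efficiencies

def power(d):
    # Eq. (6): Nv[s] VNF containers per service active on d, one app container per (u, s) on d
    n_app = sum(beta[u][s][d] for s in S for u in U)
    n_vnf = sum(min(1, sum(beta[u][s][d] for u in U)) * Nv[s] for s in S)
    e_i = eps_i[d] * (n_vnf + n_app)                                       # Eq. (7)
    e_a = eps_a[d] * sum(beta[u][s][d] * tau[u][s] * (Nv[s] * wv + wa[s])
                         for s in S for u in U)                            # Eq. (8)
    e_n = eps_n[d] * sum(beta[u][s][d] * tau[u][s] for s in S for u in U)  # Eq. (9)
    return e_i + e_a + e_n                                                 # Eq. (5)

print([round(power(d), 3) for d in D])
```

Swapping in a different β reproduces the effect the placement optimization exploits: the same traffic yields different total power depending on which devices host the containers.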
container placement process. However, it may be more reasonable to include this device in the process, but to place the containers so that its consumption does not exceed the owner power consumption of this user. To take this into account, the tolerance constant κ_u is defined as follows:

κ_u = { ∞, if u ∈ U_SPU;  1, if u ∈ U_PPU },  ∀u ∈ U    (15)

The relationship between the owner power consumption and the actual power consumption based on the tolerance constant can be defined as the following constraint:

e_d ≤ κ_u ê_d + m,  ∀κ_u ∈ [1, ∞], m > 0, {u, d} ∈ U × D    (16)

m is a very small positive number; it is added so that this relationship remains well defined when the owner power consumption of an SPU is zero. That is, if the tolerance constant of a PPU is set to one, the containers are not placed on his/her device excessively, so that no SI will occur. In addition, the tolerance constant of the SPUs is infinite to reflect their tendency to want the maximum number of containers placed on their devices.

In the case of the EPU, there are no additional constraints to be defined. They are treated as users who use network services that cannot be processed using all of the available workload of their devices. In other words, the owner power consumption is always larger than the actual power consumption of the device of the user. The fees for the excess usage are computed separately from the objective function associated with the SI and therefore need not be considered in the optimization problem. NUs can be treated the same as other participating users, but the workload capacity of the device of the user is always zero.

The optimization problem is mathematically formulated as follows:

min Σ_{u∈U} θ_u   s.t. (1), (2), (3), (4), (16)    (17)

Eq. (17) is a mixed-integer non-linear programming (MINLP) problem, requiring variable relaxation [38]. The objective function has two minimization functions, and it should be linearized. Fortunately, the linearization is simply achieved by using an auxiliary variable and the big-M method. The functions are linearized as follows:

δ_sd = min(1, Σ_{u∈U} β_usd),  ∀s ∈ S, d ∈ D    (18)

λ^1_u, λ^2_u ∈ {0, 1},  ∀{u, d} ∈ U × D    (26)

The optimization problem (17) can be redefined as an integer linear programming problem as follows:

min Σ_{u∈U} θ_u   s.t. (1)–(4), (16), (19)–(26)    (27)

4.4. Examples of container placement

Table 3 shows three examples of container placement. In the examples, there are three SPUs and two NUs, and the service usage and workload capacity of the registered devices are shown in the table. Optimizations 1 and 2 do not consider the SI. Optimization 1 deployed the containers to minimize power consumption, optimization 2 deployed the containers so that the user usage and power consumption were proportional to a certain degree, and optimization 3 deployed the containers to minimize the SI.

The total resource usage of the optimization 1 result is the lowest compared to the other optimization results. This means that the system used the lowest power compared to the other optimizations to handle the same workload, indicating that the container placement reduced the overall power consumption in the network. According to the result of optimization 1, the user C device with the best power efficiency consumed 90 W; thus, 90 SIs should be paid to user C, while a total of 30 in fees was collected from users D and E. As a result, the fog portal paid additional SIs in the network and incurred losses. It should be noted that users A and B did not pay the usage fee, despite having higher service usages than the power consumption of their devices. This is merely an individual benefit from the container placement, so there is no basis for collecting fees from the users concerned. As such, if the containers are placed to minimize power consumption, there may be a situation in which the fog portal suffers losses.

Optimization 2 minimized the power consumption while maintaining it at a certain proportion of the user service usage. Although the total power consumption increased compared to that of optimization 1, the portal did not suffer losses from the SI management. This is because it considered the service usage of the SPUs, and thus the SIs to be paid to the SPUs were reduced significantly. However, no container is operated on the user C device with the best power efficiency, because user C did not use any services. Optimization 3 placed the containers to minimize the SI. The portal benefits from the SI management because the collected fees are higher than the SIs.
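Equation (18) is the nonlinear step that the auxiliary variable removes. Since the β_usd are binary, min(1, Σ_u β_usd) is simply the logical OR of the β_usd, which a linear model can represent with δ_sd ≥ β_usd for every u and 0 ≤ δ_sd ≤ 1 (the minimization then drives δ_sd down to the OR value). The paper's own constraints (19)–(25) are not reproduced in this excerpt, so the sketch below only verifies the underlying identity:

```python
from itertools import product

# For binary selection variables, the nonlinear expression
# min(1, sum_u beta_usd) of Eq. (18) equals the logical OR max_u beta_usd,
# which is what the linear auxiliary variable delta_sd can encode.
for beta in product((0, 1), repeat=4):
    assert min(1, sum(beta)) == max(beta)
print("identity holds for all 16 binary assignments")
```

This is why the placement problem admits an exact linear reformulation rather than an approximation.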
Table 3
Examples of three container placement results (PC = power consumption, SI = sharing incentive).

User   Category  Owner power (W)  Max device power  Provided power  Opt. 1 PC/SI  Opt. 2 PC/SI  Opt. 3 PC/SI
A      SPU       30               80                50              0 / 0         20 / 0        30 / 0
B      SPU       60               100               40              0 / 0         90 / 30       60 / 0
C      SPU       0                100               100             90 / 90       0 / 0         25 / 25
D      NU        10               0                 −10             0 / −10       0 / −10       0 / −10
E      NU        20               0                 −20             0 / −20       0 / −20       0 / −20
Total            120              280               160             90 / 60       110 / 0       115 / −5
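The SI columns of Table 3 follow directly from the owner and actual power consumption columns. The short check below recomputes the SI totals under the rule stated in the text — an SPU is paid only the actual consumption in excess of its owner consumption (negative SIs are not paid), while an NU is charged its owner consumption as a fee, shown as a negative SI. The per-user numbers are taken from the table itself:

```python
# Recompute the SI totals of Table 3 from its PC columns.
# name: (category, owner PC, PC under optimizations 1-3)
users = {
    "A": ("SPU", 30, (0, 20, 30)),
    "B": ("SPU", 60, (0, 90, 60)),
    "C": ("SPU", 0, (90, 0, 25)),
    "D": ("NU", 10, (0, 0, 0)),
    "E": ("NU", 20, (0, 0, 0)),
}

def si(category, owner, pc):
    if category == "SPU":
        return max(0, pc - owner)   # SI paid only for consumption beyond owner usage
    return -owner                   # NU: owner consumption is charged as a fee

totals = [sum(si(cat, owner, pcs[i]) for cat, owner, pcs in users.values())
          for i in range(3)]
print(totals)  # matches the Total row: [60, 0, -5]
```

Under optimization 1 the portal collects 30 in fees but pays 90 in SIs, a net loss of 60, which is exactly the situation the text warns about when placement ignores the SI.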
Table 4
Simulation parameters.

86 W.-S. Kim, S.-H. Chung / Computer Networks 145 (2018) 76–88

cluding the owner power consumption. Negative SIs are not considered for the SPUs, and SIs are not paid to the PPUs. Since an NU has zero available workload on its own device, the owner power consumption is charged as is.

Fig. 4 shows the SI payment amount of each placement scheme according to the change in the number of NUs while the total number of users is fixed at ten. In the legend, minimizing power consumption (MPC) is the scheme for minimizing power consumption, minimizing power consumption considering owner usage (MPU) is the scheme for minimizing power consumption considering owner usage, and minimizing sharing incentive (MSI) is the scheme for minimizing the amount of the SI payment. The tolerance constant of the MPU was fixed at 1.2, and this scheme places the containers in such a way that unreasonable power consumption is not imposed on a user when there is no SI. In the simulation, there is no PPU, so when the number of NUs increases from zero to nine, the number of SPUs decreases from ten to one. In addition, the total workload to requested workload ratio (WR) was set at 20% and 50%, respectively.

Fig. 4. Amount of the total SI payment of each container placement scheme according to the number of NUs; (a) WR 20%, (b) WR 50%.

The simulation results in Fig. 4(a) show that the SI payment amount of the proposed MSI scheme is lower than that of the other schemes in all cases. According to the results of the MSI scheme, the amount of SI payment increases as the number of NUs increases from zero to seven, and then decreases. The initial increase is because more workloads need to be processed on the SPU devices owing to the increase in the number of NUs. The later decrease is due to the total requested workload being reduced by the decrease in the number of SPUs, while the WR is fixed at 20%. The MPU scheme pays an additional 11.88 incentives on average compared with the MSI until the number of NUs reaches seven. Because the MPU has a tolerance constant of 1.2 considering the owner usage, it places the containers to consume less power than the SPU owner power consumption for almost all devices. In other words, the power consumption due to the workload is added to the device with good power efficiency at a level of less than 120% of the owner usage, and this additional 20% leads to an increase in the SI payment amount. In contrast, the MSI tries to avoid an unnecessary increase in the SI amount by placing the containers to consume only as much power as the owner power consumption of all SPUs.

The MPC places most containers on one or two power-efficient devices provided there is workload capacity. In this process, the power consumption of most SPU devices is zero; however, the power consumption of a particular device will rise to its maximum. Obviously, this leads to a significant increase in the amount of SI payment. The SI payment in the MPC scheme decreases as the number of SPUs decreases over the entire case, because the total requested workload is reduced owing to the decrease in the number of SPU devices. When there is one SPU, the SI amount of the three optimization results is equal. The result of Fig. 4(b) shows that the
total amount of SI is increased compared to that of Fig. 4(a), as the requested workload is higher. The other parts are similar to the simulation results for a WR of 20%.

Fig. 5 shows the comparison of the three optimization results in terms of the revenue of the portal, with the WR fixed at 20%. The revenue of the portal is the fees collected from the NUs, represented by a dashed line in Fig. 5, minus the SI payment in Fig. 4(a). First, in the case of the MPC scheme, the portal suffers a loss in every case. This occurs because the containers are placed with no consideration of the SIs. The MPU scheme only generates revenue for the portal when the number of NUs is from two to five, and in the other cases it incurs losses. The proposed MSI scheme generates neither income nor expenditure when there is no NU, and revenue is generated in all cases thereafter except where there are eight or nine NUs. The simulation uses the WR, which determines the workload generated based on the total available workload in the network. Therefore, if the number of SPUs becomes smaller than the number of NUs, the average service usage of the NUs is also reduced, resulting in a loss in the latter half of the simulation. Regardless of the available workloads, if all users randomly generate traffic, the revenue of the portal increases for every scheme, as the collected fees increase.

Fig. 5. Comparison of each container placement scheme in terms of the revenue with 20% WR (y-axis: revenue of the fog portal, with a break-even point marked and the collected fees shown as a dashed line).

Fig. 6 shows the amount of SI payment for each placement scheme as the number of PPUs changes. The total number of users and the number of NUs were fixed at ten and two, respectively, and the number of PPUs increases from zero to seven. Therefore, the number of SPUs decreases from eight to one. As shown in the previous simulation results, as the number of SPUs decreases, the amount of SI payment in the MPC decreases and the payment amounts of the MPU and MSI schemes increase. However, in contrast to the previous simulation, since the PPUs are among the participating users, the amount of the SI is not equal when there is one SPU. This is because the containers of the network services that the NUs use can be deployed somewhere among the devices of the PPUs and the SPU.

Fig. 6. Comparison of each container placement scheme in terms of the revenue according to the number of PPUs.

Fig. 7 shows the SI payment amount of each optimization scheme when there is no NU, EPU, or PPU; in other words, only the SPUs exist. As a result of simulation in an environment with random traffic, the SI payment of the MSI is zero, that of the MPU is 26.64, and that of the MPC is 104.67. The amount of SI payment of the MSI is clearly lower than that of the other schemes. This implies that the MSI-based container placement does not incur any loss to the fog portal when there is no situation in which the NUs or the like lease resources from other users.

Fig. 7. Total sharing incentives paid in each container placement scheme when only the SPUs exist in the network.

Fig. 8. Power consumption by user device according to each optimization scheme (two panels: users u1–u4 (NU) and u5–u6 (PPU); users u7–u12 (SPU); y-axis: power consumption (W). Summary values embedded in the figure — Opt. schemes: MPC / SPC / MSI; Total PC (W): 127.46 / 179.43 / 194.20; Incentives: 86.45 / 25.37 / 17.00).

Fig. 8 shows the power consumption per user according to the container placement by each optimization when user usage is
fixed. In the simulation, the total number of users is 12, with four NUs and two PPUs. The figure shows the owner power consumption and the power consumption according to the MPC, MPU, and MSI, from left to right. Here, the owner power consumption of an NU is estimated based on the average value of the device specifications of the other participating users, because the calculation cannot be performed directly as NUs have no devices. The MPC intensively placed the containers on the power-efficient devices of users 10, 11, and 12, regardless of the usage amounts of the participating users. Since the MPU considers owner usage, the containers are placed to consume power similar to the owner power consumption. It should be noted that, although the performance of the device of user 12 is quite good, the owner usage of the user is zero, so the containers have not been placed on the device.

Finally, as a result of the container placement based on the MSI, the containers were placed so that only users 10 and 12 exceeded the owner power consumption. The other containers were placed so that the power consumption resulting from the container placement process on the devices of the remaining users was as close as possible to the owner power consumption without exceeding it. As a result of the MPC, MPU, and MSI container placements, the total power consumption of all devices in the network was 127.46, 179.43, and 194.20 W, respectively. The total power consumption of the MPC was the lowest and that of the MSI was the highest. In contrast, the SI payments were 86.45, 23.37, and 17.00, respectively, and the collected fees were 56.13.

6. Conclusions and future work

Although fog computing is recognized as a computing model for the IoT era, it is still not widely used. This is attributed predominantly to the uncertainty regarding both the massive replacement of network equipment and the infrastructure operators. In this paper, we propose an incentive-based, user-participatory fog computing architecture based on the fog portal and its associated container placement scheme, so as to enhance the feasibility, performance, and profitability of fog computing. In the proposed model, the user connects the purchased fog device to the network, e.g., at home or in the office, and registers the device in the fog portal. The fog portal performs an intermediary role between users and network service operators. Participating users are motivated by incentives and receive more revenue by providing other users with their device resources. Moreover, the fog portal mediates the sharing incentives between the participating users who provide the resources and the other users who use the resources. The fog manager performs container placement on the local network and places the containers to minimize the sharing incentives paid. The proposed container placement method is compared through simulation with the power consumption minimization scheme and the power consumption minimization scheme considering the owner usage.

In terms of future work, research into more efficient management techniques is required through collaboration with neighboring local networks, rather than standalone operation. Network collaboration for user-participatory fog computing could create additional research challenges, such as routing changes, protocols between SDN controllers, and information sharing between fog managers. In addition, it is necessary to build a more sophisticated model of the power consumption versus workload to improve the accuracy of the power consumption prediction. Finally, to mitigate the complexity of the proposed optimization scheme, distributed convex optimization, such as the alternating direction method of multipliers (ADMM), needs to be applied [42–43].

Acknowledgement

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. NRF-2018R1D1A1B07049355).