
An Energy Saving Algorithm Based on User-Provided Resources in Mobile Cloud Computing

Xing Liu¹, Chaowei Yuan¹, Zhen Yang², Zhongwei Hu¹

¹School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Beijing, China
²School of Computer Science, Beijing University of Posts and Telecommunications, Beijing, China. yangzhen@bupt.edu.cn

The work was supported by the National Science Foundation of China under Grant 61173017.

Abstract—In Mobile Cloud Computing (MCC), data processing and storage for Mobile Terminals (MTs) are provided on a remote cloud. This technology can extend battery lifetime and increase processing power, but it raises several significant issues, such as the problem of dead spots or coverage holes. This problem is ignored in existing energy saving algorithms and mobile cloud platform design schemes, and as a result service delay or disconnection can occur. To this end, we address the problem of dead spots or coverage holes and propose an energy saving algorithm based on a user-provided resources platform, called Task Offloading using Self-organized Criticality (TOSOC). The user-provided resources platform provides processing capacity from a cluster of MTs when they are out of the service range of MCC. Considering the energy limitation of MTs, we propose an energy saving algorithm for the user-provided resources platform under a service delay constraint. Numerical results validate the correctness and effectiveness of the proposed TOSOC and demonstrate energy savings while the service delay requirement is met.
Keywords-Mobile Internet; Cloud Computing; Mobile Cloud
Computing; Self-organized Criticality

I. INTRODUCTION
Mobile Cloud Computing (MCC) is regarded as an emerging research area that combines Cloud Computing with the Mobile Internet. In MCC, the cloud processes data on behalf of Mobile Terminals (MTs), so the battery lifetime of MTs can be extended while a higher running speed is achieved [1]. Clearly, the cloud plays a key role in this technique. However, the wireless connection is not stable, and dead spots or coverage holes are usually caused by disconnection (i.e., when WLAN and 3G are not available). In this situation, the cloud can no longer provide service for the MTs, which is a significant issue that should be addressed [2]. Straightforwardly, this problem could be tackled by deploying additional base stations; however, in several scenarios such a solution is ineffective because of its high overhead.
Relay node assignment is therefore a cost-effective alternative, in which relay nodes act as MAC-layer repeaters to extend the range of the base station [3]. On the one hand, employing relay nodes is a low-cost option to fill coverage holes and extend range in many scenarios. On the other hand, it incurs multi-hop scheduling, delay jitter and a higher probability of packet loss [4]. For delay-sensitive and real-time applications, these problems may cause long service delay or a higher outage probability, and thus employing relay nodes might not achieve the expected performance. Moreover, the problem of dead spots or coverage holes is very important but usually ignored by existing mobile cloud platform design schemes [5][6][7][8]. In these platforms, mobile users are pure consumers; their local resources, such as computing capacity, are ignored even though they could serve as a backup when the cloud is disconnected.

Fig. 1: An example of the user-provided resources platform
Inspired by reference [9], which observes that customers could contribute their own idle computing resources as a complement of great potential to data-center-based clouds, we propose a user-provided resources platform to improve the quality of service (QoS) of MCC when the cloud service is disconnected.
We illustrate the impact of this overlooked problem with the example shown in Fig. 1. There are two Base Stations, A and B, and the cloud can provide service for the MTs in the coverage of A and B. When A fails, MT c can hand over to B, but MTs a and b are out of service. To resolve this situation, existing work prefers relay techniques that forward the tasks of a and b through c to B (i.e., a→b→c→B→cloud). Indeed, reconnection can be provided in this way, but the multi-hop path incurs additional service delay, which has a significant impact on performance, especially in delay-sensitive networks. Moreover, a huge waste of resources may be caused because the computing capacity of the MTs is ignored. In fact, the tasks

at the overloaded MT a (tasks 1, 2, 3, 4 and 5) could be computed separately by the lightly loaded MTs b and c instead of by the cloud. Taking the computing capacity of MTs into account, MT a can transfer its tasks to MT b over one hop or to MT c over two hops, compared with 4 hops (cloud to Base Station to MT c to MT b to MT a) in the existing relay technique.
In summary, the advantages of exploiting the computing capacity of MTs when the cloud is disconnected are as follows: (1) the service delay can be reduced; (2) energy can be saved; (3) the cost is very low. This observation inspires us to fully exploit the computing capacity of MTs in this work. To this end, we propose an energy saving algorithm called Task Offloading using Self-organized Criticality (TOSOC), which adjusts the critical load threshold to satisfy the required service delay constraint.
The rest of this paper is organized as follows. Section II describes the user-provided platform and the problem formulation. Section III describes Task Offloading using Self-organized Criticality and the critical threshold design. Section IV shows the experimental results. Finally, Section V concludes the paper and discusses potential future directions.
II. SYSTEM MODEL AND PROBLEM STATEMENT

In this section, we describe the system model based on our proposed user-provided resources platform and then formulate the energy saving problem.

A. System Model

Fig. 2: The user-provided resources platform. (a) Framework. (b) Offloading.

A framework of the user-provided resources platform is illustrated in Fig. 2(a). The platform consists of the platform manager, the task manager, and task sensing. The platform manager is responsible for the resource allocation of MTs. The task manager is in charge of running the task offloading. Task sensing mainly detects information such as computing capacity, bandwidth and storage. In this user-provided resources platform, an MT can directly offload some tasks to its neighbors for computing without the cloud. We use $V$ with $|V| = N$ to represent the set of MTs in the user-provided resources platform. Each vertex $v \in V$ denotes an MT, and $w_{vu}$ represents the tasks offloaded from vertex $v$ to $u$. The task offloading relationship between two MTs is shown in Fig. 2(b).
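To make the division of responsibilities concrete, the following is a minimal Python sketch of the three components described above; all class, attribute and method names are our own illustration and are not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class MT:
    """A mobile terminal with the quantities that task sensing reports."""
    name: str
    cpu_capacity: float  # tasks per time unit (assumed unit)
    bandwidth: float     # Mbit/s (assumed unit)
    storage: float       # MB (assumed unit)
    load: int = 0        # currently queued tasks

    def accept(self, task):
        self.load += 1

class TaskSensing:
    """Detects computing capacity, bandwidth and storage of an MT."""
    def probe(self, mt):
        return {"cpu": mt.cpu_capacity, "bandwidth": mt.bandwidth, "storage": mt.storage}

class PlatformManager:
    """Responsible for resource allocation: tracks the MTs joined to the platform."""
    def __init__(self, sensing):
        self.sensing = sensing
        self.members = []

    def register(self, mt):
        self.members.append((mt, self.sensing.probe(mt)))

class TaskManager:
    """In charge of running the task offloading (TOSOC, described in Section III)."""
    def __init__(self, manager):
        self.manager = manager

    def offload(self, task):
        # Placeholder policy: hand the task to the least-loaded registered MT.
        target, _ = min(self.manager.members, key=lambda m: m[0].load)
        target.accept(task)
```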

B. Problem Formulation

For clarity of presentation, the parameters used in the following discussion are listed in Table I.

TABLE I: Parameter description for offloading

  Symbol                         Meaning
  $T_v^l(t)$ / $T_v^u(t)$        Time taken when the task of $v$ is executed on $v$ / $u$
  $T_{uv}^o(t)$ / $E_{uv}(t)$    Time / energy taken to transfer data from $v$ to $u$
  $T_v^q(t)$ / $T_{uv}^q(t)$     Time the task of $v$ spends queued on $v$ / $u$
  $E_v^l(t)$ / $E_v^u(t)$        Energy taken when the task of $v$ is executed on $v$ / $u$
  $E_i(t)$                       Energy consumed in the idle state due to offloading
  $S_v(t)$                       Total number of tasks at $v$ at the $t$-th execution
  $P_i$ / $P_l$                  Idle / active power of the processor

For MT $v$, let $w_v(t) \in \{0, 1, 2, 3, 4\}$ follow a uniform distribution; $w_v(t) = i$, $0 \le i \le 4$, denotes the number of requests arriving at MT $v$ between the beginning of the $t$-th execution and the $(t+1)$-th execution. If an arriving request is executed on MT $v$ at the $t$-th execution, the energy consumption is $E_v^l(t) = P_l T_v^l(t)$. For transferring data from MT $v$ to $u$, $w_{vu}(t)$ denotes the number of arriving requests offloaded from $v$ to $u$ at the $t$-th execution, so $\sum_{u \in V} w_{vu}(t)$ is the total number of tasks offloaded by $v$ at the $t$-th execution. Offloading also incurs a storage cost; for simplicity, we ignore this cost since the energy consumption of storage is much smaller than that of processing. The energy consumed in the idle state due to offloading is defined as [5]

$E_i(t) = P_i \max_{v \in V} \{ w_{vu}(t) [ T_v^u(t) + T_{uv}^q(t) ] \}$   (1)

Based on (1), the energy consumption function and its corresponding service delay can be defined as follows:

$E(t) = E_i + \sum_{v \in V} \Big[ w_v(t) - \sum_{u \in V} w_{vu}(t) \Big] E_v^l(t) + \sum_{v \in V} \sum_{u \in V} w_{vu}(t) \big[ E_v^u(t) + E_{uv}(t) \big]$   (2)

and

$T(t) = \sum_{v \in V} \max\Big[ \Big( w_v(t) - \sum_{u \in V} w_{vu}(t) \Big) \big( T_v^l(t) + T_v^q(t) \big),\ w_{vu}(t) \big( T_v^u(t) + T_{uv}^o(t) + T_{uv}^q(t) \big) \Big]$   (3)

The service delay $T(t)$ must satisfy

$T(t) \le T_{resp}$   (4)

where $T_{resp}$ is the service delay requirement.

Let $\bar{E}(t) = \lim_{T \to \infty} \frac{1}{T} \sum_{t=1}^{T} E(t)$ denote the average energy consumption. Our objective is to minimize the average energy consumption, thus

$\min_{w(t)} \bar{E}(t), \quad t = 0, 1, \ldots, T-1$   (5)

subject to

$\bar{T}(t) \le T_{resp}$   (6)

$\lambda \ge \lambda_{\min}$   (7)

where $\lambda$ is the average arriving request interval and $\lambda_{\min}$ is the minimum average service delay that can be achieved; thus (7) ensures system stability.

Since the user-provided resources platform is organized simply from self-motivated mobile user resources, the task offloading in the user-provided resources platform is subject to Self-organized Criticality [10]. In order to solve the problem in (5)-(7) effectively, we next present an offloading algorithm based on Self-organized Criticality, which makes a tradeoff between energy consumption and service delay.
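To make the cost model concrete, the following is a minimal Python sketch of how the per-execution energy (2) and delay (3) could be evaluated for a given offloading decision; all function and variable names, and the placeholder constants, are our own assumptions rather than anything specified in the paper.

```python
import itertools

# Sketch of the cost model in (1)-(4). Names and constants are illustrative
# assumptions, not values taken from the paper.
P_I = 0.030      # idle power P_i in watts (placeholder)
T_RESP = 0.100   # service delay requirement T_resp in seconds (placeholder)

def energy_and_delay(V, w, w_off, T_l, T_u, T_o, T_q_loc, T_q_rem, E_l, E_u, E_tx):
    """V: list of MT ids; w[v]: arriving requests at v; w_off[(v, u)]: tasks
    offloaded from v to u; the remaining maps hold the per-task times and
    energies of Table I (seconds and joules)."""
    # Idle energy (1): idle power times the longest wait for offloaded work.
    e_idle = P_I * max((w_off.get((v, u), 0) * (T_u[v] + T_q_rem[(u, v)])
                        for v, u in itertools.product(V, V) if v != u),
                       default=0.0)
    e_total, t_total = e_idle, 0.0
    for v in V:
        kept = w[v] - sum(w_off.get((v, u), 0) for u in V)
        # Energy (2): local execution for kept tasks, remote execution plus
        # transfer for offloaded tasks.
        e_total += kept * E_l[v]
        e_total += sum(w_off.get((v, u), 0) * (E_u[v] + E_tx[(u, v)]) for u in V)
        # Delay (3): the slower of the local path and the offloaded path.
        local = kept * (T_l[v] + T_q_loc[v])
        remote = max((w_off.get((v, u), 0) * (T_u[v] + T_o[(u, v)] + T_q_rem[(u, v)])
                      for u in V if u != v), default=0.0)
        t_total += max(local, remote)
    return e_total, t_total
```

Constraint (4) then amounts to checking `t_total <= T_RESP` for the chosen offloading decision.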
III. TASK OFFLOADING USING SELF-ORGANIZED CRITICALITY

In this section, we describe Task Offloading using Self-organized Criticality (TOSOC) and then design its critical threshold.

A. Description of TOSOC

For any MT, the decision-offloading procedure is shown in Fig. 3. When an MT receives a new task, the task manager decides whether the MT is overloaded or not. If an MT is overloaded (over the critical threshold $S_z$), it offloads some tasks to its nearby MTs; this operation is called an offload. If the corresponding adjacent MTs are also overloaded, the excess tasks are offloaded further until no MT is overloaded; this continuous offloading process is called an avalanche.

Fig. 3: The decision-offloading procedure of an MT

Let us assume that the time of a task offloading is much smaller than its computing time and can be regarded as zero ($T_{uv}^o(t) \ll T_{uv}(t)$). Moreover, $w_v(t)$ is the number of arriving requests for MT $v$ at the $t$-th execution, and $\sum_{u \in V} w_{vu}(t)$ is the number of tasks offloaded from $v$ to $u$ according to $S_z$. Further, we assume that a task can be processed within one time unit in each MT; since $T_{uv}^o(t) \ll T_{uv}(t)$, an avalanche always begins in the $t$-th execution and ends before the $(t+1)$-th execution. In order to avoid the ping-pong effect, we stipulate that if $v$ has already offloaded some tasks to $u$, $v$ is not allowed to offload tasks to $u$ again within the same avalanche. The details of the TOSOC procedure are as follows.

Step 1: Start the TOSOC by assigning random initial values (tasks) and setting the critical threshold $S_z$ for each MT $v$.

Step 2: Update the tasks $w_v(t)$ at the beginning of the $t$-th execution for each MT $v$, then $S_v(t) \leftarrow S_v(t) + w_v(t)$.

Step 3: If $S_v(t) \ge S_z$, MT $v$ offloads some tasks to its adjacent MTs $v+1$ and $v-1$:

$S_v(t) \leftarrow S_v(t) - [S_v(t) - S_z]$
$S_{v+1}(t) \leftarrow S_{v+1}(t) + \frac{S_v(t) - S_z}{2}$
$S_{v-1}(t) \leftarrow S_{v-1}(t) + \frac{S_v(t) - S_z}{2}$

Step 4: If $S_v(t) < S_z$ for every $v \in V$ (the avalanche has finished), wait for one time unit (from the $t$-th to the $(t+1)$-th execution), set $S_v(t) \leftarrow S_v(t) - 1$, and go to Step 2. Otherwise, go to Step 3 (the offloading still continues).

Intuitively, TOSOC increases energy consumption and decreases application service delay as the critical threshold $S_z$ decreases. Thus, we can minimize the energy cost while meeting the service delay requirement by designing an appropriate threshold $S_z$. The details of the TOSOC formulation are provided in the next subsection.
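As an illustration of Steps 1-4, the following is a minimal Python sketch of one TOSOC execution on a one-dimensional chain of MTs; the function name, the wrap-around at the chain edges, the strict overload test and the safety cap are our own choices, not details given in the paper.

```python
import random

def tosoc_step(S, S_z, rng):
    """One execution of the TOSOC procedure (Steps 2-4).

    S   : list of current task totals S_v(t), one entry per MT
    S_z : critical threshold
    rng : random.Random used for the arrivals w_v(t) ~ Uniform{0,...,4}
    Returns the number of offload events in the avalanche (its size).
    """
    N = len(S)
    # Step 2: add the new arrivals w_v(t).
    for v in range(N):
        S[v] += rng.randint(0, 4)

    # Step 3: relax until no MT is above the threshold (the avalanche).
    offloads = 0
    for _ in range(10 * N):  # safety cap so the sketch cannot loop forever
        overloaded = [v for v in range(N) if S[v] > S_z]  # strict test avoids zero-size offloads
        if not overloaded:
            break
        for v in overloaded:
            excess = S[v] - S_z
            S[v] -= excess
            # Split the excess between the two adjacent MTs (wrapping at the edges).
            S[(v + 1) % N] += excess / 2
            S[(v - 1) % N] += excess / 2
            offloads += 1

    # Step 4: one time unit passes and each MT processes one task.
    for v in range(N):
        S[v] = max(S[v] - 1, 0)
    return offloads

# Example: 100 MTs with threshold S_z = 6, as in the simulation setup.
rng = random.Random(0)
S = [float(rng.randint(0, 5)) for _ in range(100)]
avalanche_sizes = [tosoc_step(S, S_z=6, rng=rng) for _ in range(5)]
```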
B. Critical Threshold Design

For simplicity, we assume that all arriving requests have the same computational complexity and that all MTs have the same processing capacity, so that $E_v^l(t) = E_v^u(t)$ and $T_v^l(t) = T_v^u(t)$. Thus (2) can be simplified as follows:

$E(t) = E_i + \sum_{v \in V} \Big[ w_v(t) E_v^l(t) + \sum_{u \in V} w_{vu}(t) E_{uv}(t) D(w_{vu}(t)) \Big]$   (8)

where $D(w_{vu}(t))$ denotes the number of offloads for the requests arriving between the $t$-th and the $(t+1)$-th execution.

Before further discussing the minimization of the average energy consumption $\bar{E}(t)$, we first present a lemma from [10], which is related to the derivation of $\bar{E}(t)$.

Lemma 1: If $D(k)$ is the distribution function of the avalanche size for Self-organized Criticality, then this distribution function can be derived as

$D(k) \propto k^{-\tau}, \quad \tau \approx 3$   (9)

Based on Lemma 1 and [5], we derive two theorems that characterize the performance of TOSOC. Specifically, Theorem 1 and Theorem 2 characterize the relationship between the service delay and the average energy consumption.

Theorem 1: Assume the average arriving request interval $\lambda > \lambda_{\min}$. If the critical threshold satisfies $S_z \le 2\big(\frac{T_{resp}}{T_v^u(t)} - 1\big)$, then the average service delay $\bar{T}(t)$ of TOSOC is no larger than the required service delay $T_{resp}$ (i.e., $\bar{T}(t) \le T_{resp}$).

Proof: Since all arriving requests have the same computational complexity and all MTs have the same processing capacity, we have $T_v^q(t) = S_v(t)\,T_v^l(t)$ and $T_{uv}^q(t) = S_u(t)\,T_v^u(t)$. Moreover, from Lemma 1 we obtain $\lim_{T \to \infty} \frac{1}{T} \sum_{u \in V} w_{vu}(t) = E[D(k)] \approx 1$. In addition, $\lim_{T \to \infty} \frac{1}{T} w_v(t) = \frac{4+0}{2} = 2$. Furthermore, for an average arriving request interval $\lambda > \lambda_{\min}$, TOSOC is stable; therefore the average task queue length is less than $\frac{S_z}{2}$ [10], and we obtain

$\lim_{T \to \infty} \frac{1}{T} \Big( w_v(t) - \sum_{u \in V} w_{vu}(t) \Big) \big( T_v^l(t) + T_v^q(t) \big) \le (2-1)\Big( T_v^l(t) + \frac{S_z}{2} T_v^l(t) \Big) = \Big( \frac{S_z}{2} + 1 \Big) T_v^l(t)$   (10)

Since $T_{uv}^o(t) \ll T_{uv}(t)$, we obtain

$\lim_{T \to \infty} \frac{1}{T} w_{vu}(t) \big( T_v^u(t) + T_{uv}^o(t) + T_{uv}^q(t) \big) \approx \lim_{T \to \infty} \frac{1}{T} w_{vu}(t) \big( T_v^u(t) + T_{uv}^q(t) \big) \le \Big( \frac{S_z}{2} + 1 \Big) T_v^u(t)$   (11)

Based on (3), we obtain

$\bar{T}(t) \le \Big( \frac{S_z}{2} + 1 \Big) T_v^u(t)$   (12)

Moreover, since (12) must satisfy (6), we can derive

$S_z \le 2\Big( \frac{T_{resp}}{T_v^u(t)} - 1 \Big)$   (13)

Thus, if (13) is satisfied, $\bar{T}(t) \le T_{resp}$.
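As a small numerical illustration of (13), the largest admissible threshold for a given delay requirement could be computed as follows; the 25 ms per-task execution time is a hypothetical value, not a measurement from the paper.

```python
import math

def max_threshold(t_resp, t_exec_remote):
    """Largest integer S_z satisfying S_z <= 2 * (T_resp / T_v^u - 1), i.e. (13).

    t_resp        : service delay requirement T_resp in seconds
    t_exec_remote : per-task execution time T_v^u on a neighbouring MT, in seconds
    """
    return math.floor(2.0 * (t_resp / t_exec_remote - 1.0))

# Hypothetical example: with T_resp = 100 ms and an assumed 25 ms per-task
# execution time, any S_z up to 6 keeps the delay bound (cf. S_z = 6 in Section IV).
print(max_threshold(0.100, 0.025))  # -> 6
```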
Theorem 2: Assume $S_c > 0$ and $0 < S_z \le S_c$, so that $\lim_{T \to \infty} \frac{1}{T} D[w_{vu}(t)] \approx |V|^{1/S_z}$. For an average arriving request interval $\lambda > \lambda_{\min}$, we have the following inequality:

$\bar{E}(t) \le 2 E_v^l(t) + |V|^{1/S_z} E_{uv}(t)$   (14)

where $|V|$ denotes the total number of MTs.

Proof: TOSOC is stable for an average arriving request interval $\lambda > \lambda_{\min}$. For MT $v$, the number of offloaded tasks is $S_v(t) - S_z$ when $S_v(t) \ge S_z$, so the number of tasks remaining at $v$ is the critical value at the $t$-th execution and there is no energy consumption in the idle state. Thus, (8) can be rewritten as

$E(t) = \sum_{v \in V} \Big[ w_v(t) E_v^l(t) + \sum_{u \in V} w_{vu}(t) E_{uv}(t) D(w_{vu}(t)) \Big]$   (15)

The average energy consumption can be written as

$\bar{E}(t) = \lim_{T \to \infty} \frac{1}{T} \sum_t E(t)$   (16)

Since $\lim_{T \to \infty} \frac{1}{T} D[w_{vu}(t)] \approx |V|^{1/S_z}$, $\lim_{T \to \infty} \frac{1}{T} w_v(t) = 2$ and $\lim_{T \to \infty} \frac{1}{T} \sum_{u \in V} w_{vu}(t) \approx 1$, letting $T \to \infty$ in (16) yields inequality (14). This concludes the proof.

Note that (12) provides an upper bound for the average service delay. When $S_z$ increases, this upper bound also increases, which may result in a longer waiting time before a request is executed. However, according to (14), the average energy consumption decreases when $S_z$ increases. Hence, TOSOC enables MTs to save energy and ensures that the application service delay satisfies the given time constraint through (13).
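The bound in (14) is easy to evaluate directly, as the short sketch below shows; the per-task energy values are placeholders, not measurements from the paper.

```python
def energy_upper_bound(n_mts, s_z, e_local, e_transfer):
    """Upper bound (14) on the average energy consumption per execution.

    n_mts      : |V|, the number of MTs in the platform
    s_z        : critical threshold S_z
    e_local    : per-task local execution energy E_v^l in joules
    e_transfer : per-task transfer energy E_uv in joules
    """
    return 2.0 * e_local + n_mts ** (1.0 / s_z) * e_transfer

# Hypothetical values: 100 MTs, S_z = 6, 0.05 J per local task, 0.01 J per transfer.
print(energy_upper_bound(100, 6, 0.05, 0.01))
```

The bound decreases as `s_z` grows, which is exactly the energy/delay tradeoff described above.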

IV. SIMULATION RESULTS

In this section, we evaluate the performance of the user-provided resources platform in terms of average energy consumption and service delay. The experiment is based on a 500 m × 500 m square area. The other parameters are set as follows: $P_i$ = 30 mW, $P_l$ = 320 mW, and $P_{uv}$ = 40 mW [5]. The number of executions is set to T = 3000.

Fig. 4: Energy consumption and service delay comparison
To gain insight into the performance of TOSOC, we consider a network scenario in which a cluster of MTs lies in a signal blind zone. Comparing the performance of the proposed algorithm for $S_z = i$ (i = 4, 5, 6, 7), Fig. 4 shows the energy consumption and service delay of our TOSOC scheme with 100 MTs. When $S_z$ increases, the service delay also increases; on the other hand, the average energy consumption decreases. The dotted line denotes the time constraint ($T_{resp}$ = 100 ms). It can be observed that our algorithm helps to save more energy when a proper critical threshold $S_z$ is set. This is mainly because the proposed scheme offloads the tasks of an MT to its adjacent MTs dynamically according to the number of arriving requests at the $t$-th execution and $S_z$. As shown in Fig. 4, when $S_z$ = 6, the proposed scheme not only finishes tasks within the required service delay but also saves more energy of the MTs.
To further analyze the performance of our scheme, we compare the service delay and energy consumption of task processing among our platform, the remote cloud platform with relay nodes [6], and the case without any mechanism, under different network rates; "without mechanism" means that the tasks are executed by the MT itself without offloading any task.
The relationship between the service delay and the network rate is shown in Fig. 5. It can be observed that when the network rate is low, the proposed platform reduces the service delay significantly. This is because the proposed platform helps an MT find and access an adjacent computing resource when its network rate is low. Since relay nodes incur multi-hop scheduling, delay jitter and a higher probability of packet loss, the remote cloud platform has a high service delay. When the network rate is high, the traditional cloud platform has a lower service delay than our platform, because the computing speed of the remote cloud platform is much higher than that of our platform. As the network rate grows, the service delay without mechanism stays almost constant at 300 ms, because the tasks of mobile users are executed by the MTs themselves without offloading any task.
In this figure, we also see that the service delay of the proposed platform stays almost constant as the network rate grows. This is mainly because the proposed platform makes a tradeoff between energy consumption and service delay.

Fig. 5: Service delay comparison

Fig. 6: Average energy consumption comparison
The comparison of average energy consumption is shown in Fig. 6. The average energy consumption of the remote cloud platform is higher than that of our platform when the network rate is very low. The probable explanation is that the remote cloud platform using relay nodes suffers more service interruptions at a very low network rate. As the network rate grows, the average energy consumption without mechanism stays almost constant at 180 mJ; massive numbers of tasks easily lead to MT crashes when no tasks are offloaded, so its energy consumption is higher than that of the others. The average energy consumption of the remote cloud platform shows the same trend as its service delay, because its average energy consumption decreases as service interruptions decrease with an increasing network rate. It is interesting to note that the average energy consumption of our platform decreases slowly as the network rate increases. This is because the proposed platform allows some MTs to offload their tasks to the remote cloud platform when they can reach the remote cloud without a relay. From the above discussion, we conclude that the proposed platform can serve as a supplementary scheme to MCC when the network rate is low.

V. CONCLUSION

In areas with dead spots or coverage holes, MTs can obtain only poor service or even no service from MCC. In order to improve the QoS of MCC, we propose a user-provided resources platform, which allows MTs to contribute their own idle computing resources. Moreover, we formulate a mathematical model to reduce the service delay and extend the battery lifetime of MTs by offloading tasks to nearby MTs. Based on Self-organized Criticality, a task offloading algorithm is developed for the user-provided resources platform to save energy and meet the service delay requirement.
However, this study has only examined the optimal energy consumption policy of our algorithm. Three open issues can be further explored. (1) Service Availability. Unlike a traditional cloud platform, there is no guarantee that a particular mobile user's local resources will always be online for the user-provided resources platform; thus, we have to ensure high service availability while integrating users' resources. (2) Mobile User Incentive. It remains unclear whether all mobile users are willing to contribute their own idle computing resources for free; therefore, the design of a better incentive model is still necessary for our user-provided platform. (3) Business Model. Although the user-provided resources platform can serve as a complement to the remote cloud platform, it remains unclear who is in charge of running the task offloading and the proposed platform; therefore, the design of a better business model is still necessary for our user-provided resources platform.

REFERENCES
[1] D. Niyato et al., "Game theoretic modeling of cooperation among service providers in mobile cloud computing environments," in Proc. IEEE Wireless Communications and Networking Conference (WCNC), 2012.
[2] M. Satyanarayanan, P. Bahl, R. Caceres, and N. Davies, "The case for VM-based cloudlets in mobile computing," IEEE Pervasive Computing, vol. 8, no. 4, pp. 14-23, Oct. 2009.
[3] S. Deb, V. Mhatre, and V. Ramaiyan, "WiMAX relay networks: opportunistic scheduling to exploit multiuser diversity and frequency selectivity," in Proc. 14th ACM International Conference on Mobile Computing and Networking, San Francisco, CA, USA, 2008.
[4] O. Oyman, "OFDMA2A: A centralized resource allocation policy for cellular multi-hop networks," in Proc. IEEE Asilomar Conference on Signals, Systems and Computers, Nov. 2006.
[5] H. Dong, W. Ping, and D. Niyato, "A dynamic offloading algorithm for mobile computing," IEEE Transactions on Wireless Communications, vol. 11, no. 6, pp. 1991-1995, 2012.
[6] H. Chang et al., "Scheduling in MapReduce-like systems for fast completion time," in Proc. IEEE INFOCOM, 2011.
[7] S. T. Maguluri, R. Srikant, and L. Ying, "Stochastic models of load balancing and scheduling in cloud computing clusters," in Proc. IEEE INFOCOM, 2012.
[8] A. Benslimane, T. Taleb, and R. Sivaraj, "Dynamic clustering-based adaptive mobile gateway management in integrated VANET-3G heterogeneous wireless networks," IEEE Journal on Selected Areas in Communications, vol. 29, no. 3, pp. 559-570, 2011.
[9] H. Wang et al., "Measurement and utilization of customer-provided resources for cloud computing," in Proc. IEEE INFOCOM, 2012.
[10] P. Bak, C. Tang, and K. Wiesenfeld, "Self-organized criticality: An explanation of the 1/f noise," Physical Review Letters, vol. 59, no. 4, pp. 381-384, 1987.
