
A new task scheduling algorithm based on value and time for cloud platform

Ling Kuang, and Lichen Zhang

Citation: AIP Conference Proceedings 1864, 020017 (2017); doi: 10.1063/1.4992834


View online: https://doi.org/10.1063/1.4992834
View Table of Contents: http://aip.scitation.org/toc/apc/1864/1
Published by the American Institute of Physics

A New Task Scheduling Algorithm based on Value and Time
for Cloud Platform
Ling Kuang a) and Lichen Zhang b)

Department of Computer Science, Guangdong University of Technology, Guangzhou 510006, China


a) Corresponding author: singmi@126.com
b) lchzhang@gdut.edu.cn

Abstract. Task scheduling, a key part of increasing resource utilization and enhancing system performance, is a perennial problem, especially on cloud platforms. Based on the value density algorithm of real-time task scheduling systems and the characteristics of distributed systems, this paper presents a new task scheduling algorithm, Least Level Value Density First (LLVDF), derived from a further study of cloud technology and real-time systems. The algorithm not only introduces time and value attributes for tasks, it also describes the weighting relationships between these attributes mathematically. This feature allows it to distinguish between different tasks more dynamically and more reasonably. When the scheme is used for priority calculation in dynamic task scheduling on a cloud platform, this advantage lets it schedule and distinguish large numbers of tasks of many kinds more efficiently. The paper designs experiments, using distributed server simulation models based on the M/M/C queuing model with negative arrivals, to compare the algorithm against traditional algorithms and to show its characteristics and advantages.

Key words: Task Scheduling, Task Priority Algorithm, Cloud Platform.

INTRODUCTION
Cloud technology was born with, and is integrating into our lives step by step alongside, the development of distributed systems, virtualization technology and networking. In the age of big data, although the cloud platform has grown by leaps and bounds, cloud computing is still in its development phase. As shown in [1], a great variety of problems and challenges in this field still wait to be discovered, considered and resolved. Among them, scheduling plays a crucial role, because it decides the efficiency of task execution.

Task scheduling, in short, is a set of strategies and protocols used to control the execution sequence of operations in a computer system. In distributed systems, scheduling must additionally concern itself with the sending order of tasks between different service nodes. Therefore, an effective and efficient scheduling method brings large improvements in the utilization of CPU, main memory and external storage.

Today, a wide range of scheduling algorithms have been invented and applied to the distributed task scheduling problem, as mentioned in [2]. Besides mature established algorithms, new algorithms have constantly emerged in recent years; some instructive, innovative and reasonable ideas are presented in [3]-[6]. One major type of task scheduling calculates a priority for each task, statically or dynamically, and controls the order of task execution and transmission through that priority. Indeed, the task priority strategy manages task scheduling according to a specific mathematical model and the weights of different task attributes in the practical application scenario, so these strategies not only use computation and storage resources more efficiently, but also improve the throughput of the whole system.

In conclusion, it is a clear trend and a feasible approach for researchers to seek scheduling algorithms better suited to the cloud computing environment by studying task priority algorithms in distributed systems. In the course of this task scheduling study, a new algorithm was found; it is presented in this article.

Green Energy and Sustainable Development I


AIP Conf. Proc. 1864, 020017-1–020017-7; doi: 10.1063/1.4992834
Published by AIP Publishing. 978-0-7354-1542-3/$30.00

It can be used to dynamically calculate the priority of each task on the cloud platform. The algorithm derives from the value density algorithm of real-time distributed task systems; it calculates the priority of each task from the arrival time, the dwelling time, the relative value and the value level, and uses that priority to optimize task scheduling and increase task throughput. The remainder of the paper is organized as follows. Section 2 presents the details of the Least Level Value Density First algorithm. Section 3 designs a distributed simulation model, based on the M/M/C (C > 1) queuing model with a time-dependent arrival rate λ(t), for verifying the algorithm. Section 4 presents the characteristics and advantages of the algorithm by comparing it against some traditional algorithms in the simulation model. Finally, conclusions are given in Section 5.

THE LEAST LEVEL VALUE DENSITY FIRST (LLVDF)


In this section, the details of the Least Level Value Density First algorithm are presented and discussed. Some concepts of server node repair are then introduced to show the feasibility and advantages of the new algorithm and to prepare for the discussion in the experimental section.

In practical cloud application environments, task execution efficiency has gained more attention than before. The cloud platform requires that each task execute as quickly as possible; in other words, the task completion time should be as close to the estimated execution time as possible. This requirement brings new problems and challenges to researchers studying scheduling. However, such problems have long been studied in real-time distributed systems, as shown in [7]. In real-time database systems, higher efficiency and better timeliness guarantees for task execution are implemented through the setting of deadlines, pre-emption among different tasks, task aborting strategies, task priority calculation and so on. In contrast, in the cloud computing environment many tasks have no deadline and often cannot be aborted at all. Therefore, this article presents a new value density algorithm, based on the Value Density of real-time distributed systems, that gains some of their timeliness advantages even though it removes the deadline, the key element of real-time tasks, in order to suit the practical cloud application environment. The article assumes a task set T = {T_1, T_2, ..., T_m}. For a task T_i in the set, the new algorithm can be simply expressed as follows:

PR_Ti(t) = [1 / VS(TV_Ti, EE_Ti, TV_max)] * [EE_Ti / (EE_Ti + f * (t - AT_Ti))] + LV_Ti    (1)

where t represents the current time and PR_Ti(t) is the priority of T_i at time t. In this algorithm, for tasks T_a, T_b ∈ T with a ≠ b, if PR_Ta(t) > PR_Tb(t) at time t, then the task T_b will be executed before the task T_a. In other words, the algorithm is a least-first scheduling strategy, hence the name Least Level Value Density First. As formula (1) shows, the algorithm consists of two main components.

LV_Ti represents the value level of T_i and is set according to actual requirements or certain conditions when T_i is created. LV_Ti is a nonnegative integer representing the level at which the task T_i is positioned, and its value is unchangeable once decided. LV obeys the following rule: for tasks T_a, T_b ∈ T with a ≠ b, at any moment, if LV_Ta > LV_Tb, then the task T_b will be executed before the task T_a. By these properties, LV_Ti can be used to divide the task set T into level subsets and to decide the execution sequence among the different subsets.

VS(TV_Ti, EE_Ti, TV_max) is a value transformation function that dynamically computes a factor of time and value from TV_Ti, EE_Ti and TV_max. TV_Ti is the task value of T_i, expressing its value relative to the other tasks in the same level; it is a positive integer, assigned according to certain protocols when the task is created, or changed according to certain regulations at run time. TV_max is the maximum task relative value in the system. The relationship between TV_Ti and the task order is exactly the opposite of that of LV_Ti: for tasks T_a, T_b ∈ T with a ≠ b, at time t, if EE_Ta + f*(t - AT_Ta) = EE_Tb + f*(t - AT_Tb) and TV_Ta > TV_Tb, then the task T_a will be executed before the task T_b, and VS(TV_Ta, EE_Ta, TV_max) > VS(TV_Tb, EE_Tb, TV_max).
EE_Ti, the estimated execution time of T_i, is calculated by the system when the task T_i is created.

AT_Ti, the arrival time of T_i, is the time at which T_i arrives at a system or a server node; its exact meaning depends on the situation. In the experimental section of this article, it is set to the time at which a task arrives in the whole cloud system. Finally, f is a nonnegative real number, a reserved regulatory factor in formula (1); it can be used to adjust the relative weight of the estimated execution time and the total dwelling time.

As formula (1) shows, as t → ∞, PR_Ti(t) → LV_Ti, and at t = AT_Ti, when the dwelling time is zero, PR_Ti(t) = 1 / VS(TV_Ti, EE_Ti, TV_max) + LV_Ti. Because LV_Ti has the character of a nonnegative integer, as seen above, the Least Level Value Density First algorithm can keep PR_Ti(t) of each task within the interval (LV_Ti, LV_Ti + 1], and this enables the division of the task set T into subsets according to the value of LV_Ti. On the other hand, as the regulatory factor f → ∞, the new algorithm approaches the First Come and Highest Value First algorithm, while at f = 0 it can be regarded as a combination of the value level parameter and the Highest Value First algorithm. Therefore, for maximum effect, the regulatory factor f should be adjusted to suit the practical application scenario in which the algorithm is used.
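The behavior described above can be sketched in code. The paper does not give a closed form for the value transformation function VS, so the sketch below uses a hypothetical placeholder, vs = 1 + TV/TV_max, chosen only because it is at least 1 and increases with TV, which is enough to satisfy the ordering rules stated above; all function and field names here are illustrative, not from the paper.

```python
from dataclasses import dataclass

@dataclass
class Task:
    lv: int      # value level LV_Ti (nonnegative integer, fixed at creation)
    tv: int      # relative task value TV_Ti within its level (positive integer)
    ee: float    # estimated execution time EE_Ti
    at: float    # arrival time AT_Ti

def vs(tv: int, ee: float, tv_max: int) -> float:
    # Hypothetical stand-in for VS(TV_Ti, EE_Ti, TV_max): any function that is
    # >= 1 and monotonically increasing in TV preserves the paper's ordering
    # rules.  The paper does not specify the actual VS.
    return 1.0 + tv / tv_max

def priority(task: Task, t: float, f: float, tv_max: int) -> float:
    # Formula (1): PR_Ti(t) = [1/VS] * EE / (EE + f*(t - AT)) + LV.
    # A smaller PR means the task is scheduled earlier (least-first).
    density = task.ee / (task.ee + f * (t - task.at))
    return density / vs(task.tv, task.ee, tv_max) + task.lv
```

Under this sketch a lower value level always wins, a higher task value wins within a level, and a waiting task's priority decays toward its level floor LV_Ti as t grows, so long-waiting tasks age into earlier execution.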
In the following, the characteristics and advantages of this algorithm will be observed experimentally, but before
that, the simulation system for experimentation is described in the next section.

EXPERIMENTAL ENVIRONMENT
A simulated cloud system based on the M/M/C queuing model is created for the verification of the Least Level Value Density First algorithm. Queuing theory, a mathematical tool for handling queuing problems, allows researchers to study actual systems more conveniently from a mathematical standpoint; a detailed description can be found in [8].

In computer systems research, many scholars have found that queuing theory can be applied to the mathematical modeling of computer systems, so that actual computer systems can be modeled on this basis [9]. Researchers can build abstract models of the problems to be studied, simplifying the complexity of such studies and reducing research costs. Using queuing theory and simulation tools to model computer systems is one of the main topics in the computer field.
Consequently, cloud technology, which evolved from distributed systems technology, can also be modeled by queuing theory. Indeed, an actual cloud system is more complex than a typical computer system, and this complexity can grow geometrically as the cloud platform evolves. It is therefore feasible and necessary to simplify the study of cloud systems by modeling them with queuing theory; [10]-[14], among other articles, describe methods for effective queuing-theoretic modeling of cloud systems.

For this reason, the cloud system is modeled using the SimEvents tool, based on the queuing-theory modeling approaches proposed in [9]-[14]. The method of building a distributed computer system with SimEvents can be found in [15]; its details are not discussed in this article. This paper then introduces the Least Level Value Density First algorithm into an M/M/C-based cloud system queuing model to observe the characteristics of the algorithm and find its advantages.
According to [14] and practical experience, the arrival rate λ of a cloud system is time-dependent, so we set up a time-dependent arrival rate λ(t) to simulate actual user task arrivals. To simulate fault-and-repair scenarios on the cloud platform, following [9], the article introduces a negative arrival task type whose arrival rate θ obeys a general random distribution.
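One standard way to realize a time-dependent arrival rate λ(t) is Lewis-Shedler thinning for a nonhomogeneous Poisson process: generate candidate arrivals at a constant majorant rate λ_max ≥ λ(t) and accept a candidate at time t with probability λ(t)/λ_max. The sketch below is illustrative; the paper does not specify its actual λ(t), so the sinusoidal daily cycle here is an assumed example.

```python
import math
import random

def nhpp_arrivals(lam, lam_max, horizon, rng=None):
    """Arrival times of a nonhomogeneous Poisson process with rate lam(t)
    on [0, horizon], sampled by thinning against the constant rate lam_max."""
    rng = rng or random.Random(0)
    t, times = 0.0, []
    while True:
        t += rng.expovariate(lam_max)        # candidate arrival at rate lam_max
        if t > horizon:
            return times
        if rng.random() < lam(t) / lam_max:  # keep with probability lam(t)/lam_max
            times.append(t)

# Assumed example: a daily cycle oscillating between 2 and 6 tasks per minute.
lam = lambda t: 4.0 + 2.0 * math.sin(2.0 * math.pi * t / 1440.0)
arrivals = nhpp_arrivals(lam, lam_max=6.0, horizon=1440.0)
```

Thinning requires only that λ(t) never exceed λ_max; the accepted points are then exactly a Poisson process with the desired time-varying intensity.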
According to [14] and practical experience, the repair time of the critical part is 5 minutes, while the repair time of the remaining noncritical parts follows a general stochastic distribution, as does the delay imposed on the execution time of other tasks while the noncritical parts are being repaired. Therefore, this paper introduces a 5-minute critical repair task type and a noncritical repair task type. The repair time of the noncritical task type is variable, and the number of such tasks generated each time follows a general random distribution; while noncritical parts remain unrepaired, the other tasks on the compute node are delayed according to a general random distribution. In addition, the experiment adds an execution delay to each normal task to simulate the system implementing a partial recovery strategy; in other words, the actual execution time of normal tasks is longer than usual while the system is being partially restored.
According to [11], a cloud computing system can be regarded as a combined queuing model of M/M/C together with M/M/C or M/M/1, as shown in Figure 1. The front-end queuing system can be regarded as an M/M/C queuing system; when tasks reach the computing nodes, each node can be regarded as an M/M/C or M/M/1 system according to the characteristics of the actual server. Therefore, the article builds a cloud system model based on queuing theory as the experimental environment of this paper by combining the model of "WEB to the main nodes", the model of "main nodes to the computing nodes" and the model of the task process inside the computing nodes. The next section discusses the experimental results of the Least Level Value Density First algorithm in this environment.
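As a minimal illustration of the basic building block combined above, the following is an event-driven M/M/C queue simulation (constant-rate Poisson arrivals, c exponential servers, FIFO discipline) that reports the time-averaged number of waiting tasks. This is a from-scratch sketch for intuition, not the paper's SimEvents model, and it omits the time-dependent λ(t) and negative arrivals.

```python
import heapq
import random

def mmc_mean_queue(lam, mu, c, horizon, seed=0):
    """Simulate an M/M/C queue (arrival rate lam, service rate mu per server,
    c servers, FIFO) and return the time-averaged waiting-queue length."""
    rng = random.Random(seed)
    t = last = 0.0
    busy = queue = 0
    area = 0.0                     # integral of queue length over time
    next_arrival = rng.expovariate(lam)
    departures = []                # min-heap of scheduled departure times
    while True:
        next_dep = departures[0] if departures else float("inf")
        t = min(next_arrival, next_dep)
        if t > horizon:
            break
        area += queue * (t - last)
        last = t
        if next_arrival <= next_dep:           # arrival event
            if busy < c:                       # idle server: start service now
                busy += 1
                heapq.heappush(departures, t + rng.expovariate(mu))
            else:                              # all servers busy: wait in queue
                queue += 1
            next_arrival = t + rng.expovariate(lam)
        else:                                  # departure event
            heapq.heappop(departures)
            if queue > 0:                      # freed server takes next waiting task
                queue -= 1
                heapq.heappush(departures, t + rng.expovariate(mu))
            else:
                busy -= 1
    return area / last if last > 0.0 else 0.0
```

As a sanity check, for lam = 1, mu = 1, c = 2 (utilization 0.5) the Erlang-C formula gives a mean waiting-queue length of 1/3, which a long run approaches.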

FIGURE 1. Simulation environment based on M/M/C

EXPERIMENTAL RESULTS AND ANALYSIS


As can be seen from the preceding sections, the Least Level Value Density First algorithm proposed in this article absorbs the merits of the value density algorithm to obtain the real-time scheduler's ability to describe the mathematical relationship between time and task value, and thereby the abilities to distinguish among tasks by time and value and to calculate execution or transmission priorities from that distinction. This section discusses the results of the experiments on the Least Level Value Density First algorithm in the above-mentioned cloud computing simulation system.

In the following experiments, a task generator with arrival rate λ(t), which depends on the simulation time t, is introduced into the M/M/C cloud simulation model. Then FCFS, SJF and the Least Level Value Density First algorithm proposed in the article are each added to the simulation model, and the experiments are run to observe the advantages of the Least Level Value Density First algorithm compared with the two traditional algorithms.
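The three disciplines compared here differ only in the key used to pick the next task from the wait queue, which a schematic sketch makes concrete (the job values below are made up for illustration, not the experiment's workload):

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    at: float   # arrival time
    ee: float   # estimated execution time

waiting = [Job("repair", 0.0, 30.0), Job("short", 1.0, 2.0), Job("normal", 2.0, 8.0)]

# FCFS picks the earliest arrival; SJF picks the shortest estimated execution time.
fcfs_next = min(waiting, key=lambda j: j.at)
sjf_next = min(waiting, key=lambda j: j.ee)
# Under SJF the long noncritical repair job is postponed behind every shorter
# task -- the starvation effect discussed below -- whereas LLVDF keys the same
# min() selection on the time- and value-aware priority of formula (1), under
# which a waiting task's priority keeps improving as it ages.
```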
As shown in Figure 2, the average per-minute queue length under Least Level Value Density First is lower than under FCFS, and its growth rate is close to SJF's. In addition, when a negative arrival task reaches the system and causes a partial repair rather than a crash, the total time span that Least Level Value Density First spends on the tasks that partially repair the system is far less than under SJF. Since the estimated execution time of noncritical repair tasks is generally greater than that of normal tasks, SJF is likely to perform only a portion of the noncritical repair tasks and to postpone most of the rest until the system has been running for 300 minutes, making the average task wait queue significantly longer.

FIGURE 2. The average number of tasks waiting in the system


FIGURE 3. Average task waiting time

It can also be seen from Figure 2 that a new negative arrival task reaches the SJF system just as it begins processing the rest of the repair tasks, so the SJF system takes more time to repair the child nodes that need repair. During this repair process, the remaining noncritical repair tasks are likely to affect the actual execution time of other tasks, which leads to the situation shown in Figures 2 and 3.

A system using the hierarchical value density algorithm, although it has a long wait queue when the first negative arrival task arrives, can trade off the latency, execution time and value of all tasks to complete the remaining repair tasks more quickly. This not only maintains appropriate task timeliness during normal operation, but also deals quickly with problems when the system encounters negative arrivals. This advantage of Least Level Value Density First can also be seen in Figure 3.

It is clear from Figures 2 and 3 that the difference in how repair tasks are processed produces the difference in the average task wait queues. Therefore, compared with traditional task scheduling algorithms, which depend partly on time attributes, the Least Level Value Density First algorithm of this article has a natural and significant advantage in systems that use a partial restoration strategy.

CONCLUSION AND FUTURE WORK


The Least Level Value Density First algorithm proposed in this paper obtains the ability to describe the relationship between the value and time of different tasks by adapting the value density algorithm of real-time task systems. The features and advantages of this ability in task scheduling are illustrated in Figures 2 and 3. These characteristics allow the algorithm not only to introduce timeliness and value classification into the processing of general tasks, but also to schedule different tasks more reasonably when the system encounters unexpected situations. In actual operation, a system often encounters a variety of emergencies that can have an unpredictable impact on task execution and even on the operation of the whole system, so the Least Level Value Density First algorithm has practical significance.

As described in [6], future task scheduling algorithms will likely require the system to feed back timely and valuable results to users according to their requirements and subscription categories, and this is one future development trend of task scheduling. Studying the ability of real-time tasks to describe the relationship between value and time is therefore a promising research direction. For the algorithm of this paper, an interesting next step is to let the system dynamically adjust, via the reserved regulatory factor, the weight ratio between the execution time and the dwelling time according to the state of the system. In the future, we will try to give the algorithm the ability to obtain state information such as system load, so as to find priority computation algorithms still better suited to task priority calculation on cloud platforms.

ACKNOWLEDGMENTS
This work is supported by the National Natural Science Foundation of China under Grants No. 61572142 and No. 61370082, and the Natural Science Foundation of Guangdong Province under Grant No. 2015A030313490.

REFERENCES

1. Cyber-Physical Systems: Draft Situation Analysis of Current Trends, Technologies, and Challenges [R]. NIST Foundations for Innovation for Cyber-Physical Systems Workshop, June 18, 2012.
2. Yogita Chawla, Mansi Bhonsle. A Study on Scheduling Methods in Cloud Computing [J]. International Journal
of Emerging Trends & Technology in Computer Science (IJETTCS), Volume 1, Issue 3 September-October
2012. ISSN 2278-6856
3. Chunyao Liu, Lichen Zhang, Daqiang Zhang. Task Scheduling in Cyber-Physical Systems[C]. Ubiquitous
Intelligence and Computing, 2014 IEEE 11th Intl Conf on and IEEE 11th Intl Conf on and Autonomic and
Trusted Computing, and IEEE 14th Intl Conf on Scalable Computing and Communications and Its Associated
Workshops (UTC-ATC-ScalCom). 9-12 December 2014.
4. CUI Yunfei, WU Xiaojin, DAI Ye, CHENG Xiao, GUO Gang. Adaptive Fault-tolerant Scheduling Algorithm
for Unresponsive Task Based on Speculation [J].Computer Science,2016,43(11A):11-15
5. Dr. Amit Agarwal, Saloni Jain. Efficient Optimal Algorithm of Task Scheduling in Cloud Computing
Environment [J]. Computer Science, 2014, 9(7).
6. AV.Karthick, Dr.E.Ramaraj, R.Gnapathy Subramanian. An Efficient Multi Queue Job Scheduling for Cloud
Computing[C]. World Congress on Computing & Communication Technologies,2014:164-166
7. Liu Yunsheng. Real-time Database System (Chinese Edition) [M], page 153-154. Science Press, June 1, 2012.
8. L. Breuer, D. Baum "An Introduction to Queueing Theory"[M], Springer Verlag, 2005.
9. Ioannis Dimitriou. A mixed priority retrial queue with negative arrivals, unreliable server and multiple vacations
[J]. Applied Mathematical Modelling, 2013, 37(3): 1295 -1309
10. Yijun Zhu, Zhe George Zhang. M/GI/1 queues with services of both positive and negative customers [J]. Journal
of Applied Probability,2004, 41(4):1157-1170
11. M Eisa, E I. Esedimy, M Z. Rashad. Enhancing Cloud Computing Scheduling based on Queuing Models [J].
International Journal of Computer Applications, 2014, 85(2):17-23.
12. Hiroshi Toyoizumi. Performance Evaluation of Quantum Merging: Negative Queue Length [OL]. http://www.f.waseda.jp/toyoizumi/research/papers/Performance%20Evaluation%20of%20Quantum%20Merging%20Negative.pdf
13. A. Aissani. An M/G/1 Retrial Queue with Negative Arrivals and Unreliable Server[R].Lecture Notes in
Engineering & Computer Science,2010,2183(1)
14. B Bouterse, H Perros. Scheduling Cloud Capacity for Time-Varying Customer Demand[C].
IEEE International Conference on Cloud Networking, 2012, 90(1):137-142.
15. SUN Xiaofeng, WANG Zhongjie. Research of Network Simulation Based on MATLAB/SimEvents [J]. Computer Knowledge and Technology (Academic Exchange), 2007, 4(23):1254-1257.
