Abstract—Cloud computing is a rapidly emerging distributed system paradigm that offers a huge amount of IT resources as utility services at a reduced cost and under flexible schemes. The key to such flexibility is an efficient load balancer that offers better management and utilization of the virtualized underlying cloud infrastructure. However, most existing load balancers in cloud computing are based on either centralized or fully distributed architectures, while the idea of harnessing multiple load balancers in a hierarchical structure to improve server load and job response time is still understudied. Therefore, this paper aims at bridging this gap by providing a comparative study of the three load balancing architectures in cloud computing: centralized, decentralized and hierarchical load balancers. The experimental results suggest that the hierarchical architecture for load balancers best suits the public cloud environment and call for further research to test whether these results can be generalized to other types of clouds.

Keywords- cloud computing; load balancing; simulation

I. INTRODUCTION

The boom in cloud computing has been rather exceptional; in just five years cloud computing has reshaped the way ICT services are provided and consumed. There are as many definitions, classifications and technologies for cloud computing as there are institutions adopting it, and this number is on the rise. Cloud computing is defined by the US Government's National Institute of Standards and Technology (NIST) [1] as an ICT sourcing and delivery model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g. networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal management effort and service provider interaction.

Virtualization technologies are the key enabler of cloud computing, giving providers a flexible way of managing their resources. Virtual infrastructure (VI) management is a key concern when building cloud environments and poses a number of challenges. Among the main challenges in VI management is implementing an efficient load balancer capable of evenly distributing the workload among the available cloud resources.

In general terms, load balancing is a technique to distribute the workload of a system across multiple servers to ensure that no server is busy handling a heavy workload while another server is idle. A load balancer can therefore be considered a reverse proxy that distributes network or application traffic across a number of servers. Load balancers promote three main objectives: first, improving overall system performance by reaching a high resource utilization ratio; second, avoiding the system bottlenecks that occur due to load imbalance; and finally, achieving high provider and user satisfaction by striving to increase system throughput and decrease job processing time [2].

Basically, load balancers can be deployed based on three different architectures. The centralized load balancing architecture includes a central load balancer that makes the decisions for the entire system regarding which cloud resource should take what workload and based on which algorithm(s) [3]. This architecture has the known advantage of a robust management scheme, but it suffers from poor scalability and constitutes a single point of failure.

The decentralized load balancing architecture has no central load balancer to distribute workload among the available resources; instead, job requests are divided equally on arrival among many load balancers, where each of them may run a different algorithm to allocate jobs to resources. This architecture offers great flexibility and scalability. On the other hand, it yields poor load balance among the underlying resources [8].

In the hierarchical load balancing architecture, a main load balancer (parent) receives all job requests and then spreads them to other connected load balancers (children), where each load balancer in the tree may use a different algorithm. Although this architecture combines the advantages of both previous schemes, centralized and decentralized, while overcoming their disadvantages, hierarchical load balancers are more difficult to implement and incur additional overhead to coordinate between the load balancers themselves.

The special characteristics of cloud environments, which result from the complexities of the cloud computing virtual infrastructure, require advanced load balancing solutions that are capable of dynamically adapting the cloud platform while providing continuous service and performance guarantees [9]. These difficulties have created different views on which load balancing architecture would better suit cloud computing. Therefore, the main objective of this paper was to answer the question: in identical cloud environments, which load …
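The three architectures described in the introduction can be sketched as simple dispatch loops. The following is an illustrative sketch only, not the paper's simulation model: the server names, the representation of a job as an integer cost, and the use of round-robin as each balancer's internal policy are assumptions made purely for demonstration.

```python
import itertools
from collections import defaultdict

def centralized(jobs, servers):
    """Centralized: one central balancer assigns every job itself
    (round-robin here), so it sees the whole system but is a single
    point of failure."""
    load = defaultdict(int)
    rr = itertools.cycle(servers)
    for job in jobs:
        load[next(rr)] += job
    return dict(load)

def decentralized(jobs, servers, n_balancers=2):
    """Decentralized: jobs are split equally on arrival among several
    independent balancers, each running its own policy over the same
    server pool with no shared view of the load."""
    load = defaultdict(int)
    cycles = [itertools.cycle(servers) for _ in range(n_balancers)]
    for i, job in enumerate(jobs):
        lb = i % n_balancers          # equal split on arrival
        load[next(cycles[lb])] += job
    return dict(load)

def hierarchical(jobs, children):
    """Hierarchical: a parent balancer spreads jobs over child
    balancers; each child then applies its own policy to its own
    servers (children is a list of server lists, one per child)."""
    load = defaultdict(int)
    parent = itertools.cycle(range(len(children)))
    cycles = [itertools.cycle(srvs) for srvs in children]
    for job in jobs:
        child = next(parent)
        load[next(cycles[child])] += job
    return dict(load)
```

The sketch shows the structural difference only: in the hierarchical case the dispatching cost is shared between the parent and the children, which is the property the experiments below attribute the performance gain to.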
Scenario         Load Balancer       Algorithm
Decentralized    load_balancer-1     Opportunistic
                 load_balancer_2     Round Robin
Centralized      load_balancer       Round Robin
                 load_balancer_C1    Opportunistic
                 load_balancer_C2    Opportunistic

IV. RESULTS AND DISCUSSION

The simulation was run to represent one hour of real time (3600 sec.). The results for each performance measure are presented in the following subsections.
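The two allocation policies named in the scenario table can be contrasted with a minimal sketch. This is illustrative only, assuming jobs abstracted as integer costs; the Opportunistic variant here greedily hands each job to the currently least-loaded server, which is one common reading of Opportunistic Load Balancing, not necessarily the exact implementation used in the simulator.

```python
import itertools

def round_robin(jobs, servers):
    """Round Robin: assign jobs to servers in a fixed cyclic order,
    ignoring how loaded each server currently is."""
    load = {s: 0 for s in servers}
    rr = itertools.cycle(servers)
    for job in jobs:
        load[next(rr)] += job
    return load

def opportunistic(jobs, servers):
    """Opportunistic (OLB-style): hand each job to whichever server
    is currently least loaded, keeping every node busy."""
    load = {s: 0 for s in servers}
    for job in jobs:
        target = min(load, key=load.get)
        load[target] += job
    return load
```

With uneven job sizes the two diverge: for jobs [5, 1, 1, 1] on servers a and b, round robin leaves a with 6 and b with 2, while the opportunistic policy ends at 5 and 3.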
Figure 1. Decentralized load balancer architecture

A. Response Time

Fig. 4 shows the results of the simulation in terms of response time. The horizontal axis represents the number of clients and the vertical axis the response time in seconds. In all three architectures, the response time increased dramatically as the number of clients grew. The decentralized and centralized load balancers had almost the same response time, while the hierarchical load balancer outperformed them both, giving notably shorter response times.
… for load balancers in cloud computing. In this architecture, all servers carried considerably lighter loads than in the two other scenarios. This can be attributed to the ability of this architecture to split the load balancing overhead among many load balancers running various algorithms that can supplement each other while maintaining some degree of centralized management. For the decentralized and centralized load balancers, the servers showed approximately similar loads as the number of nodes increased.

Figure 5. Load of each server in each load balancing architecture when the number of clients was 10

V. CONCLUSION

Cloud computing is a relatively new IT paradigm that offers a huge amount of resources at reasonable cost. The special characteristics of cloud environments and the dynamic nature of their virtual infrastructure call for efficient load balancing solutions that are capable of maintaining low response times and server loads [9]. Among the critical factors that affect the performance of a load balancer is its architecture, which can be decentralized, centralized or hierarchical. This paper carried out a comparative study of the three architectures and how they affect cloud performance.

A simulated model of a public cloud was built for this purpose at different scales, and the system performance was measured under the three possible load balancing architectures. The experimental results illustrated the dominant performance of the hierarchical architecture for load balancers, owing to its ability to split the load balancing overhead among many load balancers running various algorithms that can supplement each other while maintaining some degree of centralized management over the cloud.

For future work, we are interested in testing whether these results can be generalized to other types of clouds, i.e. private and hybrid clouds, as well as to other cloud service models such as IaaS and PaaS. We are also interested in applying more advanced load balancing algorithms, e.g. honeybee and ant colony, to see how they affect the load distribution among cloud servers.
ACKNOWLEDGMENT
This work was funded by the Long-Term Comprehensive
National Plan for Science, Technology and Innovation of the
Kingdom of Saudi Arabia, grant number 11-INF1895-08.
REFERENCES
[7] H. Kurdi, H. S. Al-Raweshidy and M. Li, "Design and Evaluation of a Personal Mobile Grid," in Proc. of the International Conference on Advanced IT, Engineering and Management (AIM 2012), Jeju, Korea, Jul. 2012.

G. Zheng, A. Bhatele, E. Meneses and L. V. Kale, "Periodic Hierarchical Load Balancing for Large Supercomputers," International Journal of High Performance Computing Applications (IJHPCA), 2010.

[8] B. Addis, D. Ardagna, B. Panicucci, M. S. Squillante and L. Zhang, "A Hierarchical Approach for the Resource Management of Very Large Cloud Platforms," IEEE Transactions on Dependable and Secure Computing, Vol. 10, No. 5, pp. 253-272, 2013.

[9] N. Kansal and I. Chana, "Existing Load Balancing Techniques in Cloud Computing: A Systematic Review," Journal of Information Systems and Communication (JISC), Vol. 3, No. 1, pp. 87-91, 2012.

[10] N. Sran and N. Kaur, "Comparative Analysis of Existing Load Balancing Techniques in Cloud Computing," International Journal of Engineering Science Invention, Vol. 2, No. 1, pp. 60-63, 2013.