
2013 European Modelling Symposium

Performance Analysis of Load Balancing Architectures in Cloud Computing

Ektemal Al-Rayis
Computer Science Department
Imam Muhammad Ibn Saud Islamic University
Riyadh, Saudi Arabia
Ektemal.a.r@gmail.com

Heba Kurdi
Computer Science Department
Imam Muhammad Ibn Saud Islamic University
Riyadh, Saudi Arabia
hakurdi@imamu.edu.sa

Abstract—Cloud computing is a rapidly emerging distributed system paradigm that offers a huge amount of IT resources as utility services at reduced cost and under flexible schemes. The key to such flexibility is an efficient load balancer that offers better management and utilization of the virtualized underlying cloud infrastructure. However, most existing load balancers in cloud computing are based on either centralized or fully distributed architectures, while the idea of harnessing multiple load balancers in a hierarchical structure to improve server load and job response time is still understudied. Therefore, this paper aims at bridging this gap by providing a comparative study of the three load balancing architectures in cloud computing: centralized, decentralized and hierarchical load balancers. The experimental results suggest that the hierarchical architecture for load balancers best suits the public cloud environment, and they call for further research to test whether these results can be generalized to other types of clouds.

Keywords- cloud computing; load balancing; simulation

I. INTRODUCTION

The boom in cloud computing has been rather exceptional; in just five years cloud computing has reshaped the way ICT services are provided and consumed. There are as many definitions, classifications and technologies for cloud computing as there are institutions adopting it, and this number is on the rise. Cloud computing is defined by the US Government's National Institute of Standards and Technology (NIST) [1] as an ICT sourcing and delivery model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g. networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal management effort and service provider interaction.

Virtualization technologies are the key enabler of cloud computing, giving providers a flexible way of managing their resources. Virtual infrastructure (VI) management is a key concern when building cloud environments and poses a number of challenges. Among the main challenges in VI management is implementing an efficient load balancer capable of evenly distributing the workload among the available cloud resources.

In general terms, load balancing is a technique for distributing the workload of a system across multiple servers so that no server is busy handling a heavy workload while another server sits idle. A load balancer can therefore be considered a reverse proxy that distributes network or application traffic across a number of servers. Load balancers serve three main objectives. First, improving overall system performance by reaching a high resource utilization ratio. Second, avoiding the system bottlenecks that occur due to load imbalance. Finally, achieving high provider and user satisfaction by striving to increase system throughput and decrease job processing time [2].

Basically, load balancers can be deployed based on three different architectures. The centralized load balancing architecture includes a central load balancer that makes the decisions for the entire system regarding which cloud resource should take which workload, and based on which algorithm(s) [3]. This architecture has the known advantage of a robust management scheme, but it suffers from poor scalability and constitutes a single point of failure.

The decentralized load balancing architecture has no central load balancer to distribute the workload among the available resources; instead, job requests are divided on arrival, equally, among many load balancers, each of which may run a different algorithm to allocate jobs to resources. This architecture offers great flexibility and scalability. On the other hand, it yields poor load balance among the underlying resources [8].

In the hierarchical load balancing architecture, a main load balancer (the parent) receives all job requests and then spreads them to other connected load balancers (the children), where each load balancer in the tree may use a different algorithm. Although this architecture combines the advantages of the two previous schemes, centralized and decentralized, while overcoming their disadvantages, hierarchical load balancers are more difficult to implement and incur additional overhead to coordinate between the load balancers themselves.

The special characteristics of cloud environments that result from the complexities of the cloud computing virtual infrastructure require advanced load balancing solutions that are capable of dynamically adapting the cloud platform while providing continuous service and performance guarantees [9]. These difficulties have created different views on which load balancing architecture best suits cloud computing.
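As a rough illustration, the dispatch logic of the three architectures can be sketched in a few lines of Python. This is an illustrative sketch only: the fixed grouping of servers and the per-balancer policies (round robin, least-loaded) are assumptions for the example, not the paper's implementation.

```python
from itertools import cycle

def centralized(jobs, servers):
    """A single balancer assigns every job (round robin here, for illustration)."""
    order = cycle(range(len(servers)))
    for job in jobs:
        servers[next(order)].append(job)

def decentralized(jobs, servers, n_balancers=2):
    """Jobs are split equally on arrival among independent balancers,
    each managing its own fixed group of servers with its own policy."""
    share = len(servers) // n_balancers
    for i, job in enumerate(jobs):
        b = i % n_balancers                       # which balancer receives the job
        group = servers[b * share:(b + 1) * share]
        min(group, key=len).append(job)           # this balancer: least-loaded server

def hierarchical(jobs, servers, n_children=2):
    """A parent balancer spreads jobs over child balancers; each child
    allocates within its own server group."""
    share = len(servers) // n_children
    child = cycle(range(n_children))              # parent policy: round robin over children
    for job in jobs:
        c = next(child)
        group = servers[c * share:(c + 1) * share]
        min(group, key=len).append(job)           # child policy: least-loaded server

servers = [[] for _ in range(4)]
centralized(range(8), servers)
print([len(s) for s in servers])  # -> [2, 2, 2, 2]
```

The key structural difference is visible in the code: the centralized variant has one decision point for all servers, the decentralized variant partitions both the arriving jobs and the servers, and the hierarchical variant adds a parent decision before each child's local decision.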
978-1-4799-2578-0/13 $31.00 © 2013 IEEE
DOI 10.1109/EMS.2013.10
Therefore, the goal of this paper was to carry out an evaluation study of the cloud environment to gain better insight into the problem.

To satisfy this goal, a public cloud was simulated at different scales, and the performance of the system, in terms of response time and server load, was measured under the three possible load balancing architectures: centralized, decentralized and hierarchical. The experimental results suggested that the hierarchical architecture for load balancers best suits the cloud environment, which can be attributed to the ability of this architecture to split the load balancing overhead among many load balancers running various algorithms that can supplement each other while maintaining some degree of centralized management.

The rest of this paper is organized as follows. Section II briefly reviews the few related works that compare load balancing approaches in cloud computing. The research methodology and experimental settings are described in Section III. In Section IV, simulation results are presented and discussed. Finally, Section V concludes the paper and indicates possible future research directions.

II. RELATED WORK

Load balancing is an active field in cloud computing, and many techniques have been presented in the literature sharing a common goal but taking diverse perspectives. For instance, [4] presented a technique that allows each overloaded server to redirect requests to lightly loaded servers. The maximum redirection rate is imposed by the Limited Redirection Rate protocol, which defines the limit according to the spare capacity of the lightly loaded server and the latencies among the servers. In [5], CARTON, a mechanism that unifies the use of load balancing and Distributed Rate Limiting (DRL), was presented. While the load balancer distributes the load to different servers, DRL makes sure that resource allocation is fair. A scheduling algorithm that consists of two phases of load balancing was proposed in [6]. The first is Opportunistic Load Balancing (OLB), which strives to keep nodes busy in order to increase their resource utilization. The second is Load Balance Min-Min (LBMM), which assigns the job with the minimum completion time to the available node.

A systematic review of existing load balancers in cloud computing was presented in [10]. The study investigated seventeen load balancers and highlighted their advantages and disadvantages, considering fifteen criteria including, among others, server load and response time. In [11], common load balancing algorithms were analyzed and compared when utilized in cloud computing environments; specifically, the honeybee foraging algorithm, biased random sampling, active clustering, OLB + LBMM, Min-Min and Max-Min algorithms were considered. However, neither study stated the architecture of each examined load balancer, which is a major concern.

III. RESEARCH METHODOLOGY

The main objective of this paper was to answer the question: in identical cloud environments, which load balancing architecture (centralized, decentralized or hierarchical) gives the best results in terms of response time and server load?

To answer this question, a robust evaluation framework [7] was implemented, which includes the following steps:
- Identifying the critical elements in the design of cloud load balancers, which are: the number of nodes, the load balancing algorithm and the architecture of the load balancers.
- Varying the number of nodes, in order to simulate a representative sample of cloud environments.
- Selecting simple load balancing algorithms, the Round Robin algorithm and the opportunistic load balancing algorithm, to enable assessing the efficiency of each load balancing architecture.
- Identifying suitable performance measures: response time and server load.
- Implementing three identical scenarios, one for each load balancing architecture: centralized, decentralized and hierarchical.
- Running the simulations and collecting the results.
- Repeating each simulation several times to improve accuracy and taking the mean outcome.

The above experimental framework was implemented using the network simulator OPNET Modeler and run on an HP laptop with an Intel Core i5-3210M CPU at 2.50 GHz, 6 GB RAM and 64-bit Windows 7 as the operating system. The simulation settings for the three scenarios are shown in Table I. The model included four servers in one public cloud delivering Software-as-a-Service (SaaS) that offers database applications supporting heavy-load queries. The number of clients varied in the range [10-90]. The centralized architecture included only one load balancer, the decentralized included two load balancers and the hierarchical included three load balancers. The algorithms that manage the load balancers in the three scenarios are shown in Table II, while the models for the centralized, decentralized and hierarchical scenarios are presented in Fig. 1, Fig. 2 and Fig. 3, respectively.

TABLE I. SIMULATION SETUP

Parameter                | Value
Cloud Delivery Model     | Software as a Service (SaaS)
Service Type             | Database
Application Type         | Heavy-load database query
Cloud Deployment Model   | Public
Simulation time          | 3600 seconds
Number of servers        | 4 servers
Number of clients        | 10, 50 and 90 clients
Number of load balancers | Centralized: 1; Decentralized: 2; Hierarchical: 3
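The two algorithms selected in the methodology can be sketched as follows. The paper gives no pseudocode, so this Python is an assumed reading: Round Robin cycles through the servers in fixed order regardless of load, while the opportunistic policy is approximated here as sending each job to the server with the shortest queue.

```python
from itertools import cycle

class Server:
    def __init__(self, name):
        self.name = name
        self.queue = []  # jobs currently waiting on this server

class RoundRobinBalancer:
    """Assigns jobs to servers in fixed cyclic order, ignoring current load."""
    def __init__(self, servers):
        self._order = cycle(servers)

    def assign(self, job):
        server = next(self._order)
        server.queue.append(job)
        return server

class OpportunisticBalancer:
    """Assigns each job to the server with the shortest queue right now."""
    def __init__(self, servers):
        self._servers = servers

    def assign(self, job):
        server = min(self._servers, key=lambda s: len(s.queue))
        server.queue.append(job)
        return server

servers = [Server(f"srv{i}") for i in range(4)]
rr = RoundRobinBalancer(servers)
print([rr.assign(j).name for j in range(5)])  # -> ['srv0', 'srv1', 'srv2', 'srv3', 'srv0']
```

Round Robin is cheaper (no load inspection per job) but blind to queue lengths; the opportunistic policy reacts to load at the cost of inspecting every server on each assignment, which is the kind of trade-off the three architectures distribute differently.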
TABLE II. LOAD BALANCER ALGORITHMS

Scenario      | Load Balancer    | Algorithm
Decentralized | load_balancer_1  | Opportunistic
              | load_balancer_2  | Round Robin
Centralized   | load_balancer    | Round Robin
Hierarchical  | load_balancer_P  | Round Robin
              | load_balancer_C1 | Opportunistic
              | load_balancer_C2 | Opportunistic

Figure 1. Decentralized load balancer architecture

Figure 3. Hierarchical load balancer architecture

IV. RESULTS AND DISCUSSION

The simulation was run to represent one hour of real time (3600 sec.). The results for each performance measure are presented in the following subsections.

A. Response Time

Fig. 4 shows the simulation results in terms of response time. The horizontal axis represents the number of clients and the vertical axis represents the response time in seconds. In all three architectures, the response time increased dramatically as the number of clients grew. The decentralized and centralized load balancers had almost the same response time, while the hierarchical load balancer outperformed them both, giving a notably lower response time.

Figure 4. Response time in each load balancing architecture
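The response-time measure used here is, in general terms, the elapsed time between submitting a request and receiving its result, averaged over all requests in a run. A minimal sketch (the log format is an assumption for illustration, not OPNET's):

```python
def mean_response_time(log):
    """log: list of (sent_at, completed_at) timestamps in seconds."""
    return sum(done - sent for sent, done in log) / len(log)

# three requests that took 2, 4 and 6 seconds
print(mean_response_time([(0, 2), (10, 14), (20, 26)]))  # -> 4.0
```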

Figure 2. Centralized load balancer architecture

B. Server Load

Fig. 5 to Fig. 7 illustrate the average server load, computed as the number of requests handled by each server per second (requests/sec), when the number of clients was 10, 50 and 90, respectively.
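The server-load measure just defined can be computed from per-server request counts over the simulated hour; a small sketch with assumed names:

```python
def server_loads(request_counts, duration_sec=3600):
    """request_counts: {server_name: requests handled during the run}.
    Returns the average load of each server in requests/sec."""
    return {name: n / duration_sec for name, n in request_counts.items()}

loads = server_loads({"srv0": 7200, "srv1": 3600, "srv2": 1800, "srv3": 1800})
print(loads)  # -> {'srv0': 2.0, 'srv1': 1.0, 'srv2': 0.5, 'srv3': 0.5}
```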
Investigating the three graphs clearly shows the superiority of the hierarchical architecture for load balancers in cloud computing. In this architecture, all servers carried a considerably lighter load than in the other two scenarios. This can be attributed to the ability of this architecture to split the load balancing overhead among many load balancers running various algorithms that can supplement each other while maintaining some degree of centralized management. For the decentralized and centralized load balancers, the servers showed approximately similar loads as the number of nodes increased.

Figure 5. Load of each server in each load balancing architecture when the number of clients was 10

Figure 6. Load of each server in each load balancing architecture when the number of clients was 50

Figure 7. Load of each server in each load balancing architecture when the number of clients was 90

V. CONCLUSION

Cloud computing is a relatively new IT paradigm that offers a huge amount of resources at reasonable cost. The special characteristics of cloud environments and the dynamic nature of their virtual infrastructure call for efficient load balancing solutions that are capable of maintaining low response times and server loads [9]. Among the critical factors that affect the performance of a load balancer is its architecture, which can be decentralized, centralized or hierarchical. This paper carried out a comparative study of the three architectures and how they affect cloud performance.

A simulated model of a public cloud was built for this purpose at different scales, and the system performance was measured under the three possible load balancing architectures. The experimental results illustrated the dominant performance of the hierarchical architecture for load balancers, due to its ability to split the load balancing overhead among many load balancers running various algorithms that can supplement each other while maintaining some degree of centralized management over the cloud.

For future work, we are interested in testing whether these results can be generalized to other types of clouds, i.e. private and hybrid clouds, as well as other cloud service models such as IaaS and PaaS. We are also interested in applying more advanced load balancing algorithms, e.g. honeybee and ant colony, and seeing how they affect the load distribution among cloud servers.
ACKNOWLEDGMENT
This work was funded by the Long-Term Comprehensive National Plan for Science, Technology and Innovation of the Kingdom of Saudi Arabia, grant number 11-INF1895-08.

REFERENCES

[1] P. Mell and T. Grance, "The NIST Definition of Cloud Computing," National Institute of Standards and Technology. Internet: http://csrc.nist.gov/publications/nistpubs/800-145/SP800-145.pdf [December 11th, 2012].
[2] N. Kansal and I. Chana, "Cloud Load Balancing Techniques: A Step Towards Green Computing," IJCSI International Journal of Computer Science Issues, Vol. 9, No. 1, pp. 238-246, January 2012.
[3] Gaurav R. et al., "Comparative Analysis of Load Balancing Algorithms in Cloud Computing," International Journal of Advanced Research in Computer Engineering & Technology, Vol. 1, No. 3, pp. 120-124, May 2012.
[4] A. M. Nakai, E. Madeira and L. E. Buzato, "Load Balancing for Internet Distributed Services Using Limited Redirection Rates," Dependable Computing (LADC), 2011, pp. 156-165.
[5] R. Stanojevic and R. Shorten, "Load balancing vs. distributed rate limiting: a unifying framework for cloud control," Proceedings of the IEEE International Conference on Communications (ICC), Dresden, Germany, August 2009, pp. 1-6.
[6] S. Wang, K. Yan, W. Liao and S. Wang, "Towards a Load Balancing in a Three-level Cloud Computing Network," Proceedings of the 3rd IEEE International Conference on Computer Science and Information Technology (ICCSIT), Chengdu, China, September 2010, pp. 108-113.
[7] H. Kurdi, H. S. Al-Raweshidy and M. Li, "Design and Evaluation of A Personal Mobile Grid," in Proc. of the International Conference on Advanced IT, Engineering and Management (AIM 2012), Jul. 2012, Jeju, Korea.
Gengbin Zheng, Abhinav Bhatele, E. Meneses and L. V. Kale, "Periodic Hierarchical Load Balancing for Large Supercomputers," International Journal of High Performance Computing Applications (IJHPCA), 2010.
[8] B. Addis, D. Ardagna, B. Panicucci, M. S. Squillante and L. Zhang, "A Hierarchical Approach for the Resource Management of Very Large Cloud Platforms," IEEE Transactions on Dependable and Secure Computing, Vol. 10, No. 5, pp. 253-272, 2013.
[9] N. Kansal and I. Chana, "Existing Load Balancing Techniques in Cloud Computing: A Systematic Review," Journal of Information Systems and Communication (JISC), Vol. 3, No. 1, pp. 87-91, 2012.
[10] N. Sran and N. Kaur, "Comparative Analysis of Existing Load Balancing Techniques in Cloud Computing," International Journal of Engineering Science Invention, Vol. 2, No. 1, pp. 60-63, 2013.
