
ISLAMI BANK INSTITUTE OF TECHNOLOGY

CHATTOGRAM

INTERNSHIP REPORT
ON
Network Load Balancing System

SUPERVISED BY
Rimom Barua
Head of the Department
Computer Technology
Islami Bank Institute Of Technology, Chattogram

SUBMITTED BY

Name: Md Yasin Arafat
Roll: 977967
Reg: 816803
Dept. of Computer Technology
Islami Bank Institute of Technology, Chattogram

Name: Maha Dev Siva
Roll: 9779**
Reg: 8168**
Dept. of Computer Technology
Islami Bank Institute of Technology, Chattogram

Date of Submission: **/**/2022


DECLARATION

We, Md Yasin Arafat and Maha Dev Siva, students of Islami Bank Institute
of Technology, Chattogram, declare that we have completed our internship
on the Network Load Balancing System at DDN under the supervision of
Rimom Barua, Head of the Department of Computer Technology. This report
has been prepared for the partial fulfillment of the Practicum of the
Diploma in Computer Technology. We also declare that this report has not
been prepared or submitted previously for any other purpose, reward, or
presentation by anyone other than ourselves, and that it contains no
plagiarism or data falsification; the materials used in this report are
drawn from various sources and sites.

Name: Md Yasin Arafat
Roll: 977967
Reg: 816803
Dept. of Computer Technology
Islami Bank Institute of Technology, Chattogram

Name: Maha Dev Siva
Roll: 9779**
Reg: 8168**
Dept. of Computer Technology
Islami Bank Institute of Technology, Chattogram

SUPERVISED BY
Rimom Barua
Head of the Department
Computer Technology
Islami Bank Institute Of Technology, Chattogram
APPROVAL

This internship report, titled “Network Load Balancing System” and
submitted by Md Yasin Arafat and Maha Dev Siva to the Department of
Computer Technology, Islami Bank Institute of Technology, has been
accepted as satisfactory for the partial fulfillment of the requirements
for the degree of Diploma in Computer Technology and approved as to its
style and contents.

Supervisor
Rimom Barua
Head of the Department
Computer Technology
Islami Bank Institute of Technology, Chattogram

External Examiner
ACKNOWLEDGMENT

In the name of ALLAH who is the most merciful and the most graceful.

Firstly, we would like to thank our supervisor Rimom Barua, Head of the
Department of Computer Technology. We are extremely grateful and indebted
to him for his expert, sincere, and valuable guidance and encouragement.

This internship report would not have been a success without the constant
and valuable guidance of Engr. Abdul Hye Khan, our supervisor for the
internship, who rendered all sorts of help as and when required. We are
thankful for his constant constructive criticism and valuable suggestions,
which benefited us greatly while carrying out the internship on the
Network Load Balancing System. He has been a constant source of
inspiration and motivation for hard work, and was very cooperative
throughout this internship. It is our utmost pleasure to express our warm
thanks to him for his encouragement, cooperation, and consent.

One can never find the right words to thank one's parents. We are forever
indebted for the love and care our parents have shown us; they have made
every possible effort to bring us to this stage. We are very lucky to have
such caring and loving parents, who have been a constant source of
inspiration for us.
ABSTRACT

Load balancing is a way to spread tasks out over multiple resources. By


processing tasks and directing sessions on different servers, load balancing
helps a network avoid annoying downtime and delivers optimal performance
to users. There are virtual load balancing solutions that work in a manner
similar to virtual applications or server environments. There are also
physical load balancing hardware solutions that can be integrated with a
network. The method used depends entirely upon the team implementing the
solution and their particular needs. Network Load Balancing (NLB) is a
clustering technology offered by Microsoft as part of the Windows 2000
Server and Windows Server 2003 families of operating systems. NLB uses a
distributed algorithm to load balance network traffic across a number of
hosts, helping to enhance the scalability and availability of
mission-critical, IP-based services such as Web, virtual private
networking, streaming media, terminal services, proxy, and so on. It also
provides high availability by detecting host failures and automatically
redistributing traffic to operational hosts. This report describes the
detailed architecture of network load balancing, the various types of
addressing, and the various performance measures.
LETTER OF TRANSMITTAL
February 20, 2022
To
Rimom Barua
Head of the Department
Department of Computer Technology
Islami Bank Institute of Technology

Subject: Submission of the Internship Report.


Dear Sir,
With due respect, we would like to take this great opportunity to submit
our internship report on the experience gained during our internship
period working on the Network Load Balancing System at DDN. We have
prepared this report in accordance with the instructions given by you and
the institute.
Working at DDN was inspiring and a great learning experience for us. We
hope this knowledge will help us greatly in our future careers. This
period has given us enough opportunity to apply our theoretical knowledge
in a practical corporate environment. We have put our best effort into
this report.
We would be grateful if you would kindly go through this report and
evaluate our performance. We hope that you will appreciate it.

Sincerely Yours,

Name: Md Yasin Arafat


Roll: 977967, Reg: 816803
&
Name: Maha Dev Siva
Roll: 9779**, Reg: 8168**
Dept. of Computer Technology
Islami Bank Institute of Technology, Chattogram.
Chapter One
Introduction
1.1 Introduction:

Load balancing is the practice of distributing computational workloads


between two or more computers. On the Internet, load balancing is often
employed to divide network traffic among several servers. This reduces the
strain on each server and makes the servers more efficient, speeding up
performance and reducing latency. Load balancing is essential for most
Internet applications to function properly.
Imagine a grocery store with 8 checkout lines, only one of which is open.
All customers must get into the same line, and therefore it
takes a long time for a customer to finish paying for their groceries. Now
imagine that the store instead opens all 8 checkout lines. In this case, the
wait time for customers is about 8 times shorter (depending on factors like
how much food each customer is buying).

Load balancing essentially accomplishes the same thing. By dividing user


requests among multiple servers, user wait time is vastly cut down. This
results in a better user experience — the grocery store customers in the
example above would probably look for a more efficient grocery store if they
always experienced long wait times.

1.2 How does load balancing work?

Load balancing is handled by a tool or application called a load balancer. A


load balancer can be either hardware-based or software-based. Hardware
load balancers require the installation of a dedicated load balancing device;
software-based load balancers can run on a server, on a virtual machine, or
in the cloud. Content delivery networks (CDNs) often include load
balancing features.

When a request arrives from a user, the load balancer assigns the request to a
given server, and this process repeats for each request. Load balancers
determine which server should handle each request based on a number of
different algorithms. These algorithms fall into two main categories:

1. Static and
2. Dynamic.
1.3 Where is load balancing used?

As discussed above, load balancing is often used with web applications.


Software-based and cloud-based load balancers help distribute Internet
traffic evenly between servers that host the application. Some cloud load
balancing products can balance Internet traffic loads across servers that are
spread out around the world, a process known as global server load
balancing (GSLB).

Load balancing is also commonly used within large localized networks, like
those within a data center or a large office complex. Traditionally, this has
required the use of hardware appliances such as an application delivery
controller (ADC) or a dedicated load balancing device. Software-based load
balancers are also used for this purpose.
CHAPTER TWO

LOAD BALANCING ALGORITHMS AND


TECHNIQUES
2.1 Types of Load Balancing Algorithms:

These algorithms fall into two main categories:

1. Static and
2. Dynamic.

2.1.1 Static load balancing algorithms

Static load balancing algorithms distribute workloads without taking into


account the current state of the system. A static load balancer will not be
aware of which servers are performing slowly and which servers are not
being used enough. Instead, it assigns workloads based on a predetermined
plan. Static load balancing is quick to set up, but can result in inefficiencies.

Referring back to the analogy above, imagine if the grocery store with 8
open checkout lines has an employee whose job it is to direct customers into
the lines. Imagine this employee simply goes in order, assigning the first
customer to line 1, the second customer to line 2, and so on, without looking
back to see how quickly the lines are moving. If the 8 cashiers all perform
efficiently, this system will work fine — but if one or more is lagging
behind, some lines may become far longer than others, resulting in bad
customer experiences. Static load balancing presents the same risk:
sometimes, individual servers can still become overburdened.

Round robin DNS and client-side random load balancing are two common
forms of static load balancing.
2.1.2 Dynamic load balancing algorithms

Dynamic load balancing algorithms take the current availability, workload,


and health of each server into account. They can shift traffic from
overburdened or poorly performing servers to underutilized servers, keeping
the distribution even and efficient. However, dynamic load balancing is more
difficult to configure. A number of different factors play into server
availability: the health and overall capacity of each server, the size of the
tasks being distributed, and so on.

Suppose the grocery store employee who sorts the customers into checkout
lines uses a more dynamic approach: the employee watches the lines
carefully, sees which are moving the fastest, observes how many groceries
each customer is purchasing, and assigns the customers accordingly. This
may ensure a more efficient experience for all customers, but it also puts a
greater strain on the line-sorting employee.

There are several types of dynamic load balancing algorithms, including


least connection, weighted least connection, resource-based, and
geolocation-based load balancing.

2.2 Network Load Balancing Algorithms:

The different types of Network Load Balancing algorithms are described


below

2.2.1 Round Robin


Round-robin load balancing is one of the simplest and most widely used load
balancing algorithms. Client requests are distributed to application servers in
rotation. For example, if you have three application servers, the first client
request goes to the first application server in the list, the second client request
to the second application server, the third client request to the third application
server, the fourth back to the first application server, and so on.

This load balancing algorithm does not take the characteristics of the application
servers into consideration; it assumes that all application servers are the same,
with the same availability, computing, and load-handling characteristics.
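The rotation described above can be sketched in a few lines of Python; the server names here are hypothetical, not from the report's actual setup:

```python
from itertools import cycle

# Three hypothetical application servers.
servers = ["app1", "app2", "app3"]

def make_round_robin(servers):
    """Return a picker that hands out servers in strict rotation."""
    it = cycle(servers)
    return lambda: next(it)

pick = make_round_robin(servers)
# Six requests wrap around the three servers twice.
assignments = [pick() for _ in range(6)]
```

Note that the picker consults no server state at all, which is exactly why round robin is a static algorithm.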

2.2.2 Weighted Round Robin

Weighted Round Robin builds on the simple Round-robin load balancing algorithm to
account for differing application server characteristics. The administrator assigns a
weight to each application server, based on criteria of their choosing, to reflect the
application server's traffic-handling capability. If application server #1 is twice as
powerful as application servers #2 and #3, application server #1 is provisioned with a
higher weight while #2 and #3 get the same weight. If there are five sequential client
requests, the first two go to application server #1, the third goes to application
server #2, the fourth to application server #3, and the fifth to application server #1.
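One simple way to realize the worked example above is to expand each server by its weight and rotate over the expanded list; the names and weights are illustrative:

```python
from itertools import cycle

# app1 is twice as powerful as app2 and app3 (hypothetical weights).
weights = {"app1": 2, "app2": 1, "app3": 1}

def make_weighted_round_robin(weights):
    """Repeat each server `weight` times, then rotate over the expanded list."""
    expanded = [s for s, w in weights.items() for _ in range(w)]
    it = cycle(expanded)
    return lambda: next(it)

pick = make_weighted_round_robin(weights)
# Five requests: two to app1, one each to app2 and app3, fifth back to app1.
assignments = [pick() for _ in range(5)]
```

Production balancers typically use a smoother interleaving, but this expansion sketch reproduces the distribution described in the text.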

2.2.3 Least Connection

Least Connection load balancing is a dynamic load balancing algorithm in which client
requests are distributed to the application server with the fewest active connections
at the time the request is received. Even where application servers have similar
specifications, one server may become overloaded by longer-lived connections; this
algorithm takes the active connection load into consideration.
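The selection rule reduces to a minimum over a per-server connection counter; the server names and counts below are hypothetical:

```python
# Active connection counts per server (hypothetical).
active = {"app1": 0, "app2": 0, "app3": 0}

def pick_least_connection(active):
    """Choose the server with the fewest active connections right now."""
    return min(active, key=active.get)

# Simulate long-lived connections piling up on app1 and app2.
active["app1"] = 2
active["app2"] = 1
server = pick_least_connection(active)
active[server] += 1  # the new request becomes an active connection
```

Because the counter changes as connections open and close, the same client request can land on different servers at different moments, which is what makes the algorithm dynamic.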

2.2.4 Weighted Least Connection


Weighted Least Connection builds on the Least Connection load balancing algorithm to
account for differing application server characteristics. The administrator assigns a
weight to each application server, based on criteria of their choosing, to reflect the
application server's traffic-handling capability. The LoadMaster then bases its load
balancing decision on both active connections and application server weighting.
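A common way to combine the two criteria, sketched here as an assumption rather than the LoadMaster's exact formula, is to pick the server with the lowest connections-to-weight ratio:

```python
# Hypothetical state: raw connection counts and administrator-assigned weights.
active = {"app1": 4, "app2": 3, "app3": 3}
weights = {"app1": 2, "app2": 1, "app3": 1}

def pick_weighted_least_connection(active, weights):
    """Choose the server with the lowest active-connections-to-weight ratio."""
    return min(active, key=lambda s: active[s] / weights[s])

# app1 holds the most raw connections, but its double weight halves its ratio.
server = pick_weighted_least_connection(active, weights)
```

Here app1's ratio is 4/2 = 2 versus 3 for app2 and app3, so the more powerful server still receives the next request despite having more open connections.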

2.2.5 Resource Based (Adaptive)

Resource Based (Adaptive) is a load balancing algorithm that requires an agent to be
installed on each application server; the agent reports the server's current load to
the load balancer. The installed agent monitors the application server's availability
status and resources, and the load balancer queries the agent's output to aid in its
load balancing decisions.

2.2.6 Resource Based (SDN Adaptive)

SDN Adaptive is a load balancing algorithm that combines knowledge from Layers 2,
3, 4 and 7 and input from an SDN Controller to make more optimized traffic
distribution decisions. This allows information about the status of the servers, the
status of the applications running on them, the health of the network infrastructure,
and the level of congestion on the network to all play a part in the load balancing
decision-making.

2.2.7 Fixed Weighting

Fixed Weighting is a load balancing algorithm in which the administrator assigns a
weight to each application server, based on criteria of their choosing, to reflect the
application server's traffic-handling capability. The application server with the
highest weight receives all of the traffic. If the application server with the highest
weight fails, all traffic is directed to the server with the next highest weight.
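Fixed Weighting is therefore really an active/passive failover rule; a minimal sketch with hypothetical servers and weights:

```python
# Administrator-assigned weights (hypothetical values).
weights = {"app1": 100, "app2": 50, "app3": 10}

def pick_fixed_weighting(weights, healthy):
    """Send all traffic to the highest-weight server that is still up."""
    candidates = [s for s in weights if s in healthy]
    return max(candidates, key=weights.get)

server = pick_fixed_weighting(weights, {"app1", "app2", "app3"})
# If app1 fails, all traffic shifts to the next-highest weight, app2.
failover = pick_fixed_weighting(weights, {"app2", "app3"})
```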
2.2.8 Weighted Response Time

Weighted Response Time is a load balancing algorithm in which the response times of
the application servers determine which application server receives the next request.
Each application server's response time to a health check is used to calculate its
weight, and the application server that responds fastest receives the next request.
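In its simplest form the rule is a minimum over measured health-check latencies; the millisecond figures below are made up for illustration:

```python
# Latest health-check response times in milliseconds (hypothetical).
response_ms = {"app1": 120.0, "app2": 45.0, "app3": 80.0}

def pick_fastest(response_ms):
    """The fastest responder to the health check gets the next request."""
    return min(response_ms, key=response_ms.get)

server = pick_fastest(response_ms)
```

Real implementations smooth these measurements over a window rather than trusting a single probe, since one slow health check would otherwise starve a healthy server.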

2.2.9 Source IP Hash

The Source IP Hash load balancing algorithm combines the source and destination IP
addresses of the client and server to generate a unique hash key, which is used to
allocate the client to a particular server. Because the key can be regenerated if the
session is broken, the client's request is directed back to the same server it was
using previously. This is useful when it is important that a client reconnect to a
session that is still active after a disconnection.
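The key property, that the same address pair always maps to the same backend, can be sketched as follows; the hash function choice and the addresses are illustrative assumptions:

```python
import hashlib

servers = ["app1", "app2", "app3"]

def pick_by_ip_hash(src_ip, dst_ip, servers):
    """Hash the source/destination pair so a client maps to a stable server."""
    key = f"{src_ip}-{dst_ip}".encode()
    digest = int(hashlib.md5(key).hexdigest(), 16)
    return servers[digest % len(servers)]

# A reconnecting client regenerates the same key, so it lands on
# the same backend it was using before the disconnection.
first = pick_by_ip_hash("203.0.113.9", "198.51.100.1", servers)
again = pick_by_ip_hash("203.0.113.9", "198.51.100.1", servers)
```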

2.2.10 URL Hash

URL Hash is a load balancing algorithm that distributes writes evenly across multiple
sites and sends all reads to the site owning the object.
CHAPTER THREE
ANALYSIS AND DESIGN
3.1. Analysis

In this final project, we implemented a server cluster architecture with a
load balancing method and a database failover system. First, the server
cluster architecture must always be able to serve client requests without
losing time to server failure. To solve this problem, the design uses two
servers for each role. On the web tier, Nginx acts as the load balancer
software. Nginx was chosen because it is free load balancing software, it
is popular, and it receives a lot of support from the worldwide community.
Compared with load balancer hardware, load balancer software also costs
less. Nginx as load balancer distributes requests across the Apache web
servers in turn using the Round-Robin algorithm. If the first web server
has a problem, the web application is still served by the second web
server. In this case the load balancer does not distribute requests across
all web servers; requests from clients are handled directly by the second
web server, which is still up, and vice versa. Once the failed web server
has been restored, all web servers work in turn again as in the default
setting.
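The Nginx setup described above can be sketched as a minimal configuration fragment; the upstream name and backend IP addresses here are placeholders, not the project's actual values:

```nginx
# Round-robin (Nginx's default) across the two Apache web servers.
upstream apache_backends {
    server 192.0.2.11:80;   # first Apache web server
    server 192.0.2.12:80;   # second Apache web server
}

server {
    listen 80;
    location / {
        proxy_pass http://apache_backends;
    }
}
```

If one backend stops responding, Nginx marks it as failed and sends requests to the remaining server, which matches the failover behavior described above.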

All of the Apache web servers run without database services, so they need
service from a stand-alone database server. There are two database servers
running MySQL that can handle the data requirements of the web
applications. A single database server was not used, in order to avoid
paralysis of data services when that server crashes or goes down. This is
also the reason for keeping a backup on a second database server: to avoid
data being lost or permanently damaged.

The MySQL database servers use a failover system handled by the Heartbeat
software. Heartbeat was chosen because it is free and easy to obtain.
Under the failover system, the MySQL database servers do not work
together: the first database server is set as the master and the second as
the backup. If the first database server (master) has a problem, the
second database server (backup) takes over the role of the first. And when
the first database server returns to normal operation, the database
servers return to the default settings.

MySQL replication keeps all of the databases identical. As with failover,
MySQL replication lets the database servers work interchangeably according
to conditions. The first server (master database) receives query commands
from the web applications, and as master it replicates data to the second
database server, which acts as slave. When the second database server
becomes the master, it automatically replicates the data back to the first
database server once the problem on the first server is fixed.
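A master-to-master replication pair of this kind is typically configured with my.cnf fragments like the following; the values shown are illustrative assumptions, not the project's actual settings:

```ini
# First database server (initial master)
[mysqld]
server-id                = 1
log-bin                  = mysql-bin
auto_increment_increment = 2   # two masters in the pair
auto_increment_offset    = 1   # this server generates odd ids

# Second database server (initial backup)
[mysqld]
server-id                = 2
log-bin                  = mysql-bin
auto_increment_increment = 2
auto_increment_offset    = 2   # this server generates even ids
```

The auto_increment settings keep the two masters from generating colliding primary keys if both ever accept writes at the same time.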
3.2. Design

Figure 1: Design for Server Cluster Architecture

As shown in Figure 1, every task is executed by two servers. Requests from
clients are served by Nginx, which acts as the load balancer software. The
IP address of the Nginx host represents all of the IP addresses available
in the server cluster architecture. Nginx distributes the request load to
the Apache web servers behind it. Identical copies of the web application
are installed on all of the Apache web servers.

For its data, the web application relies on a database server. All of the
Apache web servers connect to the database servers through a single
virtual IP address, which sits on whichever database server is the active
master. This virtual IP address is managed by Heartbeat, which is
installed on all of the database servers. All of the database servers
actively run MySQL master-to-master replication in real time.
CONCLUSION

Network Load Balancing is superior to other software solutions such as


round robin DNS (RRDNS), which distributes workload among multiple
servers but does not provide a mechanism for detecting server
availability. If a server within the cluster fails, RRDNS, unlike Network
Load Balancing, will continue
to send it work until a network administrator detects the failure and removes
the server from the DNS address list. This results in service disruption for
clients. Network Load Balancing also has advantages over other load
balancing solutions—both hardware- and software-based—that introduce
single points of failure or performance bottlenecks by using a centralized
dispatcher. Because Network Load Balancing has no proprietary hardware
requirements, any industry-standard compatible computer can be used. This
provides significant cost savings when compared to proprietary hardware
load balancing solutions.
References:

1. https://kemptechnologies.com
2. https://www.cloudflare.com/
