Network Load Balancing System
CHATTOGRAM
INTERNSHIP REPORT
ON
Network Load Balancing System
SUPERVISED BY
Rimom Barua
Head of the Department
Computer Technology
Islami Bank Institute of Technology, Chattogram
SUBMITTED BY
We are Md Yasin Arafat and Maha Dev Siva, students of Islami Bank Institute
of Technology, Chattogram. We declare that we have completed our
internship on the Network Load Balancing System under the supervision of
Rimom Barua, Head of the Department of Computer Technology. This report
has been prepared for the partial fulfillment of the Practicum of the Diploma
in Computer Technology. We also declare that this report has not been
prepared or submitted previously for any other purpose, reward, or
presentation by anyone other than us. We further declare that there is no
plagiarism or data falsification, and that the materials used in this report are
drawn from various sources and sites.
APPROVAL
Supervisor
Rimom Barua
Head of the Department
Computer Technology
Islami Bank Institute of Technology, Chattogram
External Examiner
ACKNOWLEDGMENT
In the name of ALLAH who is the most merciful and the most graceful.
This Internship Report would not have been a success without the constant
and valuable guidance of Engr. Abdul Hye Khan, our supervisor for the
Internship Report, who rendered all sorts of help as and when required.
We are thankful for his constant constructive criticism and valuable
suggestions, which benefited us greatly while preparing the Internship
Report on “Network Load Balancing System”. He has been a constant source
of inspiration and motivation for hard work, and has been very cooperative
throughout this internship work. Through this column, it is our utmost
pleasure to express our warm thanks to him for his encouragement,
cooperation, and consent.
One can never find the right words to thank one's parents. We are always
indebted for the love and care that our parents have shown us; they have
made every possible effort to bring us to this stage. We are very lucky to
have such caring and loving parents, who have been a constant source of
inspiration for us.
ABSTRACT
When a request arrives from a user, the load balancer assigns the request to a
given server, and this process repeats for each request. Load balancers
determine which server should handle each request based on a number of
different algorithms. These algorithms fall into two main categories:
1. Static and
2. Dynamic.
1.3 Where is load balancing used?
Load balancing is also commonly used within large localized networks, like
those within a data center or a large office complex. Traditionally, this has
required the use of hardware appliances such as an application delivery
controller (ADC) or a dedicated load balancing device. Software-based load
balancers are also used for this purpose.
CHAPTER TWO
1. Static and
2. Dynamic.
2.1.1 Static load balancing algorithms
Referring back to the analogy above, imagine if the grocery store with 8
open checkout lines has an employee whose job it is to direct customers into
the lines. Imagine this employee simply goes in order, assigning the first
customer to line 1, the second customer to line 2, and so on, without looking
back to see how quickly the lines are moving. If the 8 cashiers all perform
efficiently, this system will work fine — but if one or more is lagging
behind, some lines may become far longer than others, resulting in bad
customer experiences. Static load balancing presents the same risk:
sometimes, individual servers can still become overburdened.
Round robin DNS and client-side random load balancing are two common
forms of static load balancing.
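The in-order assignment described in the analogy above can be sketched in a few lines of Python. This is a minimal illustration, not a production implementation, and the server names are hypothetical:

```python
from itertools import cycle

# Hypothetical server pool for illustration.
servers = ["server1", "server2", "server3"]

def make_round_robin(servers):
    """Return a function that hands out servers in fixed rotation,
    without looking at how busy each server currently is (static)."""
    rotation = cycle(servers)
    def next_server():
        return next(rotation)
    return next_server

next_server = make_round_robin(servers)
assignments = [next_server() for _ in range(5)]
# Strict order: server1, server2, server3, server1, server2
```

Note that, exactly as in the checkout-line analogy, nothing in this scheme notices when one server falls behind.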
2.1.2 Dynamic load balancing algorithms
Suppose the grocery store employee who sorts the customers into checkout
lines uses a more dynamic approach: the employee watches the lines
carefully, sees which are moving the fastest, observes how many groceries
each customer is purchasing, and assigns the customers accordingly. This
may ensure a more efficient experience for all customers, but it also puts a
greater strain on the line-sorting employee.
Round Robin load balancing does not take the characteristics of the application
servers into consideration, i.e., it assumes that all application servers are identical,
with the same availability, computing, and load handling characteristics.
Weighted Round Robin builds on the simple Round Robin load balancing algorithm to
account for differing application server characteristics. The administrator assigns a
weight to each application server, based on criteria of their choosing, that reflects
the application server's traffic-handling capability. If application server #1 is twice as
powerful as application server #2 (and application server #3), application server #1 is
provisioned with a higher weight, and application servers #2 and #3 get the same
weight. If there are five (5) sequential client requests, the first two (2) go to application
server #1, the third (3) goes to application server #2, the fourth (4) to application
server #3, and the fifth (5) back to application server #1.
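The five-request example above can be reproduced with a naive weighted rotation. This sketch simply repeats each server in proportion to its weight; smoother interleavings exist, but this matches the text (server names are hypothetical):

```python
def weighted_round_robin(servers, weights):
    """Yield servers in proportion to their weights: a server with
    weight w appears w times in each rotation cycle."""
    schedule = []
    for server, weight in zip(servers, weights):
        schedule.extend([server] * weight)
    i = 0
    while True:
        yield schedule[i % len(schedule)]
        i += 1

# Server #1 is twice as powerful as #2 and #3, so weights are 2:1:1.
gen = weighted_round_robin(["s1", "s2", "s3"], [2, 1, 1])
first_five = [next(gen) for _ in range(5)]
# As in the text: s1, s1, s2, s3, then back to s1
```

One design note: repeating a server back-to-back is the simplest possible schedule; real load balancers usually spread the repeats out across the cycle.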
Least Connection load balancing is a dynamic load balancing algorithm in which client
requests are distributed to the application server with the fewest active
connections at the time the client request is received. Even where application
servers have similar specifications, one server may become overloaded by
longer-lived connections; this algorithm takes the active connection load into
consideration.
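A minimal sketch of the selection step, assuming the load balancer tracks a per-server count of active connections (the counts below are hypothetical):

```python
# Hypothetical active-connection counts tracked by the balancer.
active = {"s1": 4, "s2": 2, "s3": 7}

def pick_least_connections(active_connections):
    """Choose the server with the fewest active connections;
    ties break in alphabetical order for determinism."""
    return min(sorted(active_connections), key=lambda s: active_connections[s])

target = pick_least_connections(active)   # "s2" has the fewest
active[target] += 1                       # the new request is now an active connection
```

Incrementing the count on assignment (and decrementing when a connection closes, not shown) is what makes the algorithm dynamic.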
SDN Adaptive is a load balancing algorithm that combines knowledge from Layers 2,
3, 4 and 7 and input from an SDN Controller to make more optimized traffic
distribution decisions. This allows information about the status of the servers, the
status of the applications running on them, the health of the network infrastructure,
and the level of congestion on the network to all play a part in the load balancing
decision making.
Weighted Response Time is a load balancing algorithm in which the response times of
the application servers determine which application server receives the next request.
Each application server's response time to a health check is used to calculate its
weight, and the application server that is responding the fastest receives the next
request.
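A minimal sketch of this idea, assuming the balancer records each server's latest health-check response time (the millisecond values and the inverse-time weighting are illustrative assumptions):

```python
# Hypothetical health-check response times in milliseconds.
response_ms = {"s1": 120.0, "s2": 35.0, "s3": 80.0}

def pick_fastest(response_times):
    """The server with the lowest health-check response time
    receives the next request."""
    return min(response_times, key=response_times.get)

# One simple weighting choice: weight inversely to response time,
# so faster servers attract proportionally more traffic.
weights = {s: 1.0 / t for s, t in response_ms.items()}
next_server = pick_fastest(response_ms)   # the fastest responder
```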
Source IP Hash is a load balancing algorithm that combines the source and destination
IP addresses of the client and server to generate a unique hash key. The key is used to
allocate the client to a particular server. Because the key can be regenerated if the
session is broken, the client request is directed to the same server it was using
previously. This is useful when it is important that a client reconnect to a session that
is still active after a disconnection.
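The stability property described above can be demonstrated with a small sketch. The hash function, addresses, and pool are all illustrative assumptions; real balancers use their own hashing schemes:

```python
import hashlib

servers = ["s1", "s2", "s3"]  # hypothetical server pool

def pick_by_ip_hash(src_ip, dst_ip, servers):
    """Hash the source and destination IPs into a stable key, then
    map the key onto the pool. The same address pair always maps to
    the same server, so a reconnecting client lands where its
    session lives."""
    key = hashlib.sha256(f"{src_ip}-{dst_ip}".encode()).hexdigest()
    return servers[int(key, 16) % len(servers)]

first = pick_by_ip_hash("203.0.113.7", "198.51.100.10", servers)
again = pick_by_ip_hash("203.0.113.7", "198.51.100.10", servers)
assert first == again  # stable across reconnects
```

One caveat worth noting: simple modulo mapping reshuffles clients when the pool size changes, which is why consistent hashing is often preferred in practice.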
URL Hash is a load balancing algorithm that distributes writes evenly across multiple
sites and sends all reads to the site owning the object.
CHAPTER III
ANALYSIS AND DESIGN
3.1. Analysis
In this final project, we implement a server cluster architecture with a load
balancing method and a database failover system. First, the server cluster
architecture should always be able to serve client requests without losing
any time to server failure. To solve this problem, the design uses two servers
for each role. On the web tier, Nginx is used as the load balancer software.
Nginx was chosen because it is free load balancer software; it is also popular
and receives a great deal of support from the worldwide community.
Compared with a hardware load balancer, a software load balancer also
costs less. Nginx, as the load balancer, distributes requests across the web
servers, which run the Apache service, in turn using the Round-Robin
algorithm. If the first web server fails, the web application is still served by
the second web server. In that case the load balancer no longer distributes
requests across all web servers; client requests are handled directly by the
second web server, which is still running, and vice versa. Once the failed
web server has recovered, all web servers take turns serving requests again,
as in the default setting.
As shown in “Figure 1”, every task is performed by two servers. Requests
from the client are served by Nginx, which acts as the load balancer
software. The IP address of the Nginx host represents all of the IP addresses
available in the server cluster architecture. Nginx distributes the request load
to the Apache web servers behind it. Identical copies of the web application
are installed on all of the Apache web servers.
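The web tier described above can be sketched as a minimal Nginx configuration. The backend addresses and port are hypothetical; Nginx applies Round-Robin to an upstream group by default:

```nginx
# Hypothetical backend addresses; Round-Robin is the default
# distribution for an upstream group.
upstream web_backend {
    server 192.0.2.11:80;   # Apache web server 1
    server 192.0.2.12:80;   # Apache web server 2
}

server {
    listen 80;
    location / {
        # If one backend is considered failed, Nginx sends all
        # requests to the remaining server until it recovers.
        proxy_pass http://web_backend;
    }
}
```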
For its data, the web application relies on a database server. All of the
Apache web servers connect to the database server using a single virtual IP
address. The virtual IP address resides on whichever master database server
is currently active, and is managed by Heartbeat, which is installed on all
database servers. All active database servers run MySQL Master-to-Master
Replication in real time.
CONCLUSION
REFERENCES
1. https://kemptechnologies.com
2. https://www.cloudflare.com/