
Load balancing

Load balancing is the process of distributing requests among multiple
resources. The distribution is dictated by some metric (random, round robin,
random weight based on the capacity of the machine, etc.) and by the current
state of the resources.
A moderately large system could balance the load on three levels:
from the user to your web server,
from your web server to an internal platform layer,
and finally from that layer to your database.
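The distribution metrics mentioned above can be sketched in a few lines. This is a minimal illustration, not a production balancer; the backend names and capacity weights are made up for the example.

```python
import itertools
import random

# Hypothetical backend pool; the names are placeholders.
backends = ["web-1", "web-2", "web-3"]

# Round robin: hand each incoming request to the next backend in turn.
pool = itertools.cycle(backends)

def route(request_id):
    """Pick a backend for this request using round-robin."""
    return next(pool)

# Random weight based on machine capacity: pick a backend with probability
# proportional to its (assumed) relative capacity.
weights = {"web-1": 1, "web-2": 2, "web-3": 1}

def route_weighted(request_id):
    names = list(weights)
    return random.choices(names, weights=[weights[n] for n in names])[0]
```

A real balancer would also consult the current state of the resources (health checks, active connection counts) before choosing, as the definition above notes.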

Redundancy
Redundancy is the provision of functionally identical or comparable resources
in a technical system so that it keeps operating through failures (fail-safe
operation).
Types of Redundancy
Hot redundancy means that multiple systems perform the same function in
parallel.
Cold redundancy means that several functional units are available in the
system in parallel, but only one is working at a time.
Standby redundancy, or passive redundancy, adds additional resources that
are turned on only in case of failure or malfunction of the already operating
unit.
N+1 redundancy means a system of n functional units that are active at a
time, plus one passive standby unit.
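The N+1 scheme above can be sketched as a tiny failover routine. All unit names are illustrative; real systems would detect the failure via health checks rather than being told about it.

```python
# N+1 redundancy sketch: n active units plus one passive standby that is
# promoted only when an active unit fails.
active = ["unit-1", "unit-2", "unit-3"]   # the n active units
standby = ["unit-4"]                      # the +1 passive standby unit

def handle_failure(failed):
    """Remove the failed active unit and promote the standby, if available."""
    active.remove(failed)
    if standby:
        active.append(standby.pop(0))
    return active
```

After one failure the standby pool is empty, which is why higher availability targets use N+2 or hot redundancy instead.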

Five essential characteristics of cloud computing:


Self-service provisioning: users assign cloud resources to themselves, and
the resources are available when needed (self-service provisioning /
as-needed availability).
Scalability: usage fluctuations are decoupled from infrastructure
limitations (scalability).
Reliability and fault tolerance: defined quality standards of the IT
infrastructure are permanently guaranteed to the user (reliability and
fault-tolerance).
Optimization and consolidation: efficiency and economy in meeting continuous
environmental protection standards, which the cloud service provider can
optimize successively (optimization / consolidation).

Scalability
Scalability means decoupling usage fluctuations from infrastructure
limitations.
Types of Scalability
Spatial scalability: a system or application has it when its memory
requirement does not grow to unacceptably high levels as the number of
elements to be managed increases.
Temporal/spatial scalability: increasing the number of objects a system
comprises does not significantly affect its performance.
Structural scalability: the implementation does not significantly hinder
increasing the number of objects within a defined scope.

The cloud’s distributed nature brings its own set of concerns.


Cloud applications must manage:
uncertainty and non-determinism;
distributed state and communication;
failure detection and recovery;
data consistency and correctness;
message loss, partitioning, reordering, and corruption.

The typical processing flow of a stateless process is:


receive a request,
retrieve the state from a persistence store, such as a relational database,
make the requested state changes,
store the changed state back into the persistence store,
and then forget that anything happened.
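The four steps above can be sketched as a request handler that keeps no state of its own. This is a minimal illustration using an in-memory SQLite database as the persistence store; the table and counter name are invented for the example.

```python
import sqlite3

# The persistence store: all state lives here, never in the process.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE counters (name TEXT PRIMARY KEY, value INTEGER)")
db.execute("INSERT INTO counters VALUES ('hits', 0)")
db.commit()

def handle_request(name):
    # 1. retrieve the state from the persistence store
    (value,) = db.execute(
        "SELECT value FROM counters WHERE name = ?", (name,)).fetchone()
    # 2. make the requested state change
    value += 1
    # 3. store the changed state back into the persistence store
    db.execute("UPDATE counters SET value = ? WHERE name = ?", (value, name))
    db.commit()
    # 4. forget that anything happened; only the response leaves
    return value
```

Because the process holds no state between calls, any instance behind a load balancer can serve the next request.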

The constraints of the cloud environment, that make up the "cloud operating model,"
include:
Applications are limited in their ability to scale vertically on commodity
hardware, which typically leads to having many isolated autonomous services
(often called microservices).
All inter-service communication takes place over unreliable networks.
You must operate under the assumption that the underlying hardware can fail
or be restarted or moved at any time.
The services need to be able to detect and manage failure of their peers—
including partial failures.
Strong consistency and transactions are expensive. Because of the
coordination required, it is difficult to make services that manage data available,
performant, and scalable.

USL scalability
The Three Cs: Concurrency, Contention and Coherency
The three coefficients, α, β, γ, in eqn. (3) can be identified respectively
with the three Cs [Gunther 2018, SF ACM 2018]:
CONCURRENCY or ideal parallelism (with proportionality γ), which can also be
interpreted as either:
the slope associated with linear-rising scalability, i.e., the line X(N) = γN in Fig. A when α = β = 0
the maximum throughput attainable with a single load generator, i.e., X(1) = γ
CONTENTION (with proportionality α) due to waiting or queueing for shared
resources
COHERENCY or data consistency (with proportionality β) due to the delay for
data to become consistent, or cache coherent, by virtue of point-to-point exchange
of data between resources that are distributed
NOTE: When β = 0 and γ = 1, eqn. (3) reduces to Amdahl's law. See Section
1.3.1.
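The three Cs combine into the USL throughput function; a minimal sketch, assuming the standard form of eqn. (3):

```python
def usl_throughput(n, gamma, alpha, beta):
    """Universal Scalability Law, eqn. (3):
        X(N) = gamma * N / (1 + alpha*(N - 1) + beta*N*(N - 1))
    gamma: concurrency (ideal parallelism),
    alpha: contention, beta: coherency.
    """
    return gamma * n / (1 + alpha * (n - 1) + beta * n * (n - 1))

# With beta = 0 and gamma = 1 this reduces to Amdahl's law:
#     X(N) = N / (1 + alpha*(N - 1))
```

Note that X(1) = γ falls straight out of the formula, matching the "maximum throughput attainable with a single load generator" reading above.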

The independent variable N can represent either


Software Scalability:
Here, the number of users or load generators (N) is incremented on a
fixed hardware configuration.
In this case, the number of users acts as the independent variable
while the processor configuration remains fixed over the range of user-load
measurements.
This is the most common situation found in load testing environments
where tools like LoadRunner or Apache JMeter are used.
Hardware Scalability:
Here, the number of physical processors (N) is incremented in the
hardware configuration while keeping the user load per processor fixed.
In this case, the number of users executing per processor (e.g., 100
users per processor) is assumed to remain the same for every added processor.
For example, on a 32 processor platform you would apply a load of N =
3200 users to the test platform.

ACID is an acronym that stands for


atomicity,
consistency,
isolation,
and durability.
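Atomicity, the first of the four properties, can be demonstrated with SQLite's transaction support. This is an illustrative sketch; the account table and the insufficient-funds rule are invented for the example.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
db.executemany("INSERT INTO accounts VALUES (?, ?)",
               [("alice", 100), ("bob", 0)])
db.commit()

def transfer(src, dst, amount):
    """Move money atomically: either both updates commit or neither does."""
    try:
        with db:  # commits on success, rolls back on any exception
            db.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                       (amount, src))
            db.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                       (amount, dst))
            (bal,) = db.execute("SELECT balance FROM accounts WHERE name = ?",
                                (src,)).fetchone()
            if bal < 0:
                raise ValueError("insufficient funds")  # aborts the transaction
    except ValueError:
        pass  # rolled back: neither update persisted
```

If the balance check fails, the rollback undoes both UPDATEs at once, which is exactly what atomicity guarantees.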
Principles of microservices:
Single responsibility
Built around business capabilities
Design for failure

Design Patterns of Microservices


Aggregator – invokes services to collect the required information (related
data) from different services.
API Gateway – acts as a single entry point for requests made to the
microservices and creates fine-grained APIs for different clients.
Event Sourcing – creates events for changes (data) in the application state.
Using these events, developers can keep a record of the changes made.
Strangler – also known as the Vine pattern, since it works the way a vine
strangles the tree around it. For each URI, calls are routed back and forth
between old and new implementations and broken down into different domains.
Decomposition – decomposing an application into smaller microservices, each
with its own functionality. Based on the business requirements, you can
break an application into sub-components.
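The Event Sourcing pattern above can be sketched in a few lines: state is never stored directly; every change is appended to an event log, and the current state is rebuilt by replaying the events. The event types and account domain are invented for the example.

```python
# The append-only event log: the only place changes are recorded.
events = []

def record(event_type, data):
    """Append an event describing a state change; never mutate state directly."""
    events.append({"type": event_type, "data": data})

def current_balance():
    """Rebuild the current state by folding over the full event history."""
    balance = 0
    for e in events:
        if e["type"] == "deposited":
            balance += e["data"]
        elif e["type"] == "withdrawn":
            balance -= e["data"]
    return balance

record("deposited", 100)
record("withdrawn", 40)
```

Because the log is never overwritten, developers can audit every change ever made, or replay the log up to any point to reconstruct past state.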

Criteria for Choosing a Technology for Microservices


Highly observable
Support for automation
Consumer-first approach
Independent deployment
Modelled around business domain
Decentralization of components
Support for continuous integration

Advantages/Benefits of Microservices
1. Independent Development and Deployment
2. Small Focused Team
3. Small CodeBase
4. Mix of Technologies
5. Fault Isolation
6. Scalability
7. Data Isolation

Challenges/ Disadvantages of Microservices


1. Complexity
2. Testing
3. Data Integrity
4. Network Latency
5. Versioning

The three main components of a microservice include:


Containers
API Gateway
Database

Best Practices
1. Model the services around the business domain.
2. Individual teams are assigned to specific services, so there is no need
to share code or data schemas.
3. Each service's data storage should be private.
4. Each service in a microservice architecture should communicate through
well-designed APIs.
5. Tight coupling between the services should be avoided.
6. Keep domain knowledge out of the gateway.
7. There should be loose coupling and high functional cohesion between the
services.

CAP theorem - a distributed system can simultaneously provide at most two of
the following three properties:
consistency,
availability,
and partition tolerance
