CCL Exp 3
C24 - 2103134
Experiment 3
To study and implement bare-metal virtualization using Xen
Type 1 hypervisors, also known as bare-metal hypervisors, are installed directly on the
physical hardware of a server and act as the primary operating system. They provide
virtualization services without the need for a host operating system. In cloud computing
environments, Type 1 hypervisors serve several key functions:
1. Resource Isolation: Type 1 hypervisors ensure that each virtual machine operates in its
own isolated environment, preventing interference and resource contention between VMs.
This isolation enhances security and stability in cloud environments.
2. Security Enforcement: Type 1 hypervisors enforce security policies and access controls to
protect virtualized environments. They isolate VMs from each other and provide features
such as secure boot, encrypted VM storage, and virtual networking to enhance security.
3. High Availability and Fault Tolerance: Type 1 hypervisors support features such as live
migration and fault tolerance to ensure high availability of virtualized workloads. They
enable seamless migration of VMs between physical hosts and provide redundancy to
minimize downtime in case of hardware failures.
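The live-migration feature described above maps to Xen's `xl` toolstack. As a minimal sketch: the `xl list` and `xl migrate` subcommands are Xen's real command-line interface, but the Python wrapper, domain name, and host name below are hypothetical, and actually running the commands requires a Xen dom0.

```python
# Hypothetical helper around Xen's `xl` toolstack. It builds the argument
# vectors for listing running domains and live-migrating one to another host;
# only the final `run()` call needs a real Xen installation.
import subprocess

def xl_list_cmd():
    """Command that lists all running Xen domains (Domain-0 included)."""
    return ["xl", "list"]

def xl_migrate_cmd(domain, target_host):
    """Command that live-migrates `domain` to `target_host`."""
    return ["xl", "migrate", domain, target_host]

def run(cmd):
    # Requires a Xen dom0 with the xl toolstack installed and root privileges.
    return subprocess.run(cmd, capture_output=True, text=True)

if __name__ == "__main__":
    # Illustrative names only: "guest-vm1" and "host2" are not real machines.
    print(" ".join(xl_migrate_cmd("guest-vm1", "host2")))
```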
Explain the following terms: horizontal and vertical scaling, auto scaling, load
balancing
1. Horizontal scaling refers to provisioning additional servers to meet your needs, often
splitting workloads between servers to limit the number of requests any individual
server handles. In cloud computing, horizontal scaling means adding more instances
instead of moving to a larger instance size. It is much easier to accomplish without
downtime and is easier than vertical scaling to manage automatically. Limiting the
number of requests any instance handles at one time is good for performance, no
matter how large the instance, and provisioning additional instances also provides
greater redundancy in the rare event of an outage.
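The request-splitting idea above can be sketched as a round-robin assignment of requests to a pool of instances (the instance names are made up for illustration):

```python
# Minimal sketch of horizontal scaling: requests are spread round-robin
# across a pool of instances, so no single instance handles them all.
from itertools import cycle
from collections import Counter

def distribute(requests, instances):
    """Assign each request to the next instance in rotation."""
    pool = cycle(instances)
    return {req: next(pool) for req in requests}

# Nine requests over three (hypothetical) web instances.
assignments = distribute(range(9), ["web-1", "web-2", "web-3"])
load = Counter(assignments.values())
# Each instance ends up with 3 of the 9 requests.
```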
2. Vertical scaling refers to adding more or faster CPUs, memory, or I/O resources to an
existing server, or replacing one server with a more powerful one. In a data center,
administrators traditionally achieved vertical scaling by purchasing a new, more
powerful server and discarding or repurposing the old one. Today's cloud architects
can accomplish vertical scaling on AWS and Microsoft Azure simply by changing
instance sizes. AWS and Azure offer many different instance sizes, so vertical scaling
in cloud computing is possible for everything from EC2 instances to RDS databases.
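Changing instance sizes can be sketched as stepping through an ordered size family. The names below mimic AWS EC2 t3 instance types, but the table here is illustrative, not an official catalogue:

```python
# Sketch of vertical scaling as an instance-size change: pick the next
# larger size from an ordered family of instance types.
SIZES = ["t3.micro", "t3.small", "t3.medium", "t3.large", "t3.xlarge"]

def scale_up(current):
    """Return the next larger instance size, or the same size if already largest."""
    i = SIZES.index(current)
    return SIZES[min(i + 1, len(SIZES) - 1)]

print(scale_up("t3.small"))   # t3.medium
```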
3. Auto scaling (also written autoscaling or auto-scaling) is a cloud computing technique
for dynamically allocating computational resources: the number of active servers in a
farm or pool varies automatically as load fluctuates. It enables organizations to scale
cloud services such as server capacity or virtual machines up or down automatically,
based on defined conditions such as traffic or utilization levels. Cloud providers such
as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP)
offer autoscaling tools. Core autoscaling features lower cost and keep performance
reliable by seamlessly adding and removing instances as demand spikes and drops,
providing consistency despite the dynamic and at times unpredictable demand on
applications. Autoscaling works in a variety of ways depending on the platform and
resources a business uses, but several attributes are common to all approaches. For
compute, memory, and network resources, users first deploy or define a virtual
instance type with a specified capacity and predefined performance attributes. That
setup is often referred to as a launch configuration, also known as a baseline
deployment. The launch configuration is typically set up with options determined by
the CPU use, memory use, and network load the user expects for a given workload
during typical day-to-day operations.
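A threshold-based autoscaler of the kind described can be sketched as follows; the thresholds, instance bounds, and the simulated CPU trace are made-up numbers for illustration:

```python
# Toy autoscaler: scale out when average CPU exceeds an upper threshold,
# scale in when it drops below a lower one, within min/max bounds.
def autoscale(instances, avg_cpu, scale_out_at=70, scale_in_at=30,
              min_instances=1, max_instances=10):
    """Return the new instance count after one evaluation period."""
    if avg_cpu > scale_out_at and instances < max_instances:
        return instances + 1
    if avg_cpu < scale_in_at and instances > min_instances:
        return instances - 1
    return instances

count = 2
for cpu in [80, 85, 40, 20, 15]:   # simulated CPU readings over five periods
    count = autoscale(count, cpu)
print(count)   # demand spiked, then dropped: the pool grew to 4 and shrank back to 2
```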
4. Cloud load balancing is defined as the method of splitting workloads and computing
resources across a cloud computing environment. It enables an enterprise to manage
workload or application demands by distributing resources among multiple computers,
networks, or servers, and it includes handling the flow of workload traffic and
demands arriving over the Internet. Internet traffic is growing rapidly, on the order
of 100% annually, so the load on servers grows quickly and can overload them,
especially popular web servers. There are two elementary solutions to the problem of
server overloading. The first is a single-server solution, in which the server is
upgraded to a higher-performance machine; however, the new server may soon be
overloaded as well, demanding another upgrade, and the upgrade process is arduous
and expensive. The second is a multiple-server solution, in which a scalable service
system is built on a cluster of servers. Building a server cluster for network services
is therefore more cost-effective as well as more scalable.
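The multiple-server solution depends on a balancing policy for routing each request. A minimal sketch of one common policy, least connections, is shown below (the server names and connection counts are hypothetical):

```python
# Sketch of a least-connections load balancer over a server cluster:
# each new request goes to the server with the fewest active connections.
def pick_server(active):
    """active: dict mapping server name -> current connection count."""
    return min(active, key=active.get)

active = {"srv-1": 4, "srv-2": 1, "srv-3": 3}
target = pick_server(active)   # srv-2 currently has the fewest connections
active[target] += 1            # route the new request there
```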
Write the installation steps for the above-mentioned experiment, along with
screenshots:
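As a hedged sketch of the command sequence on a Debian/Ubuntu host (screenshots would be captured after each step): `xen-hypervisor-amd64` is the Debian/Ubuntu Xen package, and a reboot into the Xen entry in GRUB is required before dom0 is active. The list below records the commands for reference rather than executing them.

```python
# Installation sequence for Xen on a Debian/Ubuntu host, recorded as the
# shell commands one would run in order.
INSTALL_STEPS = [
    "sudo apt-get update",
    "sudo apt-get install -y xen-hypervisor-amd64",  # install the hypervisor
    "sudo reboot",                                   # boot into the Xen entry in GRUB
    "sudo xl info",                                  # verify the hypervisor is running
    "sudo xl list",                                  # Domain-0 should appear in the list
]

for step in INSTALL_STEPS:
    print(step)
```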