
What is virtualization?

Virtualization is the creation of a virtual -- rather than actual -- version of something, such as an operating system (OS), a server, a storage device or network resources.

Virtualization uses software that simulates hardware functionality to create a virtual system. This practice allows IT organizations to run multiple operating systems, virtual systems and applications on a single server. The benefits of virtualization include greater efficiency and economies of scale.

OS virtualization is the use of software to allow a piece of hardware to run multiple operating system images at the same time. The technology got its start on mainframes decades ago, allowing administrators to avoid wasting expensive processing power.

How virtualization works


Virtualization describes a technology in which an application, guest OS or data
storage is abstracted away from the true underlying hardware or software.

A key use of virtualization technology is server virtualization, which uses a software layer -- called a hypervisor -- to emulate the underlying hardware. This often includes the CPU, memory, input/output (I/O) and network traffic.

Hypervisors take the physical resources and separate them so they can be utilized by
the virtual environment. They can sit on top of an OS or they can be directly installed
onto the hardware. The latter is how most enterprises virtualize their systems.

The Xen hypervisor is an open source software program that is responsible for
managing the low-level interactions that occur between virtual machines (VMs) and
the physical hardware. In other words, the Xen hypervisor enables the simultaneous
creation, execution and management of various virtual machines in one physical
environment.
With the help of the hypervisor, the guest OS, normally interacting with true
hardware, is now doing so with a software emulation of that hardware; often, the guest
OS has no idea it's on virtualized hardware.

While the performance of this virtual system is not equal to the performance of the
operating system running on true hardware, the concept of virtualization works
because most guest operating systems and applications don't need the full use of the
underlying hardware.

This allows for greater flexibility, control and isolation by removing the dependency
on a given hardware platform. While initially meant for server virtualization, the
concept of virtualization has spread to applications, networks, data and desktops.

A side-by-side view of a traditional versus a virtual architecture.

The virtualization process follows the steps listed below:

1. Hypervisors detach the physical resources from their physical environments.
2. Resources are taken and divided, as needed, from the physical environment
to the various virtual environments.

3. System users work with and perform computations within the virtual
environment.

4. Once the virtual environment is running, a user or program can send an instruction that requires extra resources from the physical environment. In response, the hypervisor relays the message to the physical system and stores the changes. This process happens at almost native speed.

The virtual environment is often referred to as a guest machine or virtual machine.


The VM acts like a single data file that can be transferred from one computer to
another and opened in both; it is expected to perform the same way on every
computer.
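
To make the hypervisor's role in the process just described concrete, the sketch below asks a hypervisor for its physical resources and for the guests it is currently running. It is a minimal illustration, assuming the libvirt Python bindings are installed and that a local KVM/QEMU hypervisor is reachable at the 'qemu:///system' URI; that URI is an assumption, not something prescribed by the text above.

import libvirt

# Connect to a local KVM/QEMU hypervisor (assumed URI; adjust for your host).
conn = libvirt.open("qemu:///system")

info = conn.getInfo()  # [CPU model, memory, CPUs, MHz, ...] for the physical host
print(f"Host {conn.getHostname()}: {info[2]} CPUs, CPU model {info[0]}")

# List every guest (virtual machine) the hypervisor knows about.
for dom in conn.listAllDomains():
    state = "running" if dom.isActive() else "shut off"
    vcpus = dom.info()[3]  # dom.info() = [state, maxMem, mem, nrVirtCpu, cpuTime]
    print(f"  guest {dom.name()}: {vcpus} vCPUs, {state}")

conn.close()

Run on a virtualization host, this prints the physical resources and each guest's share of them, which is exactly the mediation the numbered steps above describe.
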
What is virtualization?

Overview
Virtualization is technology that lets you create useful IT services using resources that
are traditionally bound to hardware. It allows you to use a physical machine’s full
capacity by distributing its capabilities among many users or environments.

In more practical terms, imagine you have 3 physical servers with individual dedicated
purposes. One is a mail server, another is a web server, and the last one runs internal
legacy applications. Each server is being used at about 30% capacity—just a fraction of
their running potential. But since the legacy apps remain important to your internal
operations, you have to keep them and the third server that hosts them, right?

Traditionally, yes. It was often easier and more reliable to run individual tasks on
individual servers: 1 server, 1 operating system, 1 task. It wasn’t easy to give 1 server
multiple brains. But with virtualization, you can split the mail server into 2 unique ones
that can handle independent tasks so the legacy apps can be migrated. It’s the same
hardware, you’re just using more of it more efficiently.

Keeping security in mind, you could split the first server again so it could handle another
task—increasing its use from 30%, to 60%, to 90%. Once you do that, the now empty
servers could be reused for other tasks or retired altogether to reduce cooling and
maintenance costs.
A brief history of virtualization
While virtualization technology can be sourced back to the 1960s, it wasn’t widely
adopted until the early 2000s. The technologies that enabled virtualization—
like hypervisors—were developed decades ago to give multiple users simultaneous
access to computers that performed batch processing. Batch processing was a popular
computing style in the business sector that ran routine tasks thousands of times very
quickly (like payroll).

But, over the next few decades, other solutions to the many users/single machine
problem grew in popularity while virtualization didn’t. One of those other solutions was
time-sharing, which isolated users within operating systems—inadvertently leading
to other operating systems like UNIX, which eventually gave way to Linux®. All the
while, virtualization remained a largely unadopted, niche technology.

Fast forward to the 1990s. Most enterprises had physical servers and single-vendor
IT stacks, which didn’t allow legacy apps to run on a different vendor’s hardware. As
companies updated their IT environments with less-expensive commodity servers,
operating systems, and applications from a variety of vendors, they were bound to
underused physical hardware—each server could only run 1 vendor-specific task.

This is where virtualization really took off. It was the natural solution to 2 problems:
companies could partition their servers and run legacy apps on multiple operating
system types and versions. Servers started being used more efficiently (or not at all),
thereby reducing the costs associated with purchase, set up, cooling, and maintenance.

Virtualization’s widespread applicability helped reduce vendor lock-in and made it the
foundation of cloud computing. It’s so prevalent across enterprises today that
specialized virtualization management software is often needed to help keep track of it
all.

How does virtualization work?


Software called hypervisors separate the physical resources from the virtual
environments—the things that need those resources. Hypervisors can sit on top of an
operating system (like on a laptop) or be installed directly onto hardware (like a server),
which is how most enterprises virtualize. Hypervisors take your physical resources and
divide them up so that virtual environments can use them.
Resources are partitioned as needed from the physical environment to the many virtual
environments. Users interact with and run computations within the virtual environment
(typically called a guest machine or virtual machine). The virtual machine functions as a
single data file. And like any digital file, it can be moved from one computer to another,
opened in either one, and be expected to work the same.

When the virtual environment is running and a user or program issues an instruction
that requires additional resources from the physical environment, the hypervisor relays
the request to the physical system and caches the changes—which all happens at close
to native speed (particularly if the request is sent through an open source hypervisor
based on KVM, the Kernel-based Virtual Machine).
Types of virtualization
Data virtualization

Data that’s spread all over can be consolidated into a single source. Data virtualization
allows companies to treat data as a dynamic supply—providing processing capabilities
that can bring together data from multiple sources, easily accommodate new data
sources, and transform data according to user needs. Data virtualization tools sit in front of multiple data sources and allow them to be treated as a single source, delivering the needed data—in the required form—at the right time to any application or user.
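
As a toy illustration of this idea (not any particular data virtualization product), the sketch below puts a single Python query function in front of two hypothetical sources, a small SQL database and an in-memory key-value store, so a consumer sees one logical view without the data being copied.

import sqlite3

# Source 1 (hypothetical): customer records in a SQL database.
crm = sqlite3.connect(":memory:")
crm.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
crm.executemany("INSERT INTO customers VALUES (?, ?)", [(1, "Acme"), (2, "Globex")])

# Source 2 (hypothetical): account balances in a key-value store.
billing = {1: 120.0, 2: 75.5}

def customer_view():
    """Expose both sources as one logical view, fetched on demand."""
    for cid, name in crm.execute("SELECT id, name FROM customers"):
        yield {"id": cid, "name": name, "balance": billing.get(cid)}

for row in customer_view():
    print(row)
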

Desktop virtualization
Easily confused with operating system virtualization—which allows you to deploy
multiple operating systems on a single machine—desktop virtualization allows a central
administrator (or automated administration tool) to deploy simulated desktop
environments to hundreds of physical machines at once. Unlike traditional desktop
environments that are physically installed, configured, and updated on each machine,
desktop virtualization allows admins to perform mass configurations, updates, and
security checks on all virtual desktops.

Server virtualization

Servers are computers designed to process a high volume of specific tasks really well
so other computers—like laptops and desktops—can do a variety of other tasks.
Virtualizing a server lets it do more of those specific functions and involves
partitioning it so that the components can be used to serve multiple functions.

Learn more about server virtualization

Operating system virtualization


Operating system virtualization happens at the kernel—the central task manager of an operating system. It's a useful way to run Linux and Windows environments side by side (see the container sketch after the list below). Enterprises can also push virtual operating systems to computers, which:

 Reduces bulk hardware costs, since the computers don’t require such high out-of-the-
box capabilities.
 Increases security, since all virtual instances can be monitored and isolated.
 Limits time spent on IT services like software updates.
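
A common way to see kernel-level (OS) virtualization in practice is with Linux containers. The sketch below is a minimal example, assuming the Docker SDK for Python is installed and a Docker daemon is running; it starts two containers with different Linux userlands and shows that both report the same host kernel.

import docker

client = docker.from_env()  # talk to the local Docker daemon

# Two different Linux userlands; both share the host's single kernel.
for image in ("alpine:3.19", "ubuntu:22.04"):
    kernel = client.containers.run(image, "uname -r", remove=True)
    print(f"{image} reports kernel: {kernel.decode().strip()}")

Because containers share the host kernel, this approach runs Linux userlands only; running Windows alongside Linux requires full virtual machines as described elsewhere in this document.
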
Architecture of Virtualization
Virtualization architecture is a conceptual model that describes how virtualization is organized. Virtualization is critical to cloud computing: in the cloud, end users share data through applications hosted in the cloud, and with virtualization they can share the entire IT infrastructure as well.

Conceptually, the virtualization architecture comprises two groups of services: virtual application services and virtual infrastructure services.

The virtual application services help with application management, and the virtual infrastructure services help with infrastructure management.

Both services are embedded into a virtual data center or an operating system. The virtual services can be used on any platform and in any programming environment, and they can be accessed through an on-premises or off-premises cloud.

Virtualization services are delivered to cloud users by third-party providers. In return, cloud users pay the provider an applicable monthly or annual fee.

This fee compensates the third parties for providing cloud services to end users and for supplying the different versions of applications that end users request.

Virtualization is generally achieved through a hypervisor. A hypervisor separates operating systems from the underlying hardware and enables the host machine to run many virtual machines simultaneously while sharing the same physical computer resources. There are two methods through which virtualization architecture is achieved, described below:

 Type one: The first hypervisor type is termed a bare-metal hypervisor. It runs directly on top of the host system's hardware, delivers effective resource management, and ensures high availability of resources. Because it has direct access to the hardware, it offers better scalability, performance, and stability.
 Type two: The second hypervisor type is the hosted hypervisor. It is installed on the host operating system, and the virtual (guest) operating systems run on top of the hypervisor. This kind of setup eases and simplifies system configuration.

It additionally simplifies management tasks. However, the presence of the host operating system can limit the performance of the virtualization-enabled system and can introduce security flaws or risks.

https://www.guru99.com/virtualization-cloud-computing.html
Types of virtualization architecture
There are two major types of virtualization architecture: hosted and bare-
metal. It's important to determine the type that will be used before
implementing virtualized systems.

Hosted architecture
In this type of setup, a host OS is first installed on the hardware, followed by the virtualization software. The software, which is a hypervisor or virtual machine (VM) monitor, is required to install multiple guest OSes or VMs on the hardware in order to set up the virtualization architecture. Once the hypervisor is in place, applications can be installed and run on the VMs just as they are on physical machines.

A hosted architecture is best suited for the following:

 software development

 running legacy applications

 supporting multiple OSes

 simplifying system configuration
Bare-metal architecture
In this architecture, a hypervisor is installed directly on the hardware instead
of on top of an OS. The installation for the hypervisor and VMs happens in the
same way as with hosted architecture. A bare-metal virtualization architecture
is suitable for applications that provide real-time access or perform some type
of data processing.

Benefits of bare-metal virtualization include the following:


 effective resource management

 high resource availability

 higher scalability

 better system performance and stability


Bare-metal virtualization, also known as Type 1 Hypervisor, is a form of
virtualization where the hypervisor runs directly on the physical hardware
without the need for an underlying operating system. In this architecture, the
hypervisor acts as a thin layer between the hardware and the virtual machines
(VMs). It has direct control over the hardware resources and manages the
allocation and sharing of these resources among the VMs.

Key features of bare-metal virtualization:

1. Performance: Since the hypervisor runs directly on the hardware, it can provide better performance compared to hosted virtualization. VMs have direct access to the underlying hardware, resulting in reduced overhead and better utilization of system resources.

2. Isolation: Bare-metal virtualization offers strong isolation between VMs. Each VM operates independently and is unaware of the other VMs running on the same physical machine. This isolation enhances security and stability, as any issues or vulnerabilities in one VM are contained within that VM and do not affect others.

3. Hardware Support: Bare-metal hypervisors support a wide range of hardware, allowing virtualization on various server architectures. They often have advanced features like live migration, high availability, and hardware-assisted virtualization, which enhance performance and flexibility.

Examples of bare-metal hypervisors include VMware ESXi, Microsoft Hyper-V Server, Citrix XenServer, and KVM.
On the other hand, hosted virtualization, also known as Type 2 Hypervisor,
involves running the hypervisor on top of an existing operating system. In this
architecture, the host operating system provides the necessary device drivers
and hardware support, while the hypervisor runs as a software layer on top of
it. The virtual machines are created and managed within the host operating
system.

Key features of hosted virtualization:

1. Ease of Use: Hosted virtualization is often considered more user-friendly and easier to set up compared to bare-metal virtualization. Since it leverages an existing operating system, users can install and manage virtual machines using familiar tools and interfaces.

2. Flexibility: Hosted virtualization allows users to run multiple guest operating systems on top of the host operating system. This enables running different operating systems and software configurations simultaneously, making it suitable for desktop virtualization and development environments.

3. Resource Sharing: In hosted virtualization, the resources of the host operating system are shared among the virtual machines. The host operating system manages the allocation of CPU, memory, and disk resources to the virtual machines.

Examples of hosted virtualization solutions include VMware Workstation, Oracle VirtualBox, and Microsoft Virtual PC.
Both bare-metal and hosted virtualization have their advantages and use
cases. Bare-metal virtualization is commonly used in enterprise environments
and data centers where performance, scalability, and isolation are critical.
Hosted virtualization, on the other hand, is often used for desktop
virtualization, development and testing environments, and scenarios where
ease of use and flexibility are prioritized.
Virtual clustering, also known as virtual machine clustering or virtual server
clustering, is a technique that involves creating a group or cluster of virtual
machines (VMs) that work together as a single logical unit. These VMs are
typically hosted on a virtualization platform and are interconnected to provide
high availability, load balancing, and fault tolerance for applications and
services.

The main goal of virtual clustering is to ensure continuous availability and reliability of applications running on the cluster. It achieves this by distributing
the workload across multiple VMs, allowing for load balancing and resource
optimization. In the event of a VM failure or hardware issue, the workload can
be automatically shifted to other VMs in the cluster, minimizing downtime and
maintaining service continuity.

Here are some key aspects and benefits of virtual clustering:

1. High Availability: Virtual clustering helps achieve high availability by allowing multiple instances of an application or service to run on different VMs
simultaneously. If one VM fails, another VM in the cluster can take over the
workload seamlessly, ensuring uninterrupted operation.

2. Load Balancing: By distributing the workload across multiple VMs, virtual clustering helps balance the resource utilization and prevents any single VM
from being overloaded. This improves performance and scalability of
applications.

3. Fault Tolerance: Virtual clustering enhances fault tolerance by providing redundancy. If a VM or host fails, the applications and services can
automatically failover to other available VMs or hosts in the cluster, minimizing
the impact of failures on the overall system.

4. Scalability: Virtual clustering allows for easy scalability as additional VMs can be added to the cluster to handle increased workloads. This flexibility
enables organizations to adapt to changing demands without significant
disruptions.

5. Simplified Management: Virtual clustering often comes with management tools that provide centralized control and monitoring of the cluster. These tools
simplify the administration and configuration of the cluster, making it easier to
manage and maintain.

Virtual clustering can be implemented using various clustering technologies and virtualization platforms, depending on the specific requirements and
virtualization infrastructure in use. Examples of virtual clustering technologies
include VMware vSphere High Availability (HA), Microsoft Failover Clustering,
and Linux-based clustering solutions like Pacemaker and Corosync.

Overall, virtual clustering helps organizations achieve higher availability, better resource utilization, and improved scalability for their applications and
services by leveraging the benefits of virtualization and clustering
technologies.
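
The failover behavior described above can be illustrated with a small toy sketch. It is not based on any specific product such as vSphere HA or Pacemaker; the node names, the health map, and the "pick the least-loaded healthy host" rule are all illustrative assumptions.

# Toy cluster state: which hosts are healthy and where each VM currently runs.
nodes = {"host-a": True, "host-b": True, "host-c": True}
placement = {"vm-mail": "host-a", "vm-web": "host-b", "vm-legacy": "host-a"}

def failover(failed_node: str) -> None:
    """Restart every VM from a failed host on the least-loaded healthy host."""
    nodes[failed_node] = False
    healthy = [n for n, ok in nodes.items() if ok]
    for vm, host in list(placement.items()):
        if host == failed_node:
            target = min(healthy, key=lambda n: list(placement.values()).count(n))
            placement[vm] = target
            print(f"{vm} restarted on {target}")

failover("host-a")
print(placement)
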
Virtualization technology has transformed the way businesses operate their enterprise
environment. With the help of virtualization, companies can run multiple operating systems
and applications on a single physical server, thus reducing hardware costs while maximizing
computing power.

A Virtual Machine Cluster, also known as a VM Cluster, takes this technology one step further. It
is a group of several virtual machines hosted on multiple physical servers that are
interconnected and managed as a single entity. In simpler terms, a VM Cluster is a collection of
virtual machines that offer better performance, higher availability, and simplified management
than standalone virtual machines.

The primary purpose of a VM cluster is to provide uninterrupted access to mission-critical applications and services. If one of the physical servers hosting a virtual machine fails, the VM
cluster distributes the workload to the other servers instantly, ensuring minimal downtime and
preventing data loss. This makes VM clusters highly resilient and fault-tolerant.

Another significant advantage of a VM cluster is its scalability. Organizations can quickly and
easily add or remove virtual machines as per their requirement without disrupting the existing
virtual machines. Since the resources are distributed across multiple physical servers, it is
easier to allocate computing resources as per the need of the application or service.

In a VM Cluster, each virtual machine is assigned its own set of resources and tasks. It has its
own operating system and applications, and the data resides within it. However, from the end-
users’ perspective, the virtual machines in the cluster appear as a single virtual environment.
This makes it easier for the IT team to manage the virtual machines as well as monitor and
troubleshoot issues that may arise.

The management of a VM cluster can be done through software tools that offer a centralized
management console. Such tools allow the IT team to monitor virtual machines’ status,
configure settings, and perform other administrative functions. In addition, VM clusters enable
businesses to create virtualized test environments that can be used to develop new applications and prototypes and to conduct testing.

In conclusion, a virtual machine cluster is the next step in virtualization technology, offering a
more robust and redundant computing environment, high availability, scalability, and simplified
management. Organizations that run critical applications and services, operate many virtual machines, or need the elasticity to handle peak loads will benefit from deploying a VM cluster. With minimal hardware requirements, it can be an affordable option for many
businesses to enhance their computing environment’s efficiency and reliability.
Apart from the virtualization architecture, there are various software solutions used in
virtualization:

1. VMware vSphere: A comprehensive suite of virtualization products, including the ESXi hypervisor, vCenter Server for management, and other tools for resource
allocation, high availability, and virtual networking.
2. Microsoft Hyper-V: Microsoft's virtualization platform that provides hypervisor-
based virtualization capabilities and management tools.
3. Citrix XenServer: An open-source hypervisor-based virtualization platform that
offers enterprise-level features, including high availability, live migration, and
centralized management.
4. KVM (Kernel-based Virtual Machine): A Linux kernel module that provides
hardware-assisted virtualization and serves as a foundation for various
virtualization solutions, such as Red Hat Virtualization and Proxmox VE.
5. Docker: A containerization platform that allows developers to package
applications and their dependencies into containers for easy deployment and
portability.
6. Kubernetes: An open-source container orchestration platform that automates the
deployment, scaling, and management of containerized applications.
7. OpenStack: An open-source cloud computing platform that enables the creation
and management of private and public clouds, utilizing various virtualization
technologies like KVM and Docker.

These are just a few examples of virtualization architectures and software solutions. The
choice of architecture and software depends on specific requirements, use cases, and
the desired level of isolation and management capabilities.
What is app virtualization?
App virtualization (application virtualization) is the separation of an installation of
an application from the client computer accessing it.

From the user's perspective, the application works just like it would if it lived on the
user's device. The user can move or resize the application window as well as carry out
keyboard and mouse operations. There might be subtle differences at times, but for
the most part, the user should have a seamless experience.

How application virtualization works


Although there are multiple ways to virtualize applications, IT teams often take a
server-based approach, delivering the applications without having to install them on
individual desktops. Instead, administrators implement remote applications on a
server in the organization's data center or with a hosting service and then deliver them
to the users' desktops.

To make this possible, IT must use an application virtualization product. Application virtualization vendors and their products include Microsoft App-V; Citrix Virtual
Apps; Parallels Remote Application Server; and VMware ThinApp or App Volumes,
both of which are included with VMware Horizon. VMware also offers Horizon Apps
to further support app virtualization.

The application virtualization software essentially transmits the application as individual pixels from the hosting server to the desktops using a remote display
protocol such as Microsoft's Remote Desktop Protocol, Citrix HDX, VMware Blast
Extreme or PC over IP. The user can then access and use the app as though it were
installed locally. Any user actions are transmitted back to the server, which carries
them out.

Benefits of app virtualization


App virtualization can be an effective way for organizations to implement and
maintain their desktop applications. One of the benefits of application virtualization is
that administrators only need to install an app once to a centralized server rather than
to multiple desktops. This also makes it simpler to update applications and roll out
patches.

In addition, administrators have an easier time controlling application access. For example, if a user should no longer have access to an application, the administrator can deny access permissions to the application without uninstalling it from the user's desktop.

App virtualization technology makes it possible to run applications that might conflict
with a user's desktop applications or with other virtualized applications.

Users can also access virtualized applications from thin clients, or non-Windows computers. The applications are immediately available without requiring users to wait
for long install or load operations. If a computer is lost or stolen, sensitive application
data stays on the server and does not get compromised.

Drawbacks of app virtualization


Application virtualization does have its challenges, however. Not all applications are
suited to virtualization. Graphics-intensive applications, for example, can get bogged
down in the rendering process. In addition, users require a steady and reliable
connection to the server to use the applications.

The use of peripheral devices can get more complicated with app virtualization,
especially when it comes to printing. System monitoring products can also have
trouble with virtualized applications, making it difficult to troubleshoot and isolate
performance issues.

What about streaming applications?


With application streaming, the virtualized application runs on the end user's local
computer. When a user requests an application, the local computer downloads its
components on demand. Only certain parts of an application are required to launch the
app; the remainder download in the background as needed.

Once completely downloaded, a streamed application can function without a network connection. Various models and degrees of isolation ensure streaming applications do
not interfere with other applications and can be cleanly removed when the user closes
the application.
While virtualization offers numerous benefits, it is important to be aware of potential pitfalls and
challenges that can arise. Here are some common pitfalls of virtualization:

1. Performance Overhead: Virtualization introduces a layer of abstraction between the physical hardware
and virtual machines, which can result in a slight performance overhead. Although this overhead has been
significantly reduced with advancements in virtualization technologies, resource-intensive workloads or
improper resource allocation can impact performance.

2. Resource contention: In a virtualized environment, multiple virtual machines share physical resources
such as CPU, memory, and storage. If resources are not allocated and managed properly, resource
contention can occur. This can lead to performance degradation and impact the performance of virtual
machines running on the same host.

3. Complexity: Virtualization introduces additional complexity in managing the virtualization infrastructure. Administrators need to have a good understanding of virtualization concepts,
configuration, and troubleshooting techniques. Complexity increases as the virtualized environment
grows in scale and includes advanced features such as clustering, live migration, and high availability.

4. Single Point of Failure: While virtualization can enhance overall system availability, it also introduces a
potential single point of failure—the hypervisor or virtualization layer. If the hypervisor fails, all the
virtual machines running on it may be affected. Implementing proper high availability and backup
strategies can help mitigate this risk.

5. Compatibility Issues: Although virtualization provides improved compatibility in many cases, there can
still be instances where certain applications or drivers may not work properly in a virtualized
environment. This can be due to software licensing restrictions, hardware dependencies, or specific
configurations that are not supported within the virtualization environment.

6. Overcommitting Resources: It is possible to overcommit resources, such as CPU and memory, in a virtualized environment. While this can increase overall resource utilization, it should be done cautiously (a small overcommit calculation follows this section).
Overcommitting resources excessively can lead to performance degradation and negatively impact the
performance of virtual machines.

7. Licensing and Compliance: Virtualization can introduce complexities in software licensing and
compliance. Some software vendors have specific licensing requirements for virtualized environments,
and organizations need to ensure compliance with these licensing terms to avoid legal and financial
issues.
To mitigate these pitfalls, it is important to carefully plan and design virtualization deployments, properly
allocate resources, implement monitoring and management tools, and stay updated with best practices
and vendor guidelines. Regular performance monitoring and capacity planning can help optimize
resource usage and identify and resolve potential issues before they impact critical workloads.
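
As a small capacity-planning aid of the kind mentioned above, the sketch below computes vCPU and memory overcommit ratios for a single host. The host sizes and the VM list are hypothetical numbers chosen only to illustrate the calculation.

# Hypothetical host capacity.
physical_cores, physical_mem_gib = 32, 256

# Hypothetical virtual machines placed on the host: (vCPUs, memory in GiB).
vms = [(4, 16), (4, 16), (8, 32), (2, 8), (16, 64), (8, 48)]

vcpus = sum(c for c, _ in vms)
mem_gib = sum(m for _, m in vms)

print(f"vCPU overcommit:   {vcpus} vCPUs on {physical_cores} cores = {vcpus / physical_cores:.2f}x")
print(f"Memory overcommit: {mem_gib} GiB on {physical_mem_gib} GiB = {mem_gib / physical_mem_gib:.2f}x")

Ratios well above 1.0x are where the performance degradation described in pitfall 6 starts to become a risk, though acceptable limits depend on the workload.
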

Pitfalls
Mismatching Servers

This aspect is commonly overlooked, especially by smaller companies that don't invest sufficient funds in their IT infrastructure and prefer to build it from
several bits and pieces. This usually leads to the simultaneous virtualization of
servers that come with different chip technology (AMD and Intel). Frequently,
migration of virtual machines between them won't be possible and server
restarts will be the only solution. This is a major hindrance and actually means
losing the benefits of live migration and virtualization.

Creating Too many Virtual Machines per Server

One of the great things about virtual machines is that they can be easily
created and migrated from server to server according to needs. However, this
can also create problems sometimes because IT staff members may get
carried away and deploy more Virtual Machines than a server can handle.

This will actually lead to a loss of performance that can be quite difficult to
spot. A practical way to work around this is to have some policies in place
regarding VM limitations and to make sure that the employees adhere to
them.

Misplacing Applications

A virtualized infrastructure is more complex than a traditional one, and with a number of applications deployed, losing track of applications is a distinct
possibility. Within a physical server infrastructure keeping track of all the apps
and the machines running them isn’t a difficult task. However, once you add a
significant number of virtual machines to the equation, things can get messy
and App patching, software licensing and updating can turn into painfully long
processes.

1. Detection/Discovery
You can't manage what you can't see! IT departments are often unprepared for the
complexity associated with understanding what VMs (virtual machines) exist and which
are active or inactive. To overcome these challenges, discovery tools need to extend to
the virtual world by identifying Virtual Machine Disk Format (.vmdk) files and how many
exist within the environment. This will identify both active and inactive VMs, as the sketch below illustrates.
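
A minimal discovery sketch, assuming a Linux-style datastore mount point (the path and the 90-day "dormant" threshold are illustrative assumptions): it walks the datastore, counts .vmdk disk files, and flags disks that have not changed recently as possibly belonging to inactive VMs.

import time
from pathlib import Path

DATASTORE = Path("/vmfs/volumes")      # hypothetical datastore mount point
STALE_AFTER = 90 * 24 * 3600           # 90 days, in seconds

active, dormant = [], []
for vmdk in DATASTORE.rglob("*.vmdk"):
    age = time.time() - vmdk.stat().st_mtime
    (dormant if age > STALE_AFTER else active).append(vmdk)

print(f"{len(active)} recently modified disks, {len(dormant)} possibly dormant disks")
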

2. Correlation
Difficulty in understanding which VMs are on which hosts and identifying which business
critical functions are supported by each VM is a common and largely unforeseen
problem encountered by IT departments employing virtualization. Mapping guest-to-host relationships and grouping the VMs by criticality and application is a best practice when implementing virtualization.

3. Configuration management
Ensuring VMs are configured properly is crucial in preventing performance bottlenecks
and security vulnerabilities. Complexities in VM provisioning and offline VM patching are a frequent issue for IT departments. A technical controls configuration management database (CMDB) is critical to understanding the configurations of VMs, especially dormant ones. The CMDB will provide the current state of a VM even if it is dormant,
allowing a technician to update the configuration by auditing and making changes to the
template.

4. Additional security considerations


If a host is vulnerable, all associated guest VMs and the business applications on those
VMs are also at risk. This could have a far more wide-reaching impact than the same exploit on a single physical server. Treat a virtual machine just like any other system and enforce security policies and compliance. Also, use an application that dynamically maps guest-to-host relationships and tracks guest VMs as they move from host to host.

5. VM identity management issues


Virtualization introduces complexities that often lead to issues surrounding separation of
duties. Who manages these machines? Do application owners have visibility into
changes being made? Identify roles and criticality and put them through the same
processes you leverage for physical devices including change management, release
management and hardening guidelines.
Grid Computing


Grid computing can be defined as a network of computers working together to perform a task that would be difficult for a single machine. All machines on that network
work under the same protocol to act as a virtual supercomputer. The task that they work
on may include analyzing huge datasets or simulating situations that require high
computing power. Computers on the network contribute resources like processing power
and storage capacity to the network. 
Grid Computing is a subset of distributed computing, where a virtual supercomputer
comprises machines on a network connected by some bus, mostly Ethernet or sometimes
the Internet. It can also be seen as a form of Parallel Computing where instead of many
CPU cores on a single machine, it contains multiple cores spread across various
locations. The concept of grid computing isn’t new, but it is not yet perfected as there are
no standard rules and protocols established and accepted by people. 
Working: 
A Grid computing network mainly consists of these three types of machines 
1. Control Node: A computer, usually a server or a group of servers which
administrates the whole network and keeps the account of the resources in the
network pool.
2. Provider: The computer that contributes its resources to the network resource pool.
3. User: The computer that uses the resources on the network.
When a computer makes a request for resources to the control node, the control node
gives the user access to the resources available on the network. When it is not in use, it should ideally contribute its resources to the network. Hence a normal computer on the network can switch between being a user or a provider based on its needs. The nodes may
consist of machines with similar platforms using the same OS called homogeneous
networks, else machines with different platforms running on various different OSs called
heterogeneous networks. This is the distinguishing part of grid computing from other
distributed computing architectures. 
To control the network and its resources, a software/networking layer generally known as middleware is used. The middleware is responsible for administering the network, and the control nodes are merely its executors. Because a grid computing system should use only the unused resources of a computer, it is the job of the control node to ensure that no provider is overloaded with tasks. 
Another job of the middleware is to authorize any process that is executed on the network. In a grid computing system, a provider gives the user permission to run processes on its computer, which poses a significant security threat to the network, so the middleware must ensure that no unwanted task is executed on the network. A toy sketch of these roles follows. 
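
The sketch below is a toy illustration of the control node, provider, and user roles just described. The machine names and the "dispatch to the least-loaded provider" rule are illustrative assumptions, not part of any grid middleware standard.

# Providers registered with the control node, mapped to their queued task count.
providers = {"lab-pc-1": 0, "lab-pc-2": 0, "office-pc-7": 0}

def submit(task: str) -> str:
    """Control node: dispatch the task to the provider with the most spare capacity."""
    target = min(providers, key=providers.get)
    providers[target] += 1
    print(f"{task} dispatched to {target}")
    return target

# A user submits work; the control node spreads it across providers.
for task in ("simulate-run-1", "simulate-run-2", "analyze-dataset"):
    submit(task)
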
The meaning of the term Grid Computing has changed over the years, according to “The
Grid: Blueprint for a new computing infrastructure” by Ian Foster and Carl Kesselman
published in 1999, the idea was to consume computing power like electricity is consumed
from a power grid. This idea is similar to the current concept of cloud computing,
whereas now grid computing is viewed as a distributed collaborative network. Currently,
grid computing is being used in various institutions to solve a lot of mathematical,
analytical, and physics problems. 
Advantages of Grid Computing: 
1. It is not centralized, as there are no servers required, except the control node
which is just used for controlling and not for processing.
2. Multiple heterogeneous machines i.e. machines with different Operating
Systems can use a single grid computing network.
3. Tasks can be performed parallelly across various physical locations and the
users don’t have to pay for them (with money).
Disadvantages of Grid Computing:
1. The software of the grid is still in an early stage of evolution.
2. A super-fast interconnect between computer resources is the need of the hour.
3. Licensing across many servers may make it prohibitive for some applications.
4. Many groups are reluctant to share resources.
5. Trouble in the control node can bring the whole network to a halt.
Virtualization can play a significant role in grid computing environments, enhancing
flexibility, resource utilization, and manageability. Grid computing involves the
coordination and sharing of computing resources across multiple geographically
distributed systems to solve complex computational problems or handle large-scale
data processing. Here's how virtualization can be applied in grid computing:

1. Resource Pooling: Virtualization allows for the creation of a shared pool of virtual
machines (VMs) across different physical hosts in the grid. This pooling of resources
enables efficient utilization of computing power and storage capacity, as VMs can be
dynamically allocated to tasks based on demand.

2. Scalability: Virtualization provides scalability in grid environments by enabling the dynamic provisioning and deployment of VMs. As workload demands fluctuate,
additional VMs can be provisioned to handle increased tasks, and VMs can be
deprovisioned when their processing power is no longer required. This elasticity allows
for efficient resource allocation and cost optimization.

3. Isolation and Security: Virtualization provides isolation between VMs, ensuring that
applications and processes running within each VM do not interfere with each other.
This isolation enhances security by containing any potential vulnerabilities or attacks
within individual VMs, reducing the risk of compromising the entire grid.

4. Migration and Load Balancing: Virtual machine migration enables the live movement
of VMs between physical hosts within the grid without interrupting running applications.
This capability can be leveraged for load balancing, as VMs can be dynamically
migrated to balance the workload across the grid, optimizing resource utilization and
improving overall performance.

5. Fault Tolerance and High Availability: By using virtualization technologies like live
migration and clustering, grid environments can achieve fault tolerance and high
availability. In the event of a physical host failure or maintenance, VMs can be
automatically migrated to other hosts, ensuring minimal downtime and uninterrupted
grid operations.

6. Virtual Appliance Deployment: Grid computing can benefit from the concept of virtual
appliances, which are pre-configured VMs with specific software stacks or applications.
These virtual appliances can be easily deployed within the grid, simplifying the setup
and configuration process for different tasks or services.

Overall, virtualization in grid computing provides greater flexibility, scalability, resource optimization, and fault tolerance. It enables efficient utilization of computing resources,
simplifies management, and enhances the overall performance and reliability of grid-
based systems.
What is Virtualization?
Virtualization is technology that you can use to create virtual representations of servers, storage,
networks, and other physical machines. Virtual software mimics the functions of physical hardware
to run multiple virtual machines simultaneously on a single physical machine. Businesses use
virtualization to use their hardware resources efficiently and get greater returns from their
investment. It also powers cloud computing services that help organizations manage infrastructure
more efficiently.

Why is virtualization important?


By using virtualization, you can interact with any hardware resource with greater flexibility. Physical
servers consume electricity, take up storage space, and need maintenance. You are often limited by
physical proximity and network design if you want to access them. Virtualization removes all these
limitations by abstracting physical hardware functionality into software. You can manage, maintain,
and use your hardware infrastructure like an application on the web.

Virtualization example
Consider a company that needs servers for three functions:

1. Store business email securely


2. Run a customer-facing application
3. Run internal business applications

Each of these functions has different configuration requirements: 

 The email application requires more storage capacity and a Windows operating system.
 The customer-facing application requires a Linux operating system and high processing
power to handle large volumes of website traffic.
 The internal business application requires iOS and more internal memory (RAM).

To meet these requirements, the company sets up three different dedicated physical servers for
each application. The company must make a high initial investment and perform ongoing
maintenance and upgrades for one machine at a time. The company also cannot optimize its
computing capacity. It pays 100% of the servers’ maintenance costs but uses only a fraction of their
storage and processing capacities.

Efficient hardware use

With virtualization, the company creates three digital servers, or virtual machines, on a single
physical server. It specifies the operating system requirements for the virtual machines and can use
them like the physical servers. However, the company now has less hardware and fewer related
expenses. 

Infrastructure as a service

The company can go one step further and use a cloud instance or virtual machine from a cloud
computing provider such as AWS. AWS manages all the underlying hardware, and the company can
request server resources with varying configurations. All the applications run on these virtual servers
without the users noticing any difference. Server management also becomes easier for the
company’s IT team.
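
To show what "requesting server resources with varying configurations" looks like in practice, here is a minimal, hedged sketch using the AWS SDK for Python (boto3). It assumes configured AWS credentials; the AMI ID is a placeholder rather than a real image, and the region and instance type are arbitrary illustrative choices.

import boto3

# Connect to the EC2 service in one region (region choice is an assumption).
ec2 = boto3.resource("ec2", region_name="us-east-1")

# Request one small virtual server; the hardware underneath is AWS's concern.
instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder image ID: substitute a real AMI
    InstanceType="t3.micro",          # a small, general-purpose virtual machine
    MinCount=1,
    MaxCount=1,
)
print("Requested virtual server:", instances[0].id)
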

What is virtualization?
To properly understand Kernel-based Virtual Machine (KVM), you first need to understand some
basic concepts in virtualization. Virtualization is a process that allows a computer to share its
hardware resources with multiple digitally separated environments. Each virtualized environment
runs within its allocated resources, such as memory, processing power, and storage. With
virtualization, organizations can switch between different operating systems on the same server
without rebooting. 

Virtual machines and hypervisors are two important concepts in virtualization.

Virtual machine
A virtual machine is a software-defined computer that runs on a physical computer with a separate
operating system and computing resources. The physical computer is called the host machine and
virtual machines are guest machines. Multiple virtual machines can run on a single physical
machine. Virtual machines are abstracted from the computer hardware by a hypervisor.

Hypervisor
The hypervisor is a software component that manages multiple virtual machines in a computer. It
ensures that each virtual machine gets the allocated resources and does not interfere with the
operation of other virtual machines. There are two types of hypervisors.

Type 1 hypervisor

A type 1 hypervisor, or bare-metal hypervisor, is a hypervisor program installed directly on the computer’s hardware instead of the operating system. Therefore, type 1 hypervisors have better
performance and are commonly used by enterprise applications. KVM uses the type 1 hypervisor to
host multiple virtual machines on the Linux operating system.

Type 2 hypervisor

Also known as a hosted hypervisor, the type 2 hypervisor is installed on an operating system. Type 2
hypervisors are suitable for end-user computing.
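
Since KVM relies on hardware virtualization support in the CPU, a quick way to see whether a Linux host can act as a KVM type 1 hypervisor is to check for the CPU virtualization flags and the /dev/kvm device. The sketch below assumes a Linux host; the paths are standard, but the check is illustrative rather than exhaustive.

from pathlib import Path

# Intel VT-x shows up as the 'vmx' CPU flag, AMD-V as 'svm'.
cpuinfo = Path("/proc/cpuinfo").read_text()
hw_virt = "vmx" in cpuinfo or "svm" in cpuinfo

# /dev/kvm appears once the kvm kernel module is loaded.
kvm_ready = Path("/dev/kvm").exists()

print("CPU virtualization extensions:", "present" if hw_virt else "missing")
print("/dev/kvm device:", "available" if kvm_ready else "not available (load the kvm module or enable virtualization in firmware)")
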

What are the benefits of virtualization?


Virtualization provides several benefits to any organization:

Efficient resource use


Virtualization improves the use of hardware resources in your data center. For example, instead of
running one server on one computer system, you can create a virtual server pool on the same
computer system by using and returning servers to the pool as required. Having fewer underlying
physical servers frees up space in your data center and saves money on electricity, generators, and
cooling appliances. 
Automated IT management
Now that physical computers are virtual, you can manage them by using software tools.
Administrators create deployment and configuration programs to define virtual machine templates.
You can duplicate your infrastructure repeatedly and consistently and avoid error-prone manual
configurations.

Faster disaster recovery


When events such as natural disasters or cyberattacks negatively affect business operations,
regaining access to IT infrastructure and replacing or fixing a physical server can take hours or even
days. By contrast, the process takes minutes with virtualized environments. This prompt response
significantly improves resiliency and facilitates business continuity so that operations can continue
as scheduled.  

https://www.tutorialspoint.com/cloud_computing/cloud_computing_virtualization.htm
Virtualization is a fundamental technology that underpins cloud computing. In fact,
virtualization is a key enabler of the cloud computing model, providing the foundation for
its scalability, resource pooling, and multi-tenancy capabilities. Here's how virtualization
is used in cloud computing:

1. Infrastructure as a Service (IaaS): In IaaS, virtualization allows for the creation of virtual machines (VMs) that can be provisioned on-demand by users. Virtualization
abstracts the underlying hardware resources, enabling users to deploy and manage
VMs with their desired operating systems and applications. This flexibility allows for
efficient resource allocation, scalability, and self-service provisioning in the cloud.

2. Platform as a Service (PaaS): PaaS providers often utilize virtualization to abstract the underlying infrastructure and provide a platform for developing, testing, and
deploying applications. Virtualization allows for the isolation of application environments
and provides a standardized platform for developers to build and deploy their
applications without worrying about the underlying infrastructure.

3. Software as a Service (SaaS): SaaS providers may utilize virtualization to host and
deliver their applications to end-users. Virtualization allows for the consolidation of
multiple instances of the application on a shared infrastructure, providing efficient
resource utilization and multi-tenancy capabilities.

4. Resource Pooling and Elasticity: Virtualization enables resource pooling in the cloud,
where physical resources such as compute, storage, and networking are abstracted and
shared among multiple VMs or containers. This pooling allows for dynamic allocation of
resources based on demand, enabling scalability and elasticity to handle fluctuating
workloads.

5. Multi-tenancy: Virtualization enables the secure isolation of resources and data between different users or tenants in a multi-tenant cloud environment. Each user's VMs
or containers run in their own isolated virtual environment, ensuring privacy and security
while sharing the underlying physical infrastructure.
6. Live Migration and High Availability: Virtualization technologies like live migration
enable the movement of VMs between physical hosts without service disruption. This
capability allows for load balancing, maintenance, and fault tolerance, ensuring high
availability of services in the cloud.

7. Resource Optimization: Virtualization helps optimize resource utilization in the cloud by allowing for better consolidation of workloads and the ability to dynamically adjust
resource allocation based on demand. This results in improved efficiency and cost-
effectiveness.

Overall, virtualization is a critical component of cloud computing, enabling the efficient utilization of resources, scalability, multi-tenancy, and flexibility in deploying and
managing cloud-based services.

https://www.geeksforgeeks.org/virtual-machine-security-in-cloud/

What is virtualized security?


Virtualized security, or security virtualization, refers to security solutions that are software-
based and designed to work within a virtualized IT environment. This differs from traditional,
hardware-based network security, which is static and runs on devices such as traditional
firewalls, routers, and switches. 

In contrast to hardware-based security, virtualized security is flexible and dynamic. Instead of being tied to a device, it can be deployed anywhere in the network and is often cloud-based. This
is key for virtualized networks, in which operators spin up workloads and applications
dynamically; virtualized security allows security services and functions to move around with
those dynamically created workloads. 

Cloud security considerations (such as isolating multitenant environments in public cloud environments) are also important to virtualized security. The flexibility of virtualized security is
helpful for securing hybrid and multi-cloud environments, where data and workloads migrate
around a complicated ecosystem involving multiple vendors.


What are the benefits of virtualized security?


Virtualized security is now effectively necessary to keep up with the complex security demands
of a virtualized network, plus it’s more flexible and efficient than traditional physical security.
Here are some of its specific benefits:
 Cost-effectiveness: Virtualized security allows an enterprise to maintain a
secure network without a large increase in spending on expensive
proprietary hardware. Pricing for cloud-based virtualized security services
is often determined by usage, which can mean additional savings for
organizations that use resources efficiently.
 Flexibility: Virtualized security functions can follow workloads anywhere,
which is crucial in a virtualized environment. It provides protection across
multiple data centers and in multi-cloud and hybrid cloud environments,
allowing an organization to take advantage of the full benefits of
virtualization while also keeping data secure.
 Operational efficiency: Quicker and easier to deploy than hardware-based
security, virtualized security doesn’t require IT teams to set up and
configure multiple hardware appliances. Instead, they can set up security
systems through centralized software, enabling rapid scaling. Using
software to run security technology also allows security tasks to be
automated, freeing up additional time for IT teams.
 Regulatory compliance: Traditional hardware-based security is static and
unable to keep up with the demands of a virtualized network, making
virtualized security a necessity for organizations that need to maintain
regulatory compliance.
How does virtualized security work?
Virtualized security can take the functions of traditional security hardware appliances (such as
firewalls and antivirus protection) and deploy them via software. In addition, virtualized security
can also perform additional security functions. These functions are only possible due to
the advantages of virtualization, and are designed to address the specific security needs of a
virtualized environment. 

For example, an enterprise can insert security controls (such as encryption) between the
application layer and the underlying infrastructure, or use strategies such as micro-segmentation
to reduce the potential attack surface. 

Virtualized security can be implemented as an application directly on a bare metal hypervisor (a position it can leverage to provide effective application monitoring) or as a hosted service on a
virtual machine. In either case, it can be quickly deployed where it is most effective, unlike
physical security, which is tied to a specific device. 

What are the risks of virtualized security?


The increased complexity of virtualized security can be a challenge for IT, which in turn leads to
increased risk. It’s harder to keep track of workloads and applications in a virtualized
environment as they migrate across servers, which makes it more difficult to monitor security
policies and configurations. And the ease of spinning up virtual machines can also contribute to
security holes. 
It’s important to note, however, that many of these risks are already present in a virtualized
environment, whether security services are virtualized or not. Following enterprise security best
practices (such as spinning down virtual machines when they are no longer needed and using
automation to keep security policies up to date) can help mitigate such risks.
How is physical security different from
virtualized security?
Traditional physical security is hardware-based, and as a result, it’s inflexible and static. The
traditional approach depends on devices deployed at strategic points across a network and is
often focused on protecting the network perimeter (as with a traditional firewall). However, the
perimeter of a virtualized, cloud-based network is necessarily porous and workloads and
applications are dynamically created, increasing the potential attack surface. 
Traditional security also relies heavily upon port and protocol filtering, an approach that’s
ineffective in a virtualized environment where addresses and ports are assigned dynamically. In
such an environment, traditional hardware-based security is not enough; a cloud-based network
requires virtualized security that can move around the network along with workloads and
applications.

What are the different types of virtualized security?
There are many features and types of virtualized security, encompassing network
security, application security, and cloud security. Some virtualized security technologies are
essentially updated, virtualized versions of traditional security technology (such as next-
generation firewalls). Others are innovative new technologies that are built into the very fabric of
the virtualized network. 

Some common types of virtualized security features include:

 Segmentation, or making specific resources available only to specific
applications and users. This typically takes the form of controlling traffic
between different network segments or tiers.
 Micro-segmentation, or applying specific security policies at the workload
level to create granular secure zones and limit an attacker’s ability to move
through the network. Micro-segmentation divides a data center into
segments and allows IT teams to define security controls for each segment
individually, bolstering the data center’s resistance to attack.
 Isolation, or separating independent workloads and applications on the
same network. This is particularly important in a multitenant public
cloud environment, and can also be used to isolate virtual networks from
the underlying physical infrastructure, protecting the infrastructure from
attack.
Anatomy of Cloud Computing

Author: Saurabh Anand


Introduction
The anatomy of cloud computing describes the structure of the cloud. It is not the same as cloud
architecture: the anatomy does not cover the dependencies or technology on which the cloud
works, whereas the architecture defines and describes the technology over which the cloud is
working. The anatomy of cloud computing can therefore be considered a part of the architecture
of the cloud.
 
Cloud storage architectures include a front end that exposes an API for accessing storage. In
traditional storage systems this API is the Small Computer Systems Interface (SCSI) protocol;
in the cloud, however, these protocols are evolving. The back end could use an internal protocol
for implementing specific features or a standard interface to the physical disks.
 
The storage logic is a layer of middleware that sits behind the front end. This layer adds a range
of features, such as replication and data reduction, on top of traditional data-placement
algorithms. Finally, the back end implements data storage at the physical level. 
Components of cloud anatomy

 
 
Application
The uppermost layer is the application layer. In this layer, any application can be executed.
Platform
This component comprises platforms that are in charge of the application's execution. This
platform bridges the gap between infrastructure and application.
Virtualised Infrastructure
The infrastructure is made up of resources that the other components operate on. This allows the
user to perform computations.
Virtualization
Virtualization is the process of overlaying logical resource components on top of physical
resources. The infrastructure is made up of discrete and autonomous logical components.
Server/Storage/Datacentre
This is the physical component of the cloud provided by servers and storage units.

Next, we discuss the layers of the anatomy of cloud computing.

Layers of the anatomy of cloud computing


Several layers are responsible for carrying out cloud processes smoothly. Some of them are
discussed below.
Service Catalog
The service catalog is critical to the definition of the cloud, since it specifies the types of services
that the cloud can provide and what they cost to the end user. The architecture is the first thing
drafted before a cloud is built. Before processing each request for a new resource, the service
management layer consults the service catalog.
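As a rough illustration of that lookup, the sketch below models a hypothetical service catalog in Python and shows a request being validated against it before provisioning; the offering names, sizes, and hourly rates are invented for the example.

```python
# Hypothetical service catalog: offering name -> specification and hourly rate (USD).
SERVICE_CATALOG = {
    "small-vm": {"vcpus": 2, "ram_gb": 4, "rate_per_hour": 0.05},
    "large-vm": {"vcpus": 8, "ram_gb": 32, "rate_per_hour": 0.40},
}

def validate_request(offering: str) -> dict:
    """The service management layer checks the catalog before provisioning."""
    if offering not in SERVICE_CATALOG:
        raise ValueError(f"'{offering}' is not offered by this cloud")
    return SERVICE_CATALOG[offering]

if __name__ == "__main__":
    spec = validate_request("small-vm")
    print(f"Provisioning small-vm: {spec['vcpus']} vCPUs, {spec['ram_gb']} GB RAM")
```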
Cloud Life cycle Management Layer (CLM)
The CLM layer handles the coordination of all other layers in the cloud. All internal and external
queries are directed to the CLM layer initially. Internally, CLM may send requests and actions to
different layers for processing.
Provisioning and Configuration Module
It's the lowest cloud level, usually found on bare hardware (as firmware) or on top of the
hypervisor layer. Its purpose is to hide the underlying hardware and provide a standard
mechanism for spawning virtual machine instances on demand. It also manages the virtual
machine's operating systems and applications post-configuration.
Monitoring and Optimization
This layer is in charge of cloud monitoring for all services, storage, networking, and application
components. Based on the collected statistics, it can conduct routine activities to optimize the
behavior of the infrastructure components and provide essential data to the cloud administrator
so that the setup can be further tuned for better usage and performance.
Metering and Chargeback
This layer contains utilities for calculating cloud resource utilization. The metering module
gathers all usage data per domain and per user. This module provides the cloud administrator with
enough information to track ongoing resource usage regularly and to generate bills based on that
usage.
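A minimal, hypothetical chargeback calculation is sketched below; the per-hour rates and usage records are made-up values, and a real metering module would collect this data automatically per domain and per user.

```python
# Hypothetical hourly rates per resource type (USD).
RATES = {"vcpu_hours": 0.02, "ram_gb_hours": 0.005, "storage_gb_hours": 0.0001}

# Example usage records gathered by the metering module, keyed by tenant.
USAGE = {
    "tenant-a": {"vcpu_hours": 1440, "ram_gb_hours": 5760, "storage_gb_hours": 72000},
    "tenant-b": {"vcpu_hours": 240, "ram_gb_hours": 960, "storage_gb_hours": 12000},
}

def chargeback(usage: dict) -> float:
    """Multiply each metered quantity by its rate and total the bill."""
    return sum(quantity * RATES[resource] for resource, quantity in usage.items())

if __name__ == "__main__":
    for tenant, records in USAGE.items():
        print(f"{tenant}: ${chargeback(records):.2f}")
```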
Orchestration
Cloud operations rely heavily on orchestration. Requests from the service management layer,
monitoring, and chargeback modules are converted to appropriate action items, then forwarded
to the provisioning and configuration module for final closure. In the process of orchestration,
the CMDB is updated.
Configuration Management Database (CMDB)
It is a central configuration repository that stores and updates all metadata and configuration for
various modules and resources in real-time. Third-party software and integration components can
then access the repository using standard protocols like SOAP. As requests are processed in the
cloud, all updates in the CMDB happen in real-time.
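The sketch below is a toy, in-memory stand-in for a CMDB that is updated as an orchestration step completes; the class and field names are assumptions for illustration, and a real CMDB would persist this metadata and expose it over standard protocols such as SOAP.

```python
from datetime import datetime, timezone

class ConfigurationManagementDB:
    """Toy in-memory CMDB: stores configuration items keyed by resource ID."""

    def __init__(self):
        self._items = {}

    def upsert(self, resource_id: str, attributes: dict) -> None:
        # Record the configuration change along with a timestamp.
        record = dict(attributes, updated_at=datetime.now(timezone.utc).isoformat())
        self._items[resource_id] = record

    def get(self, resource_id: str) -> dict:
        return self._items[resource_id]

if __name__ == "__main__":
    cmdb = ConfigurationManagementDB()
    # Orchestration finishes provisioning a VM and immediately updates the CMDB.
    cmdb.upsert("vm-0042", {"state": "running", "vcpus": 2, "ram_gb": 4, "host": "host-07"})
    print(cmdb.get("vm-0042"))
```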

Cloud computing is a model for delivering on-demand computing resources over the
internet. It allows users to access a shared pool of computing resources, such as
servers, storage, databases, and applications, without the need for local infrastructure
or maintenance. The anatomy of cloud computing can be understood by exploring its
key components:

1. **Clients**: Clients are the end-user devices or software applications that interact with
the cloud. They can be desktop computers, laptops, smartphones, or other connected
devices. Clients communicate with the cloud infrastructure through various protocols
and interfaces.

2. **Infrastructure**: The cloud infrastructure comprises the physical resources that
provide the foundation for cloud computing. It includes servers, storage devices,
networking equipment, and data centers. Infrastructure resources are owned and
managed by cloud service providers (CSPs) and are made available to users on a
pay-as-you-go basis.

3. **Virtualization**: Virtualization enables the creation of virtual instances of computing
resources, such as virtual servers, storage, and networks. It allows multiple users to
share the same physical infrastructure while maintaining isolation and security.
Virtualization technology enables efficient resource allocation and scalability in the
cloud.
4. **Services**: Cloud services refer to the various capabilities and functionalities
offered by cloud providers. The three primary service models in cloud computing are:

a. **Infrastructure as a Service (IaaS)**: IaaS provides virtualized infrastructure
resources, such as virtual machines (VMs), storage, and networks. Users have control
over the operating systems, applications, and configurations, while the cloud provider
manages the underlying infrastructure.

b. **Platform as a Service (PaaS)**: PaaS offers a complete development and
deployment platform for building and running applications. It includes an operating
system, development tools, programming languages, and runtime environments. Users
focus on application development, while the cloud provider manages the underlying
infrastructure and operating system.

c. **Software as a Service (SaaS)**: SaaS provides complete software applications
that are accessible over the internet. Users can access these applications through web
browsers or specialized client software. The cloud provider manages all aspects of the
infrastructure, including hardware, software, and data.

5. **Networking**: Networking plays a crucial role in cloud computing by connecting
clients to the cloud infrastructure. It enables data transfer, communication between
components, and access to cloud services. Networking in the cloud may involve virtual
private networks (VPNs), load balancers, firewalls, and content delivery networks
(CDNs) to optimize performance, security, and availability.

6. **Security**: Cloud security focuses on protecting data, applications, and
infrastructure in the cloud environment. It includes measures such as data encryption,
access controls, identity management, and threat detection and prevention. Cloud
service providers implement security measures to ensure the confidentiality, integrity,
and availability of user data.

7. **Management and Monitoring**: Cloud management involves tasks such as
provisioning and deprovisioning resources, scaling resources based on demand,
monitoring performance and availability, and optimizing resource utilization. Cloud
providers offer management tools and interfaces to allow users to control and monitor
their cloud resources effectively.

By understanding these components, one can grasp the anatomy of cloud computing
and how its various elements work together to provide flexible, scalable, and on-
demand computing capabilities.

https://www.niallkennedy.com/blog/2009/03/cloud-computing-stack.html

What is virtual infrastructure?

Virtual infrastructure is a collection of software-defined components that make up an enterprise
IT environment. A virtual infrastructure provides the same IT capabilities as physical resources,
but with software, so that IT teams can allocate these virtual resources quickly and across
multiple systems, based on the varying needs of the enterprise.

By decoupling physical hardware from an operating system, a virtual infrastructure can help
organizations achieve greater IT resource utilization, flexibility, scalability and cost savings.
These benefits are especially helpful to small businesses that require reliable infrastructure but
can’t afford to invest in costly physical hardware.


Benefits of virtual infrastructure


The benefits of virtualization touch every aspect of an IT infrastructure, from storage and server
systems to networking tools. Here are some key benefits of a virtual infrastructure:

 Cost savings: By consolidating servers, virtualization reduces capital and
operating costs associated with variables such as electrical power, physical
security, hosting and server development.
 Scalability: A virtual infrastructure allows organizations to react quickly to
changing customer demands and market trends by ramping up on CPU
utilization or scaling back accordingly.
 Increased productivity: Faster provisioning of applications and resources
allows IT teams to respond more quickly to employee demands for new
tools and technologies. The result: increased productivity, efficiency and
agility for IT teams, and an enhanced employee experience and increased
talent retention rates without hardware procurement delays.
 Simplified server management: From seasonal spikes in consumer
demand to unexpected economic downturns, organizations need to respond
quickly. Simplified server management makes sure IT teams can spin up, or
down, virtual machines when required and re-provision resources based on
real-time needs. Furthermore, many management consoles offer
dashboards, automated alerts and reports so that IT teams can respond
immediately to server performance issues.
Virtual infrastructure components
By separating physical hardware from operating systems, virtualization can provision compute,
memory, storage and networking resources across multiple virtual machines (VMs) for greater
application performance, increased cost savings and easier management. Despite variances in
design and functionality, a virtual infrastructure typically consists of these key components:
 Virtualized compute: This component offers the same capabilities as
physical servers, but with the ability to be more efficient. Through
virtualization, many operating systems and applications can run on a single
physical server, whereas in traditional infrastructure servers were often
underutilized. Virtual compute also makes newer technologies like cloud
computing and containers possible.
 Virtualized storage: This component frees organizations from the
constraints and limitations of hardware by combining pools of physical
storage capacity into a single, more manageable repository. By connecting
storage arrays to multiple servers using storage area networks,
organizations can bolster their storage resources and gain more flexibility
in provisioning them to virtual machines. Widely used storage solutions
include Fibre Channel SAN arrays, iSCSI SAN arrays, and NAS arrays.
 Virtualized networking and security: This component decouples
networking services from the underlying hardware and allows users to
access network resources from a centralized management system. Key
security features ensure a protected environment for virtual machines,
including restricted access, virtual machine isolation and user provisioning
measures.
 Management solution: This component provides a user-friendly console
for configuring, managing and provisioning virtualized IT infrastructure, as
well as automating processes. A management solution allows IT teams to
migrate virtual machines from one physical server to another without
delays or downtime, while enabling high availability for applications
running in virtual machines, disaster recovery and back-up administration.
Virtual infrastructure requirements
From design to disaster recovery, there are certain virtual infrastructure requirements
organizations must meet to reap long-term value from their investment.
 Plan ahead: When designing a virtual infrastructure, IT teams should
consider how business growth, market fluctuations and advancements in
technology might impact their hardware requirements and reliance on
compute, networking and storage resources.
 Look for ways to cut costs: IT infrastructure costs can become unwieldy
if IT teams don’t take the time to continuously examine a virtual
infrastructure and its deliverables. Cost-cutting initiatives may range from
replacing old servers and renegotiating vendor agreements to automating
time-consuming server management tasks.
 Prepare for failure: Despite its failover hardware and high availability,
even the most resilient virtual infrastructure can experience downtime. IT
teams should prepare for worst-case scenarios by taking advantage of
monitoring tools, purchasing extra hardware and relying on clusters to
better manage host resources.
Virtual infrastructure architecture
A virtual infrastructure architecture can help organizations transform and manage their IT system
infrastructure through virtualization. But it requires the right building blocks to deliver results.
These include:

 Host: A virtualization layer that manages resources and other services for
virtual machines. Virtual machines run on these individual hosts, which
continuously perform monitoring and management activities in the
background. Multiple hosts can be grouped together to work on the same
network and storage subsystems, culminating in combined computing and
memory resources to form a cluster. Machines can be dynamically added or
removed from a cluster.
 Hypervisor: A software layer that enables one host computer to
simultaneously support multiple virtual operating systems, also known as
virtual machines. By sharing the same physical computing resources, such
as memory, processing and storage, the hypervisor stretches available
resources and improves IT flexibility.
 Virtual machine: These software-defined computers encompass operating
systems, software programs and documents. Managed by a virtual
infrastructure, each virtual machine has its own operating system called a
guest operating system.
The key advantage of virtual machines is that IT teams can provision them
faster and more easily than physical machines without the need for
hardware procurement. Better yet, IT teams can easily deploy and suspend
a virtual machine, and control access privileges, for greater security. These
privileges are based on policies set by a system administrator.
 User interface: This front-end element means administrators can view and
manage virtual infrastructure components by connecting directly to the
server host or through a browser-based interface.

CPU virtualization in cloud computing is a technology that
allows multiple virtual machines to run on a single physical
server, each with its own operating system and applications.
It is a key component of cloud computing because it enables
efficient use of computing resources and the ability to
quickly scale capacity up or down as needed.

What Is CPU Virtualization?

CPU virtualization allows multiple virtual machines, each with its own
operating system and applications, to share a single physical server's
processor. Because the server's compute capacity is divided among the
virtual machines rather than dedicated to one workload, resources are
used efficiently and capacity can be scaled up or down quickly.

Benefits of CPU Virtualization

CPU virtualization offers a number of benefits for cloud computing. Because
multiple virtual machines, each with its own operating system and
applications, can run on a single physical server, computing resources are
used far more efficiently, and capacity can be scaled up or down quickly as
demand changes.
How CPU Virtualization Works
CPU virtualization works by allowing multiple virtual machines to run on a
single physical server. This is done by using a hypervisor, which is a software
layer that sits between the physical server and the virtual machines. The
hypervisor is responsible for managing the resources of the physical server
and allocating them to the virtual machines. This allows for the efficient use
of computing resources and the ability to quickly scale up or down as needed.
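
As a rough mental model of that resource allocation, the sketch below time-slices a fixed pool of physical cores across virtual machines in round-robin fashion. It is a deliberately simplified illustration, not how any particular hypervisor's scheduler actually works, and the VM names and core counts are invented.

```python
from itertools import cycle

# Hypothetical setup: 4 physical cores shared by 3 VMs that each want 2 vCPUs.
PHYSICAL_CORES = ["core0", "core1", "core2", "core3"]
VMS = {"vm-a": 2, "vm-b": 2, "vm-c": 2}

def schedule_timeslice(vms: dict, cores: list) -> dict:
    """Assign physical cores to vCPUs for one time slice, round-robin."""
    assignments = {vm: [] for vm in vms}
    core_cycle = cycle(cores)
    for vm, vcpus in vms.items():
        for _ in range(vcpus):
            # Over successive time slices the hypervisor rotates these mappings,
            # so 6 vCPUs can share 4 physical cores.
            assignments[vm].append(next(core_cycle))
    return assignments

if __name__ == "__main__":
    print(schedule_timeslice(VMS, PHYSICAL_CORES))
```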


Disadvantages of CPU Virtualization


CPU virtualization also has some disadvantages. It can be difficult to manage
multiple virtual machines on a single physical server, as the hypervisor must
be configured correctly in order for the virtual machines to run properly.
Additionally, CPU virtualization can be resource-intensive, as the hypervisor
must manage the resources of the physical server and allocate them to the
virtual machines. This can lead to increased costs for cloud computing.
Introduction to CPU Virtualization
CPU virtualization is a cloud-computing technology in which a single CPU acts as
multiple machines working together. Virtualization has existed since the 1960s and
became popular with hardware, or CPU, virtualization. It was introduced so that all
computing resources could be used efficiently and every operating system could run
easily on one machine. Virtualization mainly focuses on efficiency and performance,
saving time: hardware resources are used only when needed, and the underlying layer
processes the instructions that make the virtual machines work.

What is CPU Virtualization?


CPU virtualization emphasizes running programs and instructions through a virtual
machine, giving the feeling of working on a physical workstation. The operations are
handled by an emulator that controls how the software runs; nevertheless, CPU
virtualization itself does not act as an emulator. The emulator behaves the same way a
normal computer does: it replicates the same data and generates the same output as a
physical machine. This emulation offers great portability and lets a single platform act
as if it were multiple platforms.

With CPU virtualization, all the virtual machines act as physical machines and share the
host's resources as if each had its own virtual processors. Physical resources are shared
out to each virtual machine as the hosting services receive requests. Finally, each
virtual machine gets a share of the single CPU allocated to it, so a single processor
acts like a dual processor.

Types of CPU Virtualization


The various types of CPU virtualization available are as follows:

1. Software-Based CPU Virtualization

With software-based CPU virtualization, the guest application code executes directly
on the processor, while privileged code is translated first and the translated code then
executes on the processor. This translation is known as Binary Translation (BT). The
translated code is larger and slower to execute than the original. Guest programs made
up mostly of unprivileged application code therefore run smoothly and fast, whereas
programs with a significant privileged-code component, such as system calls, run more
slowly in the virtual environment.
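
The toy sketch below illustrates the binary-translation idea at a conceptual level only: "instructions" are plain strings, unprivileged ones pass through untouched, and privileged ones are rewritten to call an emulated handler. Real binary translators work on actual machine code, so treat the instruction names and the PRIVILEGED set as invented examples.

```python
# Conceptual illustration of binary translation: privileged "instructions"
# are rewritten so they trap into the hypervisor's emulation instead of
# touching real hardware state. Instruction names are made up.

PRIVILEGED = {"CLI", "HLT", "OUT", "WRMSR"}

def translate(guest_code):
    """Return a translated instruction stream safe to run on the host."""
    translated = []
    for instruction in guest_code:
        opcode = instruction.split()[0]
        if opcode in PRIVILEGED:
            # Replace the privileged instruction with a call into the emulator.
            translated.append(f"CALL hypervisor_emulate('{instruction}')")
        else:
            # Unprivileged code runs directly, unchanged.
            translated.append(instruction)
    return translated

if __name__ == "__main__":
    guest = ["MOV AX, 1", "ADD AX, 2", "OUT 0x3F8, AX", "HLT"]
    print("\n".join(translate(guest)))
```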

2. Hardware-Assisted CPU Virtualization



Certain processors provide hardware assistance for CPU virtualization. With hardware
assistance, the guest uses a separate mode of execution called guest mode, and the
guest code runs mainly in guest mode. The advantage of hardware-assisted CPU
virtualization is that no translation is required, so system calls run faster than with
binary translation. However, workloads that require frequent page-table updates may
exit from guest mode to root mode, which slows down the program's performance and
efficiency.

3. Virtualization and Processor-Specific Behavior

Even though the CPU is virtualized, the virtual machine still detects the specific
processor model on which the system runs. Processor models differ in the features they
offer, and applications running in the virtual machine may make use of those features.
In such cases, vMotion cannot be used to migrate virtual machines running on
feature-rich processors to hosts that lack those features; Enhanced vMotion
Compatibility (EVC) handles this limitation.

4. Performance Implications of CPU Virtualization


CPU virtualization adds varying amounts of overhead depending on the workload and
the type of virtualization used. An application that is CPU-bound spends most of its
time executing instructions; for such applications, the virtualization overhead includes
the extra instructions that must be executed first. This overhead consumes processing
time that the application itself could otherwise use, and it can result in an overall
degradation of performance.

Why is CPU Virtualization Important?


CPU virtualization is important in many ways, and it is widely used in the cloud
computing industry. The main advantages of using CPU virtualization are stated below:

 Using CPU virtualization, overall performance and efficiency improve
considerably, because the virtual machines share a single CPU and its resources
as if multiple processors were working at the same time. This saves cost.

 Because CPU virtualization runs separate operating systems in virtual machines
on a single shared system, it also helps maintain security. The machines are kept
isolated from one another, so a cyber attack or software glitch on one machine
cannot damage or affect another.

 It works purely with virtual machines and shared hardware resources: a single
server holds the computing resources, and processing is carried out according to
the CPU's instructions, which are shared among all the systems involved. Because
less hardware is required and fewer physical machines are used, cost and time
are saved.

 It provides good backup of computing resources, since data is stored in and
shared from a single system. This gives users who depend on a single system
greater reliability and more options for retrieving their data.

 It also offers fast deployment options, so that services reach the client without
hassle while maintaining atomicity. Virtualization ensures the desired data
reaches the desired clients, checks whether any constraints exist, and removes
them quickly.

Storage Virtualization
Traditionally, there has been a strong link between a physical host and its locally installed
storage devices. However, that paradigm has been changing drastically, and local storage is
almost no longer needed. As technology progresses, more advanced storage devices are coming
to the market that provide more functionality and make local storage obsolete.

Storage virtualization is a major component of storage servers, in the form of functional
RAID levels and controllers. Operating systems and applications can access the disks directly
and write to them by themselves. The controllers configure the local storage in RAID groups and
present the storage to the operating system depending upon the configuration. However, the
storage is abstracted, and the controller determines how to write the data or retrieve the
requested data for the operating system.

Storage virtualization is becoming more and more important in various other forms:

File servers: The operating system writes the data to a remote location with no need to
understand how to write to the physical media.

WAN Accelerators: Instead of sending multiple copies of the same data over the WAN
environment, WAN accelerators will cache the data locally and present the re-requested
blocks at LAN speed, while not impacting the WAN performance.

SAN and NAS: Storage is presented to the operating system over the Ethernet network.
NAS presents the storage as file operations (like NFS). SAN technologies present the
storage as block-level storage (like Fibre Channel). SAN technologies receive the
operating instructions just as if the storage were a locally attached device.

Storage Tiering: Using the storage pool concept as a stepping stone, storage tiering
analyzes the most commonly used data and places it on the highest-performing storage
pool, while the least-used data is placed on the weakest-performing storage pool.

This operation is done automatically without any interruption of service to the data
consumer.
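
To illustrate the tiering decision, here is a minimal Python sketch that promotes the most frequently accessed blocks to a fast pool and demotes the rest; the access counts, pool names, and capacity are assumptions made up for the example, and production tiering engines use far richer heuristics.

```python
# Hypothetical access counts per data block, gathered over some monitoring window.
ACCESS_COUNTS = {"blk-01": 950, "blk-02": 12, "blk-03": 430, "blk-04": 3}

FAST_POOL_CAPACITY = 2  # the fast (e.g., SSD-backed) pool can hold two blocks here

def tier_blocks(access_counts: dict, fast_capacity: int) -> dict:
    """Place the hottest blocks on the fast pool, everything else on the slow pool."""
    ranked = sorted(access_counts, key=access_counts.get, reverse=True)
    return {
        block: ("fast-pool" if rank < fast_capacity else "slow-pool")
        for rank, block in enumerate(ranked)
    }

if __name__ == "__main__":
    print(tier_blocks(ACCESS_COUNTS, FAST_POOL_CAPACITY))
    # blk-01 and blk-03 land on the fast pool; blk-02 and blk-04 on the slow pool
```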

Advantages of Storage Virtualization


1. Data is stored in more convenient locations away from the specific host. In the case
of a host failure, the data is not necessarily compromised.
2. The storage devices can perform advanced functions like replication, deduplication, and
disaster recovery functionality.
3. By doing abstraction of the storage level, IT operations become more flexible in how
storage is provided, partitioned, and protected.
storage virtualization

By

 Rich Castagna
 Rodney Brown, TechTarget

What is storage virtualization?


Storage virtualization is the pooling of physical storage from multiple storage
devices into what appears to be a single storage device -- or pool of available
storage capacity. A central console manages the storage.

The technology relies on software to identify available storage capacity from
physical devices and to then aggregate that capacity as a pool of storage that
can be used by traditional architecture servers or in a virtual environment by
virtual machines (VMs).

The virtual storage software intercepts input/output (I/O) requests from
physical or virtual machines and sends those requests to the appropriate
physical location of the storage devices that are part of the overall pool of
storage in the virtualized environment. To the user, the various storage
resources that make up the pool are unseen, so the virtual storage appears
like a single physical drive, share or logical unit number (LUN) that can accept
standard reads and writes.
A basic form of storage virtualization is represented by a software
virtualization layer between the hardware of a storage resource and a host -- a
PC, a server or any device accessing the storage -- that makes it possible for
operating systems (OSes) and applications to access and use the storage.

Even a redundant array of independent disks (RAID) can sometimes
be considered a type of storage virtualization. Multiple physical drives in the
array are presented to the user as a single storage device that, in the
background, stripes and replicates data to multiple disks to improve I/O
performance and to protect data in case a single drive fails.

Types of storage virtualization: Block vs. file


There are two basic methods of virtualizing storage: file-based or block-based.
File-based storage virtualization is a specific use, applied to network-attached
storage (NAS) systems. Using Server Message Block in Windows server
environments or Network File System protocols for Linux systems, file-based
storage virtualization breaks the dependency in a normal NAS array between
the data being accessed and the location of physical memory.

The pooling of NAS resources makes it easier to handle file migrations in the
background, which will help improve performance. Typically, NAS systems are
not that complex to manage, but storage virtualization greatly simplifies the
task of managing multiple NAS devices through a single management
console.

Block-based or block access storage -- storage resources typically accessed
via a Fibre Channel (FC) or Internet Small Computer System Interface (iSCSI)
storage area network (SAN) -- is more frequently virtualized than file-based
storage systems.
storage systems. Block-based systems abstract the logical storage, such as a
drive partition, from the actual physical memory blocks in a storage device,
such as a hard disk drive (HDD) or solid-state memory device. Because it
operates in a similar fashion to the native drive software, there's less
overhead for read and write processes, so block storage systems will perform
better than file-based systems.

The block-based operation enables the virtualization management software to
collect the capacity of the available blocks of storage space across all
virtualized arrays. It pools them into a shared resource to be assigned to any
number of VMs, bare-metal servers or containers. Storage virtualization is
particularly beneficial for block storage.

Unlike NAS systems, managing SANs can be a time-consuming process.


Consolidating a number of block storage systems under a single management
interface that often shields users from the tedious steps of LUN configuration,
for example, can be a significant timesaver.

An early version of block-based virtualization was IBM's SAN Volume
Controller, now called IBM Spectrum Virtualize. The software runs on an
appliance or storage array and creates a single pool of storage by virtualizing
LUNs attached to servers connected to storage controllers. Spectrum
Virtualize also enables customers to tier block data to public cloud storage.

Another early storage virtualization product was Hitachi Data Systems'
TagmaStore Universal Storage Platform, now known as Hitachi Virtual
Storage Platform. Hitachi's array-based storage virtualization enables
customers to create a single pool of storage across separate arrays, even
those from other leading storage vendors.

How storage virtualization works


To provide access to the data stored on the physical storage devices, the
virtualization software needs to either create a map using metadata or, in
some cases, use an algorithm to dynamically locate the data on the fly. The
virtualization software then intercepts read and write requests from
applications. Using the map it has created, it can find or save the data to the
appropriate physical device. This process is similar to the method used by PC
OSes when retrieving or saving application data.
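
Below is a minimal Python sketch of that metadata map: a virtual volume's blocks are mapped to (device, physical block) locations, and reads are redirected accordingly. The device names, block contents, and mapping layout are assumptions for illustration, not any vendor's actual on-disk format.

```python
# Toy metadata map for one virtual volume: virtual block -> (physical device, physical block).
BLOCK_MAP = {
    0: ("array-A", 1024),
    1: ("array-A", 1025),
    2: ("array-B", 77),   # the pool spans more than one physical device
    3: ("array-B", 78),
}

PHYSICAL_DEVICES = {
    "array-A": {1024: b"hello ", 1025: b"world "},
    "array-B": {77: b"from a ", 78: b"virtual LUN"},
}

def read_virtual_block(vblock: int) -> bytes:
    """Intercept a read to the virtual LUN and fetch it from the right physical device."""
    device, pblock = BLOCK_MAP[vblock]
    return PHYSICAL_DEVICES[device][pblock]

if __name__ == "__main__":
    data = b"".join(read_virtual_block(b) for b in range(4))
    print(data.decode())  # hello world from a virtual LUN
```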

Storage virtualization disguises the actual complexity of a storage system,
such as a SAN, which helps a storage administrator perform the tasks of
backup, archiving and recovery more easily and in less time.
Network virtualization represents the administration and monitoring of an entire computer
network as a single administrative entity from a single software-based administrator's console.
Network virtualization can include storage virtualization, which involves managing all storage
as a single resource. Network virtualization is intended to optimize data transfer rates,
flexibility, scalability, reliability, and security. It automates many network management
functions, which disguises a network's true complexity. All network servers and services are
treated as one pool of resources that can be used independently of the physical elements.
Virtualization can be defined as making a computer that runs within another computer. The
virtual computer, or guest device, is a fully functional computer that can manage the same
processes your physical device can. The processes performed by the guest device are separated
from the basic processes of your host device. You can run several guest devices on your host
device and each one will identify the others as an independent computer.

Advantages of Network Virtualization


The advantages of network virtualization are as follows −
 Lower hardware costs − With network virtualization, overall hardware costs are
reduced while bandwidth is used more efficiently.
 Dynamic network control − Network virtualization provides centralized control
over network resources and allows for dynamic provisioning and reconfiguration.
Compute resources and applications can connect precisely with virtual network
resources, which also enables better application support and resource utilization.
 Rapid scalability − Network virtualization provides the ability to scale the
network up or down rapidly and to create new networks on demand. This is
valuable as enterprises move their IT resources to the cloud and shift their model
to 'as a service'.

Overview
Network virtualization is the transformation of a network that was once hardware-
dependent into a network that is software-based. Like all forms of IT virtualization, the
basic goal of network virtualization is to introduce a layer of abstraction between
physical hardware and the applications and services that use that hardware.

More specifically, network virtualization allows network functions, hardware resources,
and software resources to be delivered independent of hardware—as a virtual network.
It can be used to consolidate many physical networks, subdivide one such network, or
connect virtual machines (VMs) together.

With network virtualization, digital service providers can optimize their server resources
(i.e. fewer idle servers), allow them to use standard servers for functions that once
required expensive proprietary hardware, and generally improve the speed, flexibility,
and reliability of their networks.


External network virtualization vs internal network virtualization


There are two kinds of network virtualization: external virtualization and internal
virtualization. External network virtualization can combine systems physically attached
to the same local area network (LAN) into separate virtual local area networks (VLANs),
or conversely divide separate LANs into the same VLAN. This allows service providers
to improve a large network’s efficiency.
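
As a toy illustration of external network virtualization, the sketch below groups hosts that share one physical LAN into separate VLANs and checks whether two hosts can talk at layer 2. The host names and VLAN IDs are invented for the example; real switches enforce this through their port and trunk configuration.

```python
# Hypothetical VLAN assignments for hosts plugged into the same physical switch.
VLAN_MEMBERSHIP = {
    "host-finance-1": 10,
    "host-finance-2": 10,
    "host-dev-1": 20,
    "host-dev-2": 20,
}

def same_broadcast_domain(host_a: str, host_b: str) -> bool:
    """Hosts communicate directly at layer 2 only if they share a VLAN."""
    return VLAN_MEMBERSHIP[host_a] == VLAN_MEMBERSHIP[host_b]

if __name__ == "__main__":
    print(same_broadcast_domain("host-finance-1", "host-finance-2"))  # True
    print(same_broadcast_domain("host-finance-1", "host-dev-1"))      # False: needs routing
```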

Unlike external network virtualization—which acts on systems outside of a single server—
internal network virtualization acts within one server to emulate a physical network. This
is typically done to improve a server's efficiency, and involves configuring a server
with software containers. With containers, individual applications can be isolated or
different operating systems can be run on the same server.

Why care about network virtualization?


Network virtualization abstracts all IT physical infrastructure elements (compute,
network, and storage) away from proprietary hardware, pooling them together. From
this pool, resources can be deployed automatically where they are needed most as
demands and business needs change. This is especially relevant in
the telecommunications industry, where traditional providers are challenged
with transforming their networks and operations to keep up with technological
innovation.

Whether it’s virtual reality in remote surgery or smart grids allowing ambulances to
safely speed through traffic lights, new advancements offer the promise of radically
improved and optimized experiences. But the traditionally hardware-dependent
networks of many service providers must be transformed to accommodate this
innovation. Network virtualization offers service providers the agility and scalability they
need to keep up.
Just as hyperscale public cloud providers have demonstrated how cloud-
native architectures and open source development can accelerate service delivery,
deployment, and iteration, telecommunication service providers can take this same
approach to operate with greater agility, flexibility, resilience, and security. They can
manage infrastructure complexity through automation and a common horizontal
platform. They can also meet the higher consumer and enterprise expectations of
performance, safety, ubiquity, and user experience. With cloud-native architectures and
automation, providers can more rapidly change and add services and features to better
respond to customer needs and demands.

Benefits of network virtualization


Most digital service providers are already committed to network functions virtualization
(NFV). NFV is a way to virtualize network services—such as routers, firewalls, virtual
private networks (VPNs), and load balancers—that have traditionally been run on
proprietary hardware. With an NFV strategy, these services are instead packaged as
VMs or containers on commodity hardware, which allows service providers to run their
network on less expensive, standard servers.

With these services virtualized, providers can distribute network functions across
different servers or move them around as needed when demand changes. This
flexibility helps improve the speed of network provisioning, service updates, and
application delivery, without requiring additional hardware resources. The segmentation
of workloads into VMs or containers can also boost network security.

This approach:

 Uses less (and less expensive) hardware.

 Increases flexibility and workload portability.

 Provides the ability to spin workloads up and down with minimal effort.

 Allows network resources to be scaled elastically to address changing demands.

https://www.techtarget.com/searchnetworking/What-is-network-virtualization-Everything-
you-need-to-know
