
Virtualization (or virtualisation), in computing, is the creation of a virtual (rather than actual) version of something, such as a hardware platform, operating system, storage device, or network resources.[1]

Virtualization can be viewed as part of an overall trend in enterprise IT that includes autonomic computing, a scenario in which the IT environment is able to manage itself based on perceived activity, and utility computing, in which computer processing power is treated as a utility that clients pay for only as needed. The usual goal of virtualization is to centralize administrative tasks while improving scalability and overall hardware-resource utilization. With virtualization, several operating systems (OSs) can run in parallel on a single CPU. This parallelism tends to reduce overhead costs and differs from multitasking, which involves running several programs on the same OS.
Contents

1 Types of virtualization
  1.1 Hardware
  1.2 Desktop
  1.3 Software
  1.4 Memory
  1.5 Storage
  1.6 Data
  1.7 Network
  1.8 Challenges
2 See also
3 References
4 External links

Types of virtualization

Hardware

Main article: Hardware virtualization

Hardware virtualization or platform virtualization refers to the creation of a virtual machine that acts like a real computer with an operating system. Software executed on these virtual machines is separated from the underlying hardware resources. For example, a computer running Microsoft Windows may host a virtual machine that looks like a computer with the Ubuntu Linux operating system; Ubuntu-based software can be run on the virtual machine.[1][2]

In hardware virtualization, the host machine is the actual machine on which the virtualization takes place, and the guest machine is the virtual machine. The words host and guest are used to distinguish the software that runs on the actual machine from the software that runs on the virtual machine. The software or firmware that creates a virtual machine on the host hardware is called a hypervisor or virtual machine monitor.

Different types of hardware virtualization include:

1. Full virtualization: Almost complete simulation of the actual hardware, allowing software, which typically consists of a guest operating system, to run unmodified.
2. Partial virtualization: Some but not all of the target environment is simulated. Some guest programs may therefore need modifications to run in this virtual environment.
3. Paravirtualization: The hardware environment is not simulated; instead, guest programs execute in their own isolated domains, as if they were running on separate systems. Guest programs need to be specifically modified to run in this environment.

Hardware-assisted virtualization is a way of improving the efficiency of hardware virtualization. It employs specially designed CPUs and hardware components that help improve the performance of a guest environment.

Hardware virtualization is not the same as hardware emulation: in hardware emulation, a piece of hardware imitates another, while in hardware virtualization, a hypervisor (a piece of software) imitates a particular piece of computer hardware or the whole computer. Furthermore, a hypervisor is not the same as an emulator; both are computer programs that imitate hardware, but they are used in different contexts.

See also: Mobile virtualization

Desktop

Main article: Desktop virtualization

Desktop virtualization is the concept of separating the logical desktop from the physical machine.
One form of desktop virtualization, virtual desktop infrastructure (VDI), can be thought of as a more advanced form of hardware virtualization: instead of interacting with a host computer directly via a keyboard, mouse, and monitor connected to it, the user interacts with the host computer over a network connection (such as a LAN, wireless LAN, or even the Internet) using another desktop computer or a mobile device. In addition, the host computer in this scenario becomes a server computer capable of hosting multiple virtual machines at the same time for multiple users.[3]

As organizations continue to virtualize and converge their data center environments, client architectures continue to evolve to take advantage of the predictability, continuity, and quality of service delivered by their converged infrastructure. For example, companies like HP and IBM provide a hybrid VDI model with a range of virtualization software and delivery models to improve upon the limitations of distributed client computing.[4] Selected client environments move workloads from PCs and other devices to data center servers, creating well-managed virtual clients, with applications and client operating environments hosted on servers and storage in the data center. For users, this means they can access their desktop from any location, without being tied to a single client device. Since the resources are centralized, users moving between work locations can still access the same client environment with their applications and data.[5] For IT administrators, this means a more centralized, efficient client environment that is easier to maintain and able to respond more quickly to the changing needs of the user and business.[6][7]

Another form, session virtualization, allows multiple users to connect and log into a shared but powerful computer over the network and use it simultaneously. Each is given a desktop and a personal folder in which they store their files.[3] With multiseat configuration, session virtualization can be accomplished using a single PC with multiple monitors, keyboards, and mice connected.

Thin clients, which are seen in desktop virtualization, are simple and/or cheap computers that are primarily designed to connect to the network; they may lack significant hard disk storage space, RAM, or even processing power.

Desktop virtualization allows a company to stay more flexible in an ever-changing market. Virtual desktops allow development to be implemented more quickly and expertly, and proper testing can be done without disturbing the end user. Moving the desktop environment to the cloud can also reduce single points of failure if a third party controls security and infrastructure.[8]

Software

Operating system-level virtualization, hosting of multiple virtualized environments within a single OS instance.

Application virtualization and workspace virtualization, the hosting of individual applications in an environment separated from the underlying OS. Application virtualization is closely associated with the concept of portable applications.

Service virtualization, emulating the behavior of dependent (e.g., third-party, evolving, or not implemented) system components that are needed to exercise an application under test (AUT) for development or testing purposes. Rather than virtualizing entire components, it virtualizes only specific slices of dependent behavior critical to the execution of development and testing tasks.
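Service virtualization as described above can be illustrated with a minimal sketch. Here a hypothetical payment gateway (all names are illustrative, not from any real library) is replaced by a virtual service that emulates only the slice of behavior the tests need:

```python
# A minimal sketch of service virtualization: the application under
# test depends on an external component (a hypothetical payment
# gateway), which is replaced by a virtual service emulating only
# the responses the tests exercise.

class VirtualPaymentGateway:
    """Stands in for the real gateway during development and testing."""
    def charge(self, account, amount):
        # Emulate just two behaviors: rejection of invalid amounts
        # and approval of everything else.
        if amount <= 0:
            return {"status": "rejected", "reason": "invalid amount"}
        return {"status": "approved", "account": account, "amount": amount}

def checkout(gateway, account, amount):
    # Application-under-test code path; it cannot tell the virtual
    # service from the real one.
    result = gateway.charge(account, amount)
    return result["status"] == "approved"

print(checkout(VirtualPaymentGateway(), "acct-1", 25.0))  # True
print(checkout(VirtualPaymentGateway(), "acct-1", -5.0))  # False
```

The application code needs no changes: the virtual component satisfies the same interface as the dependency it replaces.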

Memory

Memory virtualization, aggregating RAM resources from networked systems into a single memory pool.

Virtual memory, giving an application program the impression that it has contiguous working memory, isolating it from the underlying physical memory implementation.
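The virtual-memory idea above can be sketched with a toy page table: the program sees a contiguous virtual address space while its pages live at arbitrary physical frames. Sizes and mappings here are deliberately tiny and illustrative:

```python
# Toy page-table translation: virtual addresses are contiguous from
# the program's point of view, but the backing physical frames are
# neither contiguous nor in order.

PAGE_SIZE = 256

# Maps virtual page number -> physical frame number (illustrative).
page_table = {0: 7, 1: 3, 2: 9}

def translate(virtual_addr):
    page, offset = divmod(virtual_addr, PAGE_SIZE)
    if page not in page_table:
        # An unmapped page would fault to the OS in a real system.
        raise MemoryError(f"page fault at virtual address {virtual_addr}")
    return page_table[page] * PAGE_SIZE + offset

print(translate(0))    # 1792 (page 0 -> frame 7)
print(translate(300))  # 812  (page 1 -> frame 3, offset 44)
```

Real hardware performs this lookup in the memory-management unit, with the OS filling the page table, but the mapping step is the same.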

Storage

Storage virtualization, the process of completely abstracting logical storage from physical storage.

Distributed file system

Storage hypervisor
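A minimal sketch of the abstraction: several physical "devices" are presented as one contiguous logical volume, with the virtualization layer mapping logical block numbers to (device, block) pairs. The class and numbers are illustrative:

```python
# Sketch of storage virtualization: logical blocks 0..N-1 are spread
# across several physical devices; callers see one address space.

class StoragePool:
    def __init__(self, device_sizes):
        # device_sizes: number of blocks on each physical device.
        self.device_sizes = device_sizes
        self.total_blocks = sum(device_sizes)

    def locate(self, logical_block):
        # Translate a logical block into (device index, device block).
        if not 0 <= logical_block < self.total_blocks:
            raise IndexError("logical block out of range")
        for device, size in enumerate(self.device_sizes):
            if logical_block < size:
                return device, logical_block
            logical_block -= size

pool = StoragePool([100, 50, 200])   # three devices, 350 blocks total
print(pool.total_blocks)   # 350
print(pool.locate(120))    # (1, 20)
print(pool.locate(160))    # (2, 10)
```

Production systems add redundancy, caching, and online remapping, but the core job is this translation, managed from one console.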

Data

Data virtualization, the presentation of data as an abstract layer, independent of underlying database systems, structures, and storage.

Database virtualization, the decoupling of the database layer, which lies between the storage and application layers within the application stack.
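The abstract-layer idea can be sketched as follows: callers query one uniform view while the data actually lives in two differently shaped sources (both hypothetical), normalized on the fly rather than copied into a new store:

```python
# Sketch of data virtualization: two back-end sources with different
# schemas are exposed as one uniform view, without materializing a
# combined copy of the data.

crm_rows = [{"cust": "Ada", "city": "London"}]   # source A's schema
legacy_rows = [("Grace", "Arlington")]           # source B's schema

def customers():
    # The abstraction layer translates each source's shape into the
    # common view as rows are requested.
    for row in crm_rows:
        yield {"name": row["cust"], "city": row["city"]}
    for name, city in legacy_rows:
        yield {"name": name, "city": city}

print(list(customers()))
# [{'name': 'Ada', 'city': 'London'}, {'name': 'Grace', 'city': 'Arlington'}]
```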

Network

Network virtualization, creation of a virtualized network addressing space within or across network subnets.

Challenges

An often overlooked issue with virtualization is the complexity of licensing. For example, a server running a Linux OS that hosts a virtualized Windows Server must still satisfy the guest's licensing requirements. The benefits of on-demand virtualization and its flexibility are therefore hampered by closed-source, proprietary systems. Some vendors of proprietary software have attempted to update their licensing schemes to address virtualization, but flexibility and license cost remain opposing requirements.

Virtualization is the creation of a virtual (rather than actual) version of something, such as an operating system, a server, a storage device, or network resources. You probably know a little about virtualization if you have ever divided your hard drive into different partitions: a partition is the logical division of a hard disk drive that creates, in effect, two separate hard drives. Operating system virtualization is the use of software to allow a piece of hardware to run multiple operating system images at the same time. The technology got its start on mainframes decades ago, allowing administrators to avoid wasting expensive processing power. In 2005, virtualization software was adopted faster than anyone, including the experts, imagined. There are three areas of IT where virtualization is making inroads: network virtualization, storage virtualization, and server virtualization.

Network virtualization is a method of combining the available resources in a network by splitting up the available bandwidth into channels, each of which is independent from the others, and each of which can be assigned (or reassigned) to a particular server or device in real time. The idea is that virtualization disguises the true complexity of the network by separating it into manageable parts, much like your partitioned hard drive makes it easier to manage your files.
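The channel model described above can be sketched as follows; the class, names, and bandwidth figures are illustrative, not any real API:

```python
# Sketch of network virtualization as channel assignment: one link's
# bandwidth is split into equal, independent channels that can be
# assigned or reassigned to servers at run time.

class VirtualNetwork:
    def __init__(self, total_mbps, channels):
        self.channel_mbps = total_mbps // channels
        self.assignment = {}          # channel id -> server name

    def assign(self, channel, server):
        # Reassigning an already-assigned channel just moves it.
        self.assignment[channel] = server

    def bandwidth_of(self, server):
        # A server's share is the combined capacity of its channels.
        owned = [c for c, s in self.assignment.items() if s == server]
        return len(owned) * self.channel_mbps

net = VirtualNetwork(total_mbps=1000, channels=10)  # 100 Mbit/s each
net.assign(0, "web-1")
net.assign(1, "web-1")
net.assign(2, "db-1")
print(net.bandwidth_of("web-1"))  # 200
net.assign(1, "db-1")             # reassigned in "real time"
print(net.bandwidth_of("db-1"))   # 200
```

Each server sees only its own channels, hiding the shared link behind manageable parts, much as the partition analogy suggests.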

Storage virtualization is the pooling of physical storage from multiple network storage devices into what appears to be a single storage device that is managed from a central console. Storage virtualization is commonly used in storage area networks (SANs).

Server virtualization is the masking of server resources (including the number and identity of individual physical servers, processors, and operating systems) from server users. The intention is to spare the user from having to understand and manage complicated details of server resources while increasing resource sharing and utilization and maintaining the capacity to expand later.
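The masking that server virtualization performs rests on the hypervisor's trap-and-emulate mechanism: ordinary guest instructions run directly, while privileged ones trap to the hypervisor, which applies them to per-guest virtual state. The toy model below is purely illustrative; real hypervisors operate on CPU instructions, not strings:

```python
# Toy trap-and-emulate model: each guest gets its own virtual view of
# the "hardware"; privileged operations never touch shared state.

class Hypervisor:
    def __init__(self):
        self.guest_state = {}   # guest id -> its virtual hardware state

    def run(self, guest_id, instructions):
        state = self.guest_state.setdefault(guest_id, {"interrupts": "on"})
        results = []
        for op in instructions:
            if op.startswith("priv:"):
                # Privileged instruction: trap into the hypervisor,
                # which emulates it against this guest's state only.
                key, _, value = op[len("priv:"):].partition("=")
                state[key] = value
                results.append(f"trapped {key}")
            else:
                # Unprivileged instruction: runs directly.
                results.append(f"ran {op}")
        return results

hv = Hypervisor()
out = hv.run("guest-a", ["add", "priv:interrupts=off", "load"])
print(out)                        # ['ran add', 'trapped interrupts', 'ran load']
print(hv.guest_state["guest-a"])  # {'interrupts': 'off'}
```

Because every privileged effect lands in the guest's own state, each guest believes it owns the machine, which is exactly the masking described above.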
