
G00251107

Hype Cycle for Server Technologies, 2013


Published: 31 July 2013

Analyst(s): George J. Weiss, Mike Chuba

This Hype Cycle report evaluates 45 server technologies in terms of their
business impact, adoption rate and maturity level to help users decide
where and when to invest.

Table of Contents

Analysis.................................................................................................................................................. 3
What You Need to Know.................................................................................................................. 3
The Hype Cycle................................................................................................................................ 4
The Priority Matrix.............................................................................................................................8
Off the Hype Cycle........................................................................................................................... 9
On the Rise.................................................................................................................................... 10
Power Adaptive Algorithms.......................................................................................................10
Digital Signal Processor Acceleration........................................................................................ 11
Quantum Computing................................................................................................................ 12
Optical System Buses...............................................................................................................15
Instruction Set Virtualization...................................................................................................... 16
Fabric-Based Computing..........................................................................................................18
Server Power Capping..............................................................................................................20
Open Compute.........................................................................................................................21
At the Peak.....................................................................................................................................23
Appliances................................................................................................................................23
High Temperature Servers........................................................................................................ 24
Servers Using Flash Memory as Additional Memory Type......................................................... 25
Extreme Low-Energy Servers....................................................................................................27
Heterogeneous Architectures................................................................................................... 30
Virtual Machine Resilience.........................................................................................................31
Server Digital Power Module Management............................................................................... 33
Cloud-Based Grid Computing.................................................................................................. 35
Processor Emulation.................................................................................................................37
Sliding Into the Trough....................................................................................................................38
V2P Server Management.......................................................................................................... 38
Fabric-Based Infrastructure...................................................................................................... 39
High-Density Racks (>100 kW)................................................................................................. 42
Server Repurposing.................................................................................................................. 43
VM Energy Management Tools................................................................................................. 45
Advanced Server Energy Monitoring Tools................................................................................46
Linux on RISC.......................................................................................................................... 47
Server Provisioning and Configuration Management................................................................. 49
Multinode Servers.....................................................................................................................51
Data Center Container Solutions...............................................................................................53
Climbing the Slope......................................................................................................................... 55
HPC-Optimized Server Designs................................................................................................ 55
FPGA Application Acceleration................................................................................................. 56
Graphics Card Application Acceleration.................................................................................... 58
Capacity on Demand (Unix)...................................................................................................... 60
P2V Server Management.......................................................................................................... 62
Server Virtual I/O.......................................................................................................................63
Offload Engines........................................................................................................................ 64
Shared OS Virtualization (Nonmainframe)..................................................................................65
x86 Servers With Eight Sockets (and Above)............................................................................ 67
Linux on Four- to 16-Socket Servers........................................................................................ 69
Liquid Cooling.......................................................................................................................... 71
Entering the Plateau....................................................................................................................... 72
Capacity on Demand (Mainframe)............................................................................................. 72
Mission-Critical Workloads on Linux......................................................................................... 74
Blade Servers........................................................................................................................... 77
Linux on System z.................................................................................................................... 79
VM Hypervisor.......................................................................................................................... 81
Grid Computing Without Using Public Cloud Computers.......................................................... 83
Mainframe Specialty Engines.................................................................................................... 84
Appendixes.................................................................................................................................... 86
Hype Cycle Phases, Benefit Ratings and Maturity Levels.......................................................... 88
Recommended Reading.......................................................................................................................92


List of Tables

Table 1. Hype Cycle Phases.................................................................................................................89


Table 2. Benefit Ratings........................................................................................................................90
Table 3. Maturity Levels........................................................................................................................91

List of Figures

Figure 1. Hype Cycle for Server Technologies, 2013...............................................................................7


Figure 2. Priority Matrix for Server Technologies, 2013........................................................................... 9
Figure 3. Hype Cycle for Server Technologies, 2012........................................................... 87

Analysis
What You Need to Know
This Hype Cycle provides a consolidated view of 45 server technologies and their relative positions
on a single Hype Cycle. Each year, Gartner may add new technologies and retire technologies that
achieve maturity, become obsolete or are no longer relevant. Technologies that have fallen off the
Hype Cycle because of high maturity and widespread adoption may still be discussed in an IT
Market Clock report, which covers the full life cycle to the end-of-life stage of a particular
technology, and offers investment, divestment and other asset management advice.

Server terminology needs revision as data centers evolve from silos and isolated hosts to
fabric infrastructures, integrated and converged infrastructures, cloud (private, public and hybrid)
and virtualization. The network node represents a more modern concept of the server. A node is
a resource situated on a network, at some location and in some form factor, that is available to
applications or workloads for execution. The resource can comprise a physical array of processor
cores, memory, input/output (I/O) interfaces, graphic and numeric processing elements, encryption
processing and storage. Its form factor can be a blade, rack, frame, multinode system or another
configuration comprising motherboards and associated processing, storage and I/O elements.
Alternatively, the resource can be a logical aggregation of these components: multinode resources
distributed locally or geographically over networks in enterprise data centers or cloud hosting sites.
The multinode resource can be a shared resource of multiple nodes (of cores, memory, etc.)
defined by workload domains, which, in turn, reside in racks on a network.

The instantiation of servers as nodes, and nodes as resources attached to resource pools, creates
the modern fabric infrastructure. This perspective emphasizes resources as managed entities, or
instances, that are logically integrated and available to applications through aggregation and
sharing, most often abstracted from the hardware as virtual machines (VMs). Thus, the concept of
the server is becoming outdated, although the server as an object will remain a definitional
convenience in the procurement process for years. Users should understand that the server
concept in a modernized data center denotes a single identity and a specific role. Contrast this
with the broader requirements of cloud and fabric, which imply that applications should not care
about the size or scale of computing, the place of execution, bounded configurations, or physical housing.

The evolution in how we think about servers stems from the increase in server processing
capability. Servers now include many functions and features traditionally performed by
dedicated, specialist devices. This relates to the rise of intelligent management, virtualization,
improved storage features and network connectivity. These enhancements will support real-time
business demands of workloads, near-instant resource elasticity and compression, location and
topology independence, and energy constraints and controls. This evolution will challenge suppliers
to rethink portfolio strategies, including sizing, pricing, integration, reference architectures and
partnerships. The concept does not exclude tightly integrated mainframes or other systems, as long
as the resources can be functionally diverse, heterogeneous and location independent. Not all
applications and workloads will necessarily mandate a fabric infrastructure of managed resource
pools.

The Hype Cycle


There are many questions around server technology:

■ Is the server market in a major transition from the old-line traditional vendors and their general-
purpose products (IBM, HP, Dell, Oracle/Sun) to new and aggressive integrated fabric
infrastructure solutions (VCE Vblock, Cisco/NetApp FlexPod, Oracle Exadata, IBM PureFlex,
HP CloudSystem, etc.)? Or, will radical low-power-consumption processors in open computing
chassis for cloud, mobility, consumer and virtualization workloads drive new scale, form and
energy factors, with new entrants, by 2015 to 2020?
■ Is the server even a useful procurement entity or an irrelevant artifact of yesterday in a
virtualized environment?
■ What are the new, more relevant procurement evaluation factors and can you measure them?
■ How will the cloud and integrated system wars reshape the server vendor landscape?
■ What's the role of the appliance? Where do traditional reduced instruction set computer (RISC)
and mainframe architectures and products fit going forward?

The traditional concept of the role-specific server has been undergoing a long-term evolutionary
process, but the intensity has picked up, causing traditional approaches to be called into question.
Vendors are feverishly churning out new and special-purpose optimized solutions, appliances and
integrated offerings that may deliver additional value to users, but at higher profit levels for the
vendors.

New technologies, new modes of computing, infrastructure virtualization, private cloud computing,
integrated systems and appliances, and automation are at the heart of the changes that will affect
the server equipment that populates today's data centers. Data center virtualization, fabric-based
computing, software-defined networking and storage, extreme low-energy consumption
microprocessors and systems, big data storage growth and convergence, and cloud computing are
just a few of the examples of these dynamic changes that will affect decision making. In dynamic
and virtualized infrastructures, the role of the operating system is becoming a less important focus
than the infrastructure as a service and cloud infrastructure. The 2013 iteration of the server
technologies Hype Cycle (see Figure 1) provides valuable insights into many of the technologies
that will shape the evolution of the server market over the next 10 years.

"Hype Cycle for Data Center Power and Cooling Technologies, 2013" and "Hype Cycle for
Virtualization, 2013" are two companion reports that are valuable additions to the Gartner portfolio
of Hype Cycles. We have attempted to minimize the overlap of technologies across these Hype
Cycles, but several technologies have been developed out of these initiatives that merit inclusion in
the server technologies Hype Cycle.

Several entrants in the 2013 server Hype Cycle, present on the 2012 Hype Cycle, remain ones to
watch. The Open Compute Project is focused on improving efficiencies for data center
infrastructure. Led by Facebook, the idea of this consortium is that more openness and
collaboration will likely mean a faster pace of innovation in infrastructure technology, greater
accessibility to the best possible technology for all, more efficiency in scale computing and a
reduced environmental impact through the sharing of best practices. While organizations with a
large, highly scaled-out infrastructure are those likely to benefit the most in the short term, the
potential advances can have a ripple impact across traditional enterprises and their data centers.
Open Compute could affect extreme low-energy servers, multinode servers and HPC-optimized
server designs.

Another entry in transition on this Hype Cycle focuses on appliances. "Appliances" is a term that
encapsulates many aspects of integrated systems and solutions. It can have many meanings and
interpretations and, therefore, vendor and market derivatives (see "How IT Departments Must
Change to Exploit Different Types of Appliances"). However, appliances are more than just bundles
and marketing, and offer joint technology, hardware, software management and services. Initially,
appliances were single-function systems, but they have evolved to also include multiple-function
single stacks, as well as soft appliances: a single-function workload on a virtual machine,
functioning as a separate "bubble."

There are a number of entrants that have seen significant movement over the past year.

■ HPC-optimized server designs saw the most dramatic movement during the past year. Several
vendors focused on creating more-workload-specific or optimized special-purpose designs that
can support higher margins than general-purpose machines, and we expect HPC-optimized
servers to move fully to the end of the Plateau of Productivity by 2014.
■ High-density racks (>100 kilowatts) continue to move up rather quickly. Expect to see 100kW
racks by 2015. As density and performance increase, expect to see servers deployed, managed
and retired by the rack.
■ VM energy management tools also advanced significantly over the past year. VM energy
management tools will continue to improve during the next few years. However, uptake of these
tools still remains low, because users are unclear about the ways in which the products can
provide short-term benefits.
■ Multinode servers (referred to as skinless servers in prior Hype Cycles) also advanced
significantly. Multinode servers are a more rack-dense form factor that has emerged in the
past four years to address many extreme scale-out workload requirements that, ironically,
blades were first designed to address. True multinode servers typically lack the availability of
the richer tooling that benefits blade environments, so the two form factors address distinctly
different workload needs. The rapid growth of multinode servers has largely contributed to the
stalling of the blade server market, because cannibalization from multinode servers has eroded
the growth that blades are gaining through increased market adoption of fabric-based
infrastructure (FBI). Based on Gartner's 2012 data, multinode servers outsell blade servers in
unit terms (despite being sold for only a few years and addressing a more limited volume
market). In 2012, multinode servers represented 15% unit share (nearly doubled from 7.8% in
2011) and 10% revenue share (again nearly doubled from 5.3% in 2011). This represents a unit
share increase of more than 14,000% since we started measuring this market in 2010, and helps
explain why this category has made rapid progress through the Hype Cycle.
■ x86 Servers with eight sockets (and above) have advanced rapidly for the second year in a row.
The long-term viability of the eight-socket-and-larger market will be determined by the pace of
adoption of alternative technologies, such as extreme low-energy servers and fabric computing,
because these emerging technologies could make the need for single, large-scale x86 servers
obsolete before the eight-socket technology reaches the Hype Cycle's Plateau of Productivity.
This does not mean the market need for large, single images will dissipate, but the market need
will increasingly be addressed by fabric-based servers that can be aggregated to satisfy
combinations of scale-up and scale-out requirements.

The one new entrant to this year's Hype Cycle is for high-temperature servers. High-temperature
servers assume substantial savings in electricity can be achieved by reducing the operation of
facilities' cooling fans and chillers, and letting IT equipment run hot. As part of the ongoing effort to
reduce energy consumption, data center managers, as well as server technologists, are starting to
lobby for higher-temperature operation. Fear about temperature-related failures will likely be a
gating factor. Thus, it will take two to five years for high-temperature servers to penetrate the
market.

We have chosen to rename one Hype Cycle entrant to better describe the technology. Skinless
servers has been changed to multinode servers.


Figure 1. Hype Cycle for Server Technologies, 2013

[Figure omitted: the 45 profiled technologies plotted by expectations over time across the Innovation Trigger, Peak of Inflated Expectations, Trough of Disillusionment, Slope of Enlightenment and Plateau of Productivity phases, with each technology coded by its time to plateau (less than 2 years, 2 to 5 years, 5 to 10 years, more than 10 years, or obsolete before plateau). As of July 2013.]

Source: Gartner (July 2013)


The Priority Matrix


The Priority Matrix (see Figure 2) maps the benefit rating for each technology against the length of
time before Gartner expects it to reach the beginning of mainstream adoption. This alternate
perspective can help users determine how to prioritize their server technology investments. Many
are keyed to server virtualization (including provisioning and configuration management), but others
are infrastructure improvements, such as fabric and cloud, chip, board and system design and
scaling enhancements, along with important management and monitoring tools, OS enhancements
and data center management. In general, companies should begin in the upper left quadrant of the
Priority Matrix, where the technologies will have the most dramatic effects on business processes,
revenue or cost-cutting efforts (transformational), and are available now or will be in the near future.

Some of the technologies have already advanced toward the furthest end of the Plateau of
Productivity (such as Linux on System z, VM hypervisor and mainframe specialty engines). Those on
the Slope of Enlightenment should be noted for their movement close to the Plateau of Productivity,
such as graphics card application acceleration, capacity on demand (Unix) and server virtual I/O.


Figure 2. Priority Matrix for Server Technologies, 2013

[Figure omitted: matrix mapping each technology's benefit rating (transformational, high, moderate or low) against its years to mainstream adoption (less than 2 years, 2 to 5 years, 5 to 10 years or more than 10 years). As of July 2013.]

Source: Gartner (July 2013)

Off the Hype Cycle


These entries have been removed from the Hype Cycle:

■ Skinless servers — This technology analysis has been renamed multinode servers.


■ High-performance computing clusters: Windows — This entry has been removed from the
server technologies Hype Cycle because it has moved off the Plateau of Productivity.
■ Multicore processors — This entry was taken off the Hype Cycle because, as we forecast a year
ago, it has reached a stable level of maturity.

On the Rise

Power Adaptive Algorithms


Analysis By: Carl Claunch

Definition: Power adaptive algorithms select their method of computation, varying the intensity of
computing resource use (and thus energy use) to accomplish different objectives for power savings
or service levels. This requires the designer of the adaptive software to consider multiple ways to
accomplish the function, code each method and provide a mechanism to switch between them
based on the desired policy.

Position and Adoption Speed Justification: Because this is an entirely new way of thinking about
OSs, middleware and applications, it will take time before developers routinely include power
adaptive strategies in their code. Typical users will run the current version of the software for
perhaps as long as five or 10 years before changing versions, delaying the use of the newest
version, which might have power adaptive functions. Finally, until a more standardized mechanism
arises to define power strategies and communicate them to power-adaptive-capable software
products from different developers, the practical use of these new approaches will be limited. This
means the implementation of power adaptive algorithms will be a
long-term effort; however, as developers make green concerns an increasing priority, there will be
progress. Much research is being done in universities and laboratories, but few products provide
multiple, selectable algorithms that vary performance and energy usage. The exception is in mobile
devices, where these techniques are being adopted to improve the effective battery life or
accommodate highly variable network performance. As a result of the challenges, this use case has
not progressed much over the past few years, but is sustaining its modest present level of interest
due to the potential benefits and to the long-term importance of power efficiency.

Here are three example adaptive policies:

■ Users preferring maximum greenness will ask the software to use the fewest resources
consistent with meeting deadlines.
■ Power loads and temperatures in the data center and near the server are monitored, the user
selecting lower-performance methods during temperature or power peaks, and otherwise
opting for higher-performance methods.
■ The system will switch its operation based on whether a device is on batteries, using an
uninterruptible power supply (UPS) or operating during periods of high drain on the electrical
grid. That is, it operates normally until the UPS kicks in; then, everything goes into an energy-
frugal mode, where some services are switched off or suspended, or a more relaxed set of
service-level targets may be activated.


A more sophisticated implementation might switch among the approaches of various resource
requirements based on the service-level guarantee set for users and on the progress this
application has made toward its goals. When the application is running on lightly loaded systems or
is ahead of the planned completion date, the software could toggle to lower power consumption
and a slower method. It would do the opposite if it were falling behind on deadlines or response
time commitments. This would provide automatic green savings whenever possible.
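
To make this concrete, the following is a minimal Python sketch of such a deadline-driven switch; the method names, runtime estimates and batch pacing are illustrative assumptions rather than part of any shipping product.

    import time

    def frugal_method(items):
        # Throttled path: process in small batches with deliberate idle gaps,
        # trading elapsed time for lower average power draw.
        for i in range(0, len(items), 100):
            _ = [x * x for x in items[i:i + 100]]
            time.sleep(0.001)

    def fast_method(items):
        # Full-speed path: finishes sooner but keeps the cores busy throughout.
        _ = [x * x for x in items]

    def process(items, deadline, est_frugal=0.5, est_fast=0.05):
        # Pick the energy-frugal method only when there is enough slack left
        # to still meet the deadline; otherwise fall back to the fast method.
        slack = deadline - time.monotonic()
        method = frugal_method if slack > est_frugal else fast_method
        method(items)
        return method.__name__

    print(process(list(range(10000)), deadline=time.monotonic() + 2.0))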

Many functions in OSs and other software have been coded with no regard to the economical use
of resources. A function may run in the background, burning cycles looking for things to do, when
an alternative design might awaken the function only as it is needed, or perhaps awaken it only
when a minimum queue of similar tasks is ready to be handled. Parts of the OS and hypervisors
need to be serialized, so that only one core in the entire system is running the particular sequence
of code at any time. A common technique for supporting this is to use a spin lock, in which other
cores waiting for their chance to run the code continually test the lock in a tight loop until it
becomes free. This approach consumes energy, while a more complex approach might leave the
waiting cores in a low-power state until the lock is free.
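
As an illustration of the energy difference between the two waiting strategies, here is a small Python sketch (illustrative only; production spin locks live in kernel and hypervisor code written in C): the spinning thread keeps a core busy until a flag flips, while the blocked thread is descheduled until it is signaled.

    import threading
    import time

    ready = False                # flag polled by the spinning waiter
    event = threading.Event()    # primitive used by the blocking waiter

    def spin_waiter():
        # Busy-wait: this loop keeps executing (and drawing power) until the
        # flag changes, even though there is no useful work to do.
        while not ready:
            pass

    def blocking_waiter():
        # Blocking wait: the thread sleeps, freeing the core to drop into a
        # low-power state until it is signaled.
        event.wait()

    spinner = threading.Thread(target=spin_waiter)
    blocker = threading.Thread(target=blocking_waiter)
    spinner.start()
    blocker.start()

    time.sleep(0.1)              # both threads are now waiting
    ready = True                 # release the spinner (it burned cycles meanwhile)
    event.set()                  # release the blocker (it slept)
    spinner.join()
    blocker.join()
    print("both released; only the spinner consumed CPU while waiting")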

User Advice: Keep this technology on your list for longer-range technology scans. Be prepared to
reap environmental-sensitivity benefits, as well as concrete power/heat management
improvements, once products emerge that implement this approach.

If this is to be a priority selection criterion for your organization during future purchases, notify your
key suppliers. This will help them prioritize your preferences among other requirements and desires
during product development.

As products that use power adaptive algorithms come to market, add the availability of such
algorithms to your decision framework as an additional selection criterion at the appropriate
weighting for your organization.

Business Impact: The primary value of power adaptive algorithms is in helping businesses meet
their corporate citizenship responsibilities and objectives, and in adopting more environmentally
sensitive technologies in their IT environments. For businesses with power or heat constraints,
power adaptive algorithms may enable the deferral of a data center expansion or construction.

Benefit Rating: Moderate

Market Penetration: Less than 1% of target audience

Maturity: Embryonic

Digital Signal Processor Acceleration


Analysis By: Carl Claunch

Definition: This analysis covers the use of digital signal processor (DSP) chips to run parts of an
application, leveraging the pipeline-based nature of the DSP to accelerate the execution compared
with its behavior on traditional processors. By minimizing data movement for algorithms that are
otherwise memory-bandwidth-limited, DSPs might increase compute rates.

Position and Adoption Speed Justification: The recent successes leveraging graphics processing
units (GPUs) to accelerate high-performance computing (HPC) applications have increased the
interest in other forms of accelerators, such as field-programmable gate arrays (FPGA) or DSPs.
When the nature of the application is a filter that runs all the input data through some set of
transformations, when its performance on traditional systems is inadequate, and when the
constraints might be relieved through conversion to a data flow orientation, a DSP is a good
candidate for use.

Due to the nascent state of DSPs as technical computing accelerators, in the near term they will
mainly appeal to users who are comfortable working at the experimental, leading edge of the
market. The relative immaturity, the small fraction of all applications that will fit a data filter model,
the limited tools and skills available, and other constraints are the main barriers to faster or wider
adoption.

User Advice: DSPs are specialized and will fit software that can be centered on the notion of a
pipeline through which large amounts of data are funneled, applying substantially the same
processing to all the data elements. Before proceeding to a pilot project, users should ensure that
the software being evaluated matches this pattern, has been primarily limited by data movement
inside the server in its current version and can be rewritten to exploit a DSP. Potential users should
verify that the improvement in speed is significant, warranting the uniqueness of the DSP approach
and the effort involved in adapting the application to this technology. Software providers should
understand how DSPs can be leveraged and make use of it when a good fit is found that delivers
compelling additional value to buyers of their software products.
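
The kind of kernel that fits this pattern is illustrated by the short Python/NumPy sketch below of a finite impulse response (FIR) filter: every sample flows through the same fixed set of multiply-accumulate steps, which is the streaming shape a DSP pipeline accelerates. The tap values and test signal are illustrative only.

    import numpy as np

    def fir_filter(samples, taps):
        # Each output sample is the same weighted sum applied to a sliding
        # window of inputs -- a pure streaming transformation.
        return np.convolve(samples, taps, mode="same")

    t = np.linspace(0.0, 1.0, 1000)
    signal = np.sin(2 * np.pi * 5 * t) + 0.2 * np.random.randn(t.size)
    taps = np.ones(5) / 5          # 5-tap moving-average filter
    smoothed = fir_filter(signal, taps)
    print(smoothed[:5])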

Business Impact: DSPs may make hitherto impractical applications feasible to run, or allow them
to be run more often, providing benefits such as improved productivity for the users of the
application.

Benefit Rating: Moderate

Market Penetration: Less than 1% of target audience

Maturity: Embryonic

Sample Vendors: Analog Devices; Freescale; Texas Instruments (TI)

Quantum Computing
Analysis By: Jim Tully

Definition: Quantum computers use quantum mechanical states for computation. Data is held in
quantum bits (qubits), which have the ability to hold all possible states simultaneously. This
property, known as "superposition," gives quantum computers the ability to operate exponentially
faster than conventional computers as word length is increased. The data held in qubits is
influenced by data held in other qubits, even when physically separated. This effect is known as
"entanglement." Achieving both superposition and entanglement is extremely challenging.


Position and Adoption Speed Justification: A large number of technologies are being researched
to facilitate quantum computing. These include:

■ Optical lasers
■ Superconductivity
■ Nuclear magnetic resonance
■ Quantum dots
■ Trapped ions

No particular technology has found favor among a majority of researchers, supporting our position
that the technology is in the relatively early research stage.

Some classes of problem would be executed extremely fast with quantum computers, including:

■ Optimization
■ Code breaking
■ DNA and other forms of molecular modeling
■ Protein folding
■ Large database access
■ Encryption
■ Stress analysis for mechanical systems
■ Pattern matching
■ Image analysis

A few of these applications rely on algorithms that have been developed specifically for quantum
computers. Many of these algorithms produce an output in a probability form, requiring multiple
runs to achieve a more accurate result. One example is Grover's algorithm, designed for searching
an unsorted database. Another is Shor's algorithm, for integer factorization. Many of the research
efforts in quantum computing use one of these algorithms to demonstrate the effectiveness of their
solution. The first execution of Shor's algorithm, for example, used nuclear magnetic resonance
(NMR) techniques, and took place at IBM's Almaden Research Center and Stanford University in
2001. Since then, the focus has been on increasing the number of qubits available for computation,
but this is proving to be very challenging. IBM had demonstrated factorization of the number 15
using five qubits. The latest published achievement is a factorization of the number 21 at the
University of Bristol in 2012. The technique used in that case was to reuse and recycle qubits during
the computation process in order to minimize the required number of qubits. The practical
applications indicated by these examples are clearly very limited in scope.
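
As a concrete (classical) illustration of why these algorithms attract so much attention, the short NumPy sketch below simulates one Grover iteration over a four-entry search space; on a real quantum computer the amplitudes would be carried by qubits rather than an explicit state vector.

    import numpy as np

    n_qubits = 2
    N = 2 ** n_qubits            # size of the search space
    marked = 3                   # index of the item being searched for

    # Start in the uniform superposition over all basis states.
    state = np.full(N, 1.0 / np.sqrt(N))

    # Oracle: flip the phase of the marked state.
    oracle = np.eye(N)
    oracle[marked, marked] = -1.0

    # Diffusion operator: inversion about the mean amplitude.
    diffusion = 2.0 * np.full((N, N), 1.0 / N) - np.eye(N)

    # For N = 4, a single Grover iteration suffices.
    state = diffusion @ (oracle @ state)

    # Measurement probabilities: the marked state now has probability ~1.
    print(np.round(state ** 2, 3))   # [0. 0. 0. 1.]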

In February 2007, D-Wave Systems demonstrated a 16-qubit quantum computer, based on a
supercooled chip arranged as 4x4 elements. The company followed this with demonstrations of
larger qubit counts. Lockheed Martin subsequently purchased a D-Wave One computer, which is
now in operation at the University of Southern California's facility. Within the past few months, a
439-qubit system was demonstrated by D-Wave Systems.

To date, D-Wave's demonstrations have involved superposition, but have not demonstrated
entanglement. Therefore, D-Wave has focused its attention on the use of quantum techniques for
adiabatic processing for optimization purposes; a topic known as "quantum annealing." This
technique finds the mathematical minimum in a dataset very quickly. There are many types of
problems where quantum adiabatic processing will provide a significant improvement in the scale of
the problem that needs to be addressed. Google, for example, is collaborating with D-Wave in the
area of machine-learning research. However, without quantum entanglement, D-Wave computers
cannot attack the major algorithms demonstrated by the smaller quantum computers that do
demonstrate entanglement.

Most of the research we observe in quantum computers relates to specialized and dedicated
applications. We are gradually forming the opinion that general-purpose quantum computers will
never be realized. They will instead be dedicated to a narrow class of use — such as the
optimization engine of D-Wave Systems. This suggests architectures where traditional computers
offload specific calculations to dedicated quantum acceleration engines.

Qubits must be held and linked in a closed quantum environment and must not be allowed to
interact with the outside world, because they are very susceptible to the effects of noise. Two
stages are involved in quantum computation. Stage one involves execution of the algorithm, and
stage two is the measurement of the resulting data. Measurement is extremely difficult and typically
results in decoherence (destruction of the quantum state), because it involves interaction with the
outside world.

Considerable problems exist in increasing the number of linked qubits available for computation,
because of noise. The slightest amount of noise or interference will cause the system to drop out of
the quantum state and generate random results.

This noise is minimized using two techniques:

■ Operating at very low temperatures using superconductors close to absolute zero.


■ Enclosing the system within an intense magnetic field (or a comparable shielding scheme) for
isolation reasons.

Shielding is probably the biggest single problem in quantum computing. In practical quantum
computers, total isolation would not be feasible — so error correction schemes are being developed
to compensate for small amounts of interference. Much of the current research on quantum
computing is focused on these error correction schemes. Averaging out errors through multiple
computations is the most promising approach, because it is not clear that fundamental quantum
noise can be reduced. The challenge is to achieve a runtime long enough to facilitate error
correction. IBM places this threshold at 10 to 100 microseconds. Some kinds of quantum
cryptography actually make use of this difficulty in maintaining the quantum state. In quantum key
distribution, for example, unauthorized access to the key can be detected through observation of
the destroyed quantum state.


The technology continues to attract significant funding, and a great deal of research is being carried
out. However, we have not seen any significant progress on the topic over the past year and we
have therefore left the technology's position on the Hype Cycle unchanged.

User Advice: If a quantum computer offering appears, check on the usefulness across the range of
applications that you require. It will probably be dedicated to a specific application and this may be
too narrow to justify a purchase. Check if access is offered as a service. This may be sufficient, at
least for occasional computing requirements. Some user organizations may require internal
computing resources, for security or other reasons. In these cases, use of the computer on a
service basis — at least initially — would offer a good foundation on which to evaluate its
capabilities.

Business Impact: Quantum computing could have a huge effect, especially in areas such as
optimization, code breaking, DNA and other forms of molecular modeling, large database access,
encryption, stress analysis for mechanical systems, pattern matching, image analysis and (possibly)
weather forecasting. "Big data" analytics is likely to be a primary driver over the next several years.

Benefit Rating: Transformational

Market Penetration: Less than 1% of target audience

Maturity: Embryonic

Sample Vendors: D-Wave Systems; Delft University of Technology; IBM; Stanford University;
University of Bristol; University of Michigan; University of Southern California; Yale University

Optical System Buses


Analysis By: Carl Claunch

Definition: Internal optical interconnect refers to optical signaling used to replace electrical
connections in system buses. Optical transducers will appear integrated into memory, interconnects
and processor modules, significantly reducing pin counts, while boosting performance.

Position and Adoption Speed Justification: Intel and other vendors are demonstrating silicon-
based detectors and modulators. Light sources using silicon bonding are on the horizon. Optical
signals can be carried in silicon, without requiring discrete fiber-optic cables. These devices can be
built on standard complementary metal-oxide semiconductor (CMOS) processes, and can,
therefore, be integrated into processors and other system components:

■ IBM Research has its active silicon integrated nanophotonics project.


■ Intel Labs is working on its silicon photonics efforts, and has announced substantial progress
toward production during the recent Open Compute Summit in January 2013.
■ The Helios (pHotonics ELectronics functional Integration on CMOS) consortium in the European
Union is driving similar work.


■ The U.S. Defense Advanced Research Projects Agency (DARPA) is managing an
Ultraperformance Nanophotonic Intrachip Communications program.

Performance of optical interconnects can scale through clock speed increases, the use of multiple
wavelengths per fiber and differential focusing. Beams of light can cross without signal interference,
supporting easier layouts than conductive connections. Copper buses are reaching the limits of
their performance, and they are moving beyond the limits of reasonable-cost connectors. The ability
to support increasing performance through the use of ever-higher frequencies over copper
conductors becomes more challenging in each generation, and will eventually fail to deliver the rate
of capacity improvements we've seen historically under Moore's Law. As fabric-based servers
emerge, optical interconnect will be adopted because of its simplicity, scalability and performance.
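
A back-of-envelope comparison in Python illustrates the scaling argument; the lane counts, per-lane rates and wavelength figures below are illustrative assumptions, not measured specifications of any product.

    # Electrical bus: one differential pair per lane, per direction.
    copper_lanes = 16
    copper_gbps_per_lane = 8                  # roughly a PCIe 3.0-class lane
    copper_total = copper_lanes * copper_gbps_per_lane

    # Optical bus: several wavelengths multiplexed on each fiber.
    fibers = 2
    wavelengths_per_fiber = 8
    gbps_per_wavelength = 25
    optical_total = fibers * wavelengths_per_fiber * gbps_per_wavelength

    print(f"copper:  {copper_total} Gbps over about {copper_lanes * 4} signal pins")
    print(f"optical: {optical_total} Gbps over {fibers} fibers")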

Initial implementations inside a server will convert copper buses to optical links in separate
packages on the circuit board. These packages will rapidly be integrated into multichip modules.
Ultimately, optical circuit boards and optical chip distribution will emerge, but such technologies are
years away from practical deployment.

Optical system buses have great potential to displace technologies such as HyperTransport and
QuickPath Interconnect (QPI) through single- or dual-fiber interfaces. A single industry standard
may also emerge.

Activity in this area has increased, both in technology provider conversations and in basic R&D,
reflecting progress toward commercialization.

User Advice: Plan for server density and performance scaling to continue through at least 2022,
supported, in part, by a transition to optical system buses before the end of this period.

Business Impact: Optical interconnects will enable systems to become more compact by reducing
pin counts and bus widths. Racks using internal optical fabric could contain 1,000 or more servers,
all interconnected with an optical backplane at high bus speeds. Components and facilities will
become shareable across racks and throughout entire data centers. This technology solves one of
the issues that would have stopped data center processing capacities from continuing to grow on
an exponential path.

Benefit Rating: High

Market Penetration: Less than 1% of target audience

Maturity: Embryonic

Sample Vendors: IBM; Intel

Instruction Set Virtualization


Analysis By: Carl Claunch

Definition: Instruction set virtualization emulates the instruction set of one processor type by
hardware, firmware and/or software running on a different processor type. It can apply emulation to
individual programs or on entire virtual machines. A user who owns software developed to run on
one chip type would be able to use it on a server with incompatible chips. The second type
emulates the entire system, allowing an OS built for the emulated chip type to boot up and run
software as if it were running on a physical machine of the emulated architecture.
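
To show the basic mechanism at program level, here is a minimal Python sketch of an interpreter for a made-up three-instruction guest ISA; real products such as QEMU add dynamic binary translation and full device emulation, none of which is attempted here.

    def emulate(program):
        # Guest register file, held as ordinary host data.
        regs = {"r0": 0, "r1": 0}
        pc = 0
        while pc < len(program):
            op, *args = program[pc]
            if op == "load":             # load an immediate value into a register
                regs[args[0]] = args[1]
            elif op == "add":            # add the second register into the first
                regs[args[0]] += regs[args[1]]
            elif op == "halt":
                break
            pc += 1
        return regs

    # Guest program: r0 = 2; r1 = 40; r0 = r0 + r1
    guest = [("load", "r0", 2), ("load", "r1", 40), ("add", "r0", "r1"), ("halt",)]
    print(emulate(guest))                # {'r0': 42, 'r1': 40}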

Position and Adoption Speed Justification: Although this type of technology has been used for
years on mainframes and in client computing, it is less well-known by users of reduced instruction
set computer (RISC) and x86 servers. HP provides this capability on its Itanium-based servers
running HP-UX, permitting Precision Architecture Reduced Instruction Set Computer (PA-RISC)
code to run unchanged. The manifest advantages of instruction set virtualization include enabling
users to run versions of software long after they have eliminated the hardware on which the
software first ran. Users can potentially gain flexibility, efficiency and cost savings by enabling a
single pool of servers to run workloads that previously required several incompatible machines.
The growing realization of the benefits of instruction set virtualization, the relative ease with which
it can be accomplished, and the many successful examples should drive up the hype and
expectations concerning this capability during the next few years.

An example of this was the ability of a PowerPC-based Apple Mac to transparently run software
built for older Motorola 68K architectures and an x86-based Apple system to run software built for
the older PowerPC chips. An example of a systemwide implementation is the QEMU open-source
software, which can run MIPS, Alpha, ARM, SPARC and PowerPC environments as virtual
machines on an x86 hardware platform.

However, a technically elegant and well-performing solution is not enough to make a product line
technology transition go smoothly. Other factors and competitive strains can arise that swamp the
upsides of code portability from this technology — just look at the recent turbulence over Itanium
and HP-UX systems to see these other factors in operation.

Although instruction set virtualization may execute software perfectly, if the seller of that software
refuses to support it while running under instruction set virtualization, then few would consider
running important production work while unsupported. The maturation of this technology involves
more than just technical issues. Another issue to address is the appropriate software license fees.
For example, if a software maker charges more for one processor type than another, and a user of
the software is running it on what appears to be the more-expensive server type (higher software
fee), but is doing it through instruction set virtualization on actual hardware that is of a less-
expensive type, then will the software fees for the user be based on the real physical processor type
or the emulated processor type, or will the user be charged a third, different rate?

The ecosystem of software and peripheral product makers would need to adjust to this mode of
use. Performance under instruction set virtualization may be noticeably poorer than on the native
chip it is emulating, except that if an older and slower native chip is replaced by virtualization on a
faster and newer processor, then the net result could be better overall performance. Finally, several
products and companies offering such instruction set virtualization have been acquired or legally
blocked from selling the capability; for example, Transitive was acquired by IBM in 2008, and its
technology remains the foundation of IBM's PowerVM Lx86 feature. Thus, competitive dynamics
will also affect the rate at which this matures and grows in the market.


User Advice: If you are offered instruction set virtualization to continue running older applications or
entire server images while shifting to different server types, then ensure that the makers of all the
software and hardware you use will support the products in this mode. Seek commitments for the
performance you expect to experience under such virtualization and, if those performance levels are
critical to the business, seek guarantees that needed service levels can be delivered.

Business Impact: Instruction set virtualization that enables the use of older software for a longer
period can reduce the need for expensive migrations and end-user retraining, as well as the
purchase of alternative software. The longer-range potential to run different OSs for multiple chip
types as VMs under a single hardware platform extends the consolidation benefits of virtualization
across islands of incompatible system types.

Benefit Rating: Moderate

Market Penetration: Less than 1% of target audience

Maturity: Emerging

Sample Vendors: Apple; Bull; Fujitsu; HP Integrity; IBM; Stromasys; Unisys

Fabric-Based Computing
Analysis By: Andrew Butler; George J. Weiss; Donna Scott

Definition: Fabric-based computing (FBC) is a modular form of computing in which a system can
be aggregated from separate (or disaggregated) building-block modules connected over a fabric or
switched backplane. Unlike fabric-based infrastructure (FBI), which groups and packages existing
technology elements in a fabric-enabled environment, the technology ingredients of an FBC solution
will be designed solely around the fabric implementation model.

Position and Adoption Speed Justification: The early market for FBC largely stalled in 2011 and
2012, as the recession contributed to most FBC vendors (like Fabric7 Systems, Liquid Computing
and 3Leaf Systems) going out of business. Egenera is an FBC vendor that survived the recession,
but it no longer sells FBC hardware and instead focuses on management software. Consequently, FBI
solutions have made the greatest short-term progress, as they can be implemented more easily and
quickly from more-conventional (and thus proven) technology elements. New vendors, such as
Nutanix, have emerged that will help stimulate the market opportunity for FBC vendors in 2013 and
beyond. HP's Moonshot server launch is also based on technology that could evolve into the FBC
category.

In its ultimate form, FBC becomes a fully disaggregated set of technology elements, with separate
processor, memory, input/output (I/O), storage (disk and flash), graphics processor units (GPUs),
network processor units (NPUs) and other offload modules. This is a theoretical model that will take
many years to evolve. These are connected to a switched interconnect and, importantly, to the
software required to configure (aggregate and/or partition) and manage the resulting systems.
Independent scaling is achieved as the fabric interconnects all internal communication, including
processor-to-memory, processor-to-processor and memory-to-network communication. The speed
and latency of the fabric are critical to the performance of the logical server designs. The design
principles of FBC heavily influence the hardware platform strategies of most data center vendors.
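
A minimal Python sketch of that composition model follows; the module types, capacities and compose() call are hypothetical stand-ins for the fabric management software described above, not any vendor's API.

    from dataclasses import dataclass, field

    @dataclass
    class Module:
        kind: str        # "cpu", "memory", "io", ...
        capacity: int    # cores, GB, Gbps, ...
        in_use: bool = False

    @dataclass
    class Fabric:
        pool: list = field(default_factory=list)

        def add(self, module):
            self.pool.append(module)

        def compose(self, **needs):
            """Aggregate free modules into a logical server meeting `needs`."""
            chosen, remaining = [], dict(needs)
            for m in self.pool:
                if not m.in_use and remaining.get(m.kind, 0) > 0:
                    chosen.append(m)
                    remaining[m.kind] -= m.capacity
            if any(v > 0 for v in remaining.values()):
                raise RuntimeError("insufficient free resources in the pool")
            for m in chosen:
                m.in_use = True          # reserve the modules for this server
            return chosen

    fabric = Fabric()
    for _ in range(4):
        fabric.add(Module("cpu", 16))        # 16-core processor module
        fabric.add(Module("memory", 256))    # 256 GB memory module

    server = fabric.compose(cpu=32, memory=512)
    print(f"logical server composed from {len(server)} modules")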

With all system vendors espousing a commitment to infrastructure convergence, interest in fabric
enablement has multiple drivers, such as virtualization, data center rationalization and private/hybrid
cloud. This is helping to drive the short-term growth of FBI solutions, which, in turn, will stimulate
the development of hardware disaggregation and management tool automation that is needed for a
proper FBC, within certain constraints such as memory/CPU proximity.

User Advice: No single fabric approach will work for all IT organizations. Therefore, IT leaders will
need months of planning to map internal technologies, tools, application modernization, business
practices and operational processes. An FBC remains a development goal for most vendors.
However, existing FBI solutions will deliver some of the objectives, while enabling data centers to
evolve their organizational structures to better implement future platforms, where the separation of
compute, storage and networking elements will become seamless. Because FBI delivers many of
the promises of infrastructure convergence, this puts pressure on FBC vendors to demonstrate
compelling and unique differentiation not only versus best-of-breed hardware, but also from FBI
solutions. Effective deployment of FBC benefits most from a fresh and open approach to internal
data center governance; therefore, investing organizations must define closer collaboration between
the development and administration teams responsible for storage, server and networking
implementation. Organizations should look for tools and guidance to implement this effectively,
such as workshops and maturity models.

Business Impact: Major data center vendors are selling the promises of FBC integrated as part of
existing unified and converged FBIs, although the goal of tight integration across management tools
for different technologies has yet to mature. As with FBI, FBC promises business benefits in terms
of reduced infrastructure and cabling costs, increased flexibility, and an infrastructure onramp to
cloud services. As the technology matures, a common management framework that is increasingly
automated will provide additional operational benefits. The highly granular nature of FBC means
that people should be able to buy what they need, when they need it. However, this is all a work in
progress. The intelligence to optimize the environment in a dynamic way (e.g., real-time
infrastructure architectures) with automated reconfiguration will fall on the client to design and
integrate, or will demand that system integration specialists and vendor partners deliver end-to-end
services. As the concepts of FBI and FBC converge, users should be able to realize more
virtualization benefits through flexible workload placement and optimization that meet performance,
availability and efficiency requirements.

Benefit Rating: High

Market Penetration: Less than 1% of target audience

Maturity: Emerging

Sample Vendors: Egenera; HP; Nutanix

Recommended Reading: "The Impact of Fabric-Based Computing on the Server Market"

Server Power Capping


Analysis By: Rakesh Kumar

Definition: Server power capping involves the use of software tools that limit the amount of
electrical power allocated to a server. When that threshold is reached, the tool will restrict further
workloads. Although this approach seems logical and simple, it masks a great deal of complexity
around system integrity, software support and operational impact.
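
For illustration only, the following minimal Python sketch shows the admission logic that a capping
policy implies; the class and its thresholds are hypothetical and do not represent any vendor's tool,
which would enforce the cap in firmware or management software rather than in application code.

    # Hypothetical sketch of a power-capping admission policy; real tools enforce
    # the cap in firmware/management software, not in application code.
    class PowerCapPolicy:
        def __init__(self, cap_watts):
            self.cap_watts = cap_watts       # policy-defined ceiling for the server
            self.allocated_watts = 0.0       # estimated draw of admitted workloads

        def can_admit(self, estimated_watts):
            # Admit new work only while the server stays under its cap.
            return self.allocated_watts + estimated_watts <= self.cap_watts

        def admit(self, estimated_watts):
            if not self.can_admit(estimated_watts):
                raise RuntimeError("Power cap reached; workload deferred")
            self.allocated_watts += estimated_watts

    policy = PowerCapPolicy(cap_watts=350)
    policy.admit(120)                  # accepted
    print(policy.can_admit(300))       # False: 120 + 300 would exceed the 350 W cap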

Position and Adoption Speed Justification: Server power-capping tools have been available for
more than four years and, although they have matured, they require careful planning. Organizations
use and operate servers in different ways. Data centers have service and performance levels that
need to be met, and the capping of a server's capacity and performance based on energy use
should never compromise those service levels. A power-capping tool restricts or governs workload
capacity based on defined policies that combine technology, business and financial considerations.
However, many data centers are under pressure to better use their hardware assets and lower the
operational costs of running their data centers. Therefore, the use of server power-capping tools will
increase during the next few years.

Another issue that users need to be aware of is that system vendors will attempt to use these tools
for competitive advantage by embedding their tools in their hardware products in a proprietary
manner. Moreover, they will link these tools to their own system and network management
consoles. Although most APIs will be open enough to allow users to integrate the tools with other
products, this requires a degree of software development and maintenance.

Over the last year, most system vendors have improved their server power-capping capabilities, and
organizations have started to use the technique in noncritical environments. While power capping is
still a niche technology, more organizations are beginning to accept it in selected development
environments and some low-profile production applications.

User Advice: Server power-capping tools will need to be integrated with other data center system
and performance management tools, as well as other data center infrastructure management
(DCIM) products. Data center efficiency dashboards should incorporate data from these tools.
Users need to understand the effect of these tools on operational processes, especially in the areas
of performance, availability and service delivery. Infrastructure teams must work closely with their
operational teams to ensure that the power-capping tools are used successfully. Power-capping
tools also enable administrators to reduce the risk of computer overload, which could otherwise
drive server power consumption beyond the limit of the data center and cause a catastrophic
shutdown.

Business Impact: The appropriate use of these power-capping tools will enable data center
managers to better manage the energy portions of their budgets and will help internal, energy-
based chargeback.

Benefit Rating: Moderate

Market Penetration: Less than 1% of target audience

Maturity: Embryonic

Sample Vendors: CA Technologies; HP; IBM

Recommended Reading: "How to Select and Implement DCIM Tools"

Open Compute
Analysis By: Adrian O'Connell

Definition: The Open Compute Project is an open, community-based project, originated by
Facebook in April 2011. Its aim is to achieve the greatest efficiencies for data center infrastructure.
It is focused on releasing specifications and designs for data center infrastructure elements, with
the community then able to identify areas for further improvement. Initial elements released included
mechanical and electrical designs, motherboards, power supplies, and server chassis. These have
expanded to include storage (Open Vault), racks and hardware management.

Position and Adoption Speed Justification: The approach taken by the Open Compute Project
has the potential to be highly disruptive to the market. As of May 2013, more than 100 organizations
had joined the project, including Intel, HP, Rackspace and Dell, roughly double the number of
participants compared with a year earlier. In simple terms, this collaborative, community-based
approach could be seen as the ultimate level of hardware commoditization. However, the area has
importance beyond simple commoditization. Given the requirements of the initial users, Open
Compute is also spurring innovation in areas that address their needs, such as disaggregation,
extreme low-energy (ELE)-based systems and management software compatibility.

Intel has long promoted the concept of the "standard high-volume" server, which represented the
ubiquity of x86-based platforms as they became ever more capable in the market. However, although
this is a relatively commoditized segment, there is still a lot of innovation and differentiation among
the various server hardware providers. The industry has never reached the point where a common
building block, completely interchangeable across providers, is available.

Server hardware providers traditionally have a number of core competencies: they apply their own
R&D to optimize designs, select and source components, integrate peripherals, market and sell
through channels and direct sales, and integrate OS, middleware and applications into solutions.
However, the community-based approach that Open Compute is driving has the potential to shift the
relevance of these traditional server vendor competencies.

Much of the traditional need for service and support is becoming a less compelling differentiator,
given the resilience of modern hardware, OS and firmware. In addition, many large Internet
data centers (DCs) have their own technical skills, such as integration, and want to minimize costs.
The implication is that large user organizations will approach their infrastructure differently than
traditional commercial DCs do, and their relationships with suppliers will also change significantly.
Given the different priorities and skills in this area, the drivers and limitations are likely to be as
much about business challenges and opportunities as about technology factors.

The overall effect could produce profound market and development changes in the long term: If a
critical mass of data center customers can work together effectively to collaborate on development
and design, this new approach potentially limits the innovation driven by the hardware vendors.
Hardware vendor differentiation will shrink and limit overall profits, which could result in lower overall
R&D investments. Manufacturing remains important, but sourcing could shift, and it may become
increasingly effective for big IT data centers to go to a contract manufacturer or original design
manufacturer, rather than a traditional supplier. Other suppliers could thus gain share against the
traditional vendors, possibly forcing some vendors out of the x86 market.

We expect some of these same forces will occur in other similar endeavors, such as Open Vault for
storage. Although a different organization, the Open Networking Foundation could also drive related
shifts in the networking area.

User Advice: Short term, the benefits of Open Compute are mainly for hyperscale data centers,
with large, uniform server installations. Longer term, there could be broader relevance. For example,
if Open Vault and Open Rack are adopted, the impact will be felt by all organizations eventually.

Because Open Compute is part of a holistic approach, including data center design and strategy, IT
leaders with large, scale-out infrastructure should track, participate or correspond with the
organization and vendor participants of Open Compute to monitor benefits to their IT plans. These
benefits include not only lower technology costs through standardization, but also additional
leverage in vendor negotiations.

There is a high amount of activity here from the nontraditional server vendors, so enterprise users
with large-scale requirements may want to start evaluating these alternative vendors.

Business Impact: Organizations with a large, highly scaled-out infrastructure are those that stand
to benefit from the Open Compute Project's initiatives. Large-scale Internet companies, for whom
the data center infrastructure and operations are the central part of their cost of doing business, are
the key target market.

Although the large Internet companies are currently the main focus, similar buying centers exist in
adjacent companies. Large Web hosters or communications service providers may also benefit
from this approach.

As Webscale IT potentially expands, making these architectural approaches increasingly viable for
mainstream IT environments, traditional enterprises may increasingly see relevance in some of their
own buying centers. This may particularly be the case in environments where large organizations
have their own sets of scaled-out infrastructure, such as the online arm of a large retailer or a high-
performance computing (HPC)-like environment.

Benefit Rating: High

Market Penetration: 1% to 5% of target audience

Maturity: Emerging

Sample Vendors: AMD; Dell; Facebook; HP; Intel; QLogic; Quanta Computer; Rackspace

At the Peak

Appliances
Analysis By: Philip Dawson

Definition: "Appliances" is a generic term that encapsulates many aspects of integrated


infrastructure systems and related solutions cutting across data center, PC and software delivery as
integrated software stack systems. It can have many meanings and interpretations from the industry
and vendors with related market initiatives and derivatives. However, appliances are more than just
IT bundles with marketing; they offer joint technology, hardware, software and management
integration.

Position and Adoption Speed Justification: While appliances continue to integrate infrastructure
components and software platforms, or stacks, into packaged (integrated, efficient, managed,
cost-optimized) alternatives, they can be difficult to integrate and upgrade. Appliances also align the
asset cycles of separate components, which accelerates redundancy and obsolescence. The definition
of appliances is expected to continue to fragment. As the market matures, standards will develop only
within the appliance for a specific role or function, not across multiple appliances in the IT portfolio.
Appliances on the Hype Cycle could, therefore, fork, or strands and branches of appliances could
become obsolete within a couple of Hype Cycle iterations.

Initially, appliances were single-function systems, but they have evolved to include multiple-function
single stacks, as well as software appliances — a single-function workload on a virtual machine
(VM), functioning as a separate "bubble." Single-function software appliances used for security are
gaining ground, but general-purpose VM appliances and software packaging remain vaguely defined
and lag the adoption of physical appliances (even from Oracle and Microsoft). However, Oracle has
gained continued momentum with its Engineered Systems approach — especially with its "Exa"
products. The attraction here is not only integrated infrastructure, but also integrated platforms,
including Oracle software and licenses tied to the infrastructure.

Appliances are now also being promised and initially delivered in offerings like IBM PureSystems
and SAP HANA-based solutions, and there is increased activity from Microsoft and VMware with
software appliances. On balance, clients become fully reliant on the (primary) vendor as appliance
stacks become more complex. Adoption of appliances can lead to increased vendor lock-in and
can negatively affect price negotiation and contract leverage.

User Advice: When considering the use of appliances, focus on how they are integrated internally
and how readily they can be integrated with other appliances and/or more-traditional infrastructure
systems and solutions. Clients should look at how to manage and upgrade appliances alongside
traditional IT infrastructure investments and skills/operations. Appliances are difficult to assess for
total cost of ownership (TCO), and they increase lock-in, not only with traditional virtualized
infrastructure and platforms, but with other appliances as well. This leads to many apples-to-oranges
comparisons, often exaggerated by vendor hype and unfair TCO claims.

Business Impact: Collectively, appliances could take up to 5% to 10% of infrastructure, platform
and related software revenue. However, this collective figure is the sum of many small tactical gains
in separate markets (e.g., server storage, app servers, database management systems [DBMSs],
management and security). Cross-pollination and strategic growth will be plausible only if vendors
such as Oracle, IBM, SAP and partners use an appliance or integrated system model instead of
traditional siloed system sales, and allow appliances to not only complement, but also substitute for,
existing channels and revenue streams.

Benefit Rating: Low

Market Penetration: 1% to 5% of target audience

Maturity: Emerging

Sample Vendors: Azul Systems; HP; IBM (Netezza); Oracle; SAP; Teradata

Recommended Reading: "How to Evaluate the Value Proposition of Vendors' Appliances"

"How IT Departments Must Change to Exploit Different Types of Appliances"

High Temperature Servers


Analysis By: Rakesh Kumar; Steve Ohr

Definition: High temperature servers are IT equipment intended to function at ambient temperatures
higher than the common 21 degrees Celsius: some at 27 C, others at 40 C, and some specially
qualified equipment at 45 C, corresponding to the American Society of Heating, Refrigerating and
Air Conditioning Engineers (ASHRAE) classifications A2, A3 and A4, respectively.

Position and Adoption Speed Justification: The case for high temperature servers assumes that
substantial savings in electricity can be achieved by reducing the operation of facilities' cooling fans
and chillers, and letting the IT equipment run hot. Legacy equipment was specified to operate at 21 C.
Some current-generation servers are running at an ambient temperature of 27 C (81 degrees
Fahrenheit). For those concerned about the possibility of equipment failure at elevated
temperatures, Gartner has recommended 27 C as the target operating point for bringing up new
equipment. But there is evidence suggesting servers can run much hotter without inducing
component failures.

Observing that some IT equipment can run as hot as 50 C, ASHRAE's 2011 thermal guidelines for
data processing environments extended operating points for high ambient temperature equipment.
ASHRAE's A3 recommendation allows equipment to run up to 40 C (104 F), while the ASHRAE A4
specification describes operation at 45 C (113 F). Server hardware builders like Dell and IBM, along
with many semiconductor makers, are joining the chorus advocating higher ambient temperatures
for data centers.

User concerns revolve around the possibility of equipment failure that may be related to high
ambient temperature. Despite commercial semiconductor devices being specified and tested for
operation up to 70 C, data center managers — always nervous about uptime — will be hesitant to
raise the ambient temperature of their equipment racks beyond a few degrees. Any temperature-
related equipment failures would support "I told you so" arguments against implementation of
ASHRAE A3 or A4. The cost of cooling and air conditioning would be easier to bear than the cost of
losing servers during a critical application. Additionally, high ambient temperature-specified
equipment racks will initially share floor space with equipment capable of high ambient operation,
but not specified as such. Data center managers will hesitate to turn down fans and air conditioners
— depriving high ambient temperature equipment of its advantages. Thus, it could take as much as
five years for high temperature servers to penetrate the market, and most of the new machinery
would operate well below 40 C.

User Advice: An ambient 30 C exceeds the current ASHRAE recommendation for A2 equipment
(which is the most common). Gartner has already advised clients interested in saving energy costs
through high ambient temperature operation to target 27 C as the optimal operating point, and to
proceed with caution. First, ensure that the servers purchased are rated for ASHRAE A3 or A4
operation, as stipulated by the vendor. Next, segment those high ambient temperature servers into
hot or cold containment areas. Bring up individual equipment racks on a schedule, and wait three
months before each addition.

Use computational fluid dynamics or data center infrastructure management analysis tools to
identify hot spots and visualize hot-aisle/cold-aisle transfers. Those concerned about device failures
should take care to distinguish temperature-related failures from other failure modes.

Business Impact: Cooling can make up as much as 40% to 60% of a data center's electricity
costs. As part of the ongoing effort to reduce energy consumption, data center managers, as well
as server technologists, are starting to lobby for higher-temperature operation. Turning down the
room-size fans and chillers, they suggest, can save as much as 4% of current data center energy
costs for every degree Celsius the servers' ambient temperature increases. For a midsize data
center drawing 935 kW, for example, the per-degree savings (assuming 10 cents per kilowatt-hour
and 50% loading) could be as much as $16,380 per year.
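
As a sanity check on that figure, the arithmetic can be reproduced directly; the short Python sketch
below simply restates the assumptions quoted above (935 kW, 50% loading, 10 cents per kilowatt-hour,
roughly 4% savings per degree) and is illustrative only.

    # Reproduces the per-degree savings estimate quoted above; all inputs are the
    # article's illustrative assumptions, not measured values.
    facility_kw = 935                  # midsize data center draw
    loading = 0.50                     # 50% loading
    rate_per_kwh = 0.10                # $0.10 per kWh
    hours_per_year = 8760

    annual_energy_cost = facility_kw * loading * hours_per_year * rate_per_kwh
    savings_per_degree = 0.04 * annual_energy_cost    # ~4% per degree Celsius

    print(round(annual_energy_cost))   # ~409,530 per year
    print(round(savings_per_degree))   # ~16,381, i.e., roughly the $16,380 cited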

Benefit Rating: High

Market Penetration: 1% to 5% of target audience

Maturity: Emerging

Sample Vendors: Dell; IBM; Intel

Recommended Reading: "How to Run the Data Center Effectively at High Temperatures"

Servers Using Flash Memory as Additional Memory Type


Analysis By: Carl Claunch

Definition: Flash is a unique processor memory technology, offering the potential for large
configurations of nonvolatile, directly addressable data locations at near-RAM speed. Flash often
masquerades as a solid-state drive (SSD), appearing as a disk device to the file system, but this
analysis covers flash use only as directly addressable locations, not for read/write-style file system
requests. This new memory type can unlock powerful new capabilities and deliver dramatic
additional performance for systems and applications.

Position and Adoption Speed Justification: In most cases, flash memory is still packaged to
appear as a rotating disk drive. This allows it to be added to a system easily and transparently,
but requires that it be managed as one or more disk volumes, usually with file systems installed.
Because this new type of storage is considerably more expensive per byte than a rotating disk, any
data placed in it should be carefully selected to deliver maximum performance value. Accessing flash
as a disk drive also means that the precious space is consumed by volume, file system and file
overhead. A file is either entirely on the drive or not on it, even though only certain fields or
records in the file may deliver the performance benefit.

Using additional RAM to improve the effective access speed of disks is also a crude tool, because
the disk cache and the pages sitting in physical RAM are selected by generic, least-recently-used
algorithms, not by insight into the performance criticality and intended use of particular data. One
cannot simply point to parts of files and force them to be kept in RAM, nor would doing so deliver
all the benefits of flash: because RAM is volatile, updates must still drive physical input/output
(I/O) to backing disks to protect the changed data, whereas flash is fully persistent.

To exploit the value of flash as a new type of memory, the server must implement space for the
flash chips and expand its architecture to provide access. No standard yet exists for this, which
hampers the speed of adoption. The operating systems must be aware of the new category of
server memory; provide some programmatic interface to allow other parts of the operating system,
middleware and applications to exploit this space; and develop an arbitration method for the
eventuality that running software will cumulatively request more space than is physically configured
on the server.
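
No standard programming interface exists yet, so any concrete example is necessarily speculative.
The Python sketch below uses an ordinary memory-mapped file as a rough stand-in for byte-addressable,
persistent flash: it shows the style of access (placing only a few hot fields in the fast, persistent
region), while a true flash-as-memory interface would bypass the file system layer entirely. The path
and field layout are invented for illustration.

    import mmap
    import struct

    # Stand-in only: a memory-mapped file on a flash/SSD volume approximates
    # byte-addressable, persistent access; real flash-as-memory support would not
    # go through the file system at all.
    PATH = "/tmp/hot_fields.bin"       # hypothetical location on flash media
    SIZE = 4096                        # keep only the performance-critical fields here

    with open(PATH, "w+b") as f:
        f.truncate(SIZE)
        region = mmap.mmap(f.fileno(), SIZE)
        struct.pack_into("<Q", region, 0, 42)         # update one hot counter in place
        (counter,) = struct.unpack_from("<Q", region, 0)
        region.flush()                                # persist the change
        region.close()

    print(counter)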

Middleware that has key data elements for which movement into flash memory could improve
performance must be able to find and make use of the memory. Similarly, an application that has
limited data that would benefit from the speed and persistence of flash would need to enable that
data in its code.

For these reasons, the benefits and buzz around this technology are still building, and maturity is
quite low. Yet, the potential is apparent, even from the exploitation of flash in its cruder SSD form.
Thus, the competitive advantage available to software makers will be the main driver pushing them to
exploit flash as a unique memory type. Once flash memory has reached the tipping point where a
consistent access interface exists, multiple server vendors provide it as an option, and the key
Windows and Linux operating systems support it, adoption will accelerate. Look for this adoption to
take off in about two years.

There are a few examples of early use of this strategy. For example, Oracle Exadata v.2 and
Exadata Smart Flash Cache implement software that is aware of flash as a unique nondisk, non-
RAM level of memory to attain very large performance gains. Fusion-io offers a software
development kit (ioMemory SDK) with application programming interfaces that enables memory-
style access and unlocks unique capabilities that are unavailable with a traditional solid state
storage approach.

User Advice: Companies facing serious performance challenges with existing systems should look
to the potential of flash memory to resolve these problems, but recognize that limited support exists
for flash as an explicit server memory type. SSDs based on flash memory may provide an adequate
performance boost at an acceptable price for some companies, even if it's not as optimal as explicit
flash memory.

When middleware, applications and server operating systems that support this technology are in
use, evaluate potential use cases on a test server. Update configuration planning to include
explicit flash memory alongside the other elements (such as processor numbers and speeds, RAM size,
and I/O types) used to specify the server.

Business Impact: Improving the performance of applications or systems would be more successful
and cost-effective if the selection of the data to place in flash memory can be done in an intelligent
and application-aware way. This would allow the targeting of just the minimum amount of data
needed to yield the desired higher performance, rather than forcing entire files, file systems or disk
volumes to sit on one type of device or the other with concomitant higher costs. Alternatively, flash
memory can reduce the number of more-expensive processors or the amount of RAM that would
otherwise be needed. It enables the use of slower, less-expensive processors. Why shrink the
computing part of a transaction excessively with overly fast processors when a reduction in delays
waiting for data can equally improve response times? When a minor investment in flash memory as
a server option can noticeably improve the behavior of important middleware or systems, it can also
reduce delays for the end users of the systems, improving business productivity. Reduced delays in
meeting customers' needs will contribute to an increase in their overall satisfaction.

Benefit Rating: High

Market Penetration: Less than 1% of target audience

Maturity: Embryonic

Sample Vendors: Adobe; Bull; Fusion-io; HP; IBM; Microsoft; Oracle; Violin Memory

Recommended Reading: "Oracle Exadata Database Machine Deployments Meet Expectations and Drive
Interest"

Extreme Low-Energy Servers


Analysis By: Carl Claunch

Definition: Low-energy servers are systems constructed around processor types (e.g., ARM
processors) that were originally designed for very low-power environments, typically in devices like
smartphones or in objects with embedded processors. Low-energy server systems have
total system energy requirements running from about 5 watts per core at the high end down to
fractions of a watt per core.

Position and Adoption Speed Justification: Prospective server vendors are ratcheting up their PR
efforts, and some are actively engaging with potential customers or supporting early customers.
Many in the press are intrigued by dramatic speculation that this trend might disrupt Intel and
AMD's dominance of the server market. This scenario speaks of a major shift in the market toward
new low-energy server designs, and also postulates the disruption of the industry power balance,
possibly replacing the big providers of chips and servers with a new slate of suppliers.

As speculative and extreme as this may be, clients with major investments in x86 vendors and
products want to understand to what extent even some of this scenario may come to pass, and
whether their existing server software would have to be converted to run on this new category of
servers, which the scenario posits are predominantly non-x86 machines. Ecosystem readiness is
mixed; for example, Microsoft Windows Server does not yet support non-x86-based systems, while
Linux distributions offer some support.

The processor types used in this emerging category of low-energy servers currently include ARM
from various semiconductor firms, Intel's Atom and Tilera's TILE-Gx; however, many other
processor architectures (such as MIPS, SPARC, SuperH, 68k and PowerPC) potentially could be
adopted to build extreme low-energy servers. These processors are usually built as a system-on-
chip (SoC), which integrates circuitry on one chip alongside the processors that are implemented by
additional, separate support chips in traditional PC and server products. The SoC leverages its
included support logic across several cores on the silicon die to further drive down power use.

These systems are quite different from systems using low-power models of the processor types
traditionally used in servers and PCs (for example, as with the Intel Xeon family, where the lowest-
power versions currently scale down to about 10 watts per core; note that this figure does not
include the power required for the separate support chips needed on traditional servers, thus the
wattage of a processor chip and the wattage of net power per core of the low-energy systems
should not be directly compared).

Significant announcements this year included HP's Project Moonshot updates and sample deliveries of
64-bit ARM technology, such as AppliedMicro's X-Gene, reflecting growing activity around this
emerging category and increasingly public engagement by other semiconductor makers, such as
Texas Instruments.

Clients want to know if existing systems might be made prematurely obsolete, and if they will need
to change strategies for their infrastructures. However, the reality is that these lower-energy server
alternatives are suitable for a specific and narrow set of cases (essentially, workloads that have
quite light processor requirements in relation to their memory and input/output needs and, critically,
are able to nearly linearly scale in parallel across many more processors than are used running on
traditional x86 servers). The market impact of these alternative servers will grow slowly and remain
quite modest for some time to come; initial installations will mostly be in large Web-based data
centers, in turn limiting the number of existing server users who will be affected.

User Advice: Clients for whom all the following are true should look further at low-energy servers
now:

■ A major share of the work running in the data center has the light CPU requirements that suit
these designs.
■ The capabilities and culture of your organization are compatible with undertaking leading-edge
or even bleeding-edge IT projects.
■ The anticipated result of deploying on low-energy servers displaces or avoids purchase of
substantial numbers of traditional servers.
■ The resources, spare talent and management attention exist to take on a demanding new
project, along with the ability to rapidly assess feasibility and costs.
■ The code is portable to this platform via recompiling; installing a targeted, alternate version from
the software provider; or running under a provided interpreter (e.g., Java or scripts).
■ The product is sold and adequately supported in the area where it will be deployed.

Clients who face severe constraints on growth or profitable operation of their businesses
(constraints that might be resolved by a dramatic reduction in energy costs or a dramatic increase in
capacity inside the same data center envelope) should undertake a quick assessment of the
challenges and effort involved in deploying low-energy servers. They should compare those costs,
risks and challenges with the feasible alternatives that could address the constraints they face.
Further investigation and development of a project plan is warranted only when all alternatives are
equally or more risky, equally or more costly to implement, and equally or more demanding of
internal expertise; if not, the risks are out of balance with the potential rewards, and the other,
better-understood alternatives should be pursued.

Clients who do not match either of the sets of conditions noted above should periodically revisit the
low-energy server trend to judge when its maturity reaches a point that would justify a deeper
study. For most of Gartner's clients, the time for such study is not now.

Business Impact: For enterprises with a good technical and organizational fit, these alternative
servers may provide substantial relief to those burdened by large energy expenditures. Similarly,
these servers can support considerably more capacity in an existing data center space, avoiding the
need for construction or relocation to other data centers, but, again, only if one's situation is a
very good fit for this alternative server approach.
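
To make the scale of such relief concrete, the hedged sketch below estimates annual electricity cost
per 1,000 cores from a whole-system watts-per-core figure; the PUE of 1.6 and the tariff of 10 cents
per kilowatt-hour are assumptions chosen for illustration, and, as noted earlier, whole-system figures
for low-energy servers should not be compared directly with chip-only wattages for traditional
processors.

    # Hedged, order-of-magnitude estimate only; inputs are illustrative assumptions.
    def annual_energy_cost(system_watts_per_core, cores=1000, pue=1.6, usd_per_kwh=0.10):
        kwh_per_year = system_watts_per_core * cores / 1000.0 * 8760 * pue
        return kwh_per_year * usd_per_kwh

    # Whole-system figures cited above for extreme low-energy designs range from
    # fractions of a watt to about 5 watts per core.
    print(round(annual_energy_cost(1.0)))   # ~1,402 USD per year per 1,000 cores
    print(round(annual_energy_cost(5.0)))   # ~7,008 USD per year per 1,000 cores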

Users with a perfect fit for these new server types may also realize a greener outcome, in terms
of the energy required to support their businesses, than with traditional, less energy-frugal
machines, but this will materialize only if the actual net energy used for the user's real workload is
lower than that achievable on the most efficient traditional server deployments.

For the remainder of potential users of these servers, or those who can use them for only a small
part of their total IT needs, the benefits will be, at most, a modest improvement in cost and a token
improvement in ecological factors.

Benefit Rating: High

Market Penetration: Less than 1% of target audience

Maturity: Emerging

Sample Vendors: AMD; Calxeda; Dell; HP; Nvidia; Quanta Computer; Super Micro Computer
(Supermicro)

Recommended Reading: "Cool Vendors in High-Performance Computing and Extreme-Low-


Energy Servers, 2012"

"Extreme Low-Energy Servers Aren't Built Only With ARM and Atom"

Heterogeneous Architectures
Analysis By: Carl Claunch

Definition: This computing system architecture has processors that use more than one instruction
set, all of which share a single memory. This requires programs to be written differently for each of
the dissimilar instruction sets. The goal is to offer substantially better performance or cost by
devoting the appropriate parts of the application to machine designs optimized for specific types of
computing.

Position and Adoption Speed Justification: This concept has been established in specialized,
embedded computing systems. It is used in areas such as telecommunications and graphics
processing, where separate machine architectures provide substantial benefits to relevant parts of
the overall task. This yields higher overall performance than if a homogeneous system were
designed.

The operating system must manage the collection of machine elements as a single system while
dispatching the independent queues of work written to each instruction set. Heterogeneous
architectures are available in partial implementations that use graphics cards (general-purpose
GPUs [GPGPUs] or GPU computing), field-programmable gate arrays (FPGAs) and other
components with direct-access drivers, in lieu of full support from the operating system. The
processors can all be targeted to run parts of an application by creating appropriate code. If the
processors are restricted to specific functions, such as XML processing, then they are covered in
the profile of offload engines instead.
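
As a purely conceptual illustration of dispatching separate work queues to dissimilar processing
elements, the Python sketch below uses two worker threads as stand-ins for the general-purpose cores
and an accelerator; the "device" names and work functions are invented, and real heterogeneous
systems dispatch code compiled for each instruction set rather than Python functions.

    from queue import Queue
    from threading import Thread

    # Hypothetical stand-ins: one routine per "instruction set", written separately.
    def scalar_work(x):
        return x + 1                     # work suited to the general-purpose cores

    def vector_work(values):
        return [v * v for v in values]   # data-parallel work suited to an accelerator

    queues = {"cpu": Queue(), "accelerator": Queue()}
    results = []

    def worker(device, fn):
        # Each processing element drains its own queue of work items.
        while True:
            item = queues[device].get()
            if item is None:
                break
            results.append((device, fn(item)))

    threads = [Thread(target=worker, args=("cpu", scalar_work)),
               Thread(target=worker, args=("accelerator", vector_work))]
    for t in threads:
        t.start()
    queues["cpu"].put(41)
    queues["accelerator"].put([1, 2, 3])
    for q in queues.values():
        q.put(None)                      # signal each worker to stop
    for t in threads:
        t.join()
    print(results)                       # e.g., [('cpu', 42), ('accelerator', [1, 4, 9])]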

The market for high-performance computing (HPC) has extreme-scale requirements, where a
heterogeneous design could reach performance levels well above those expected from more
traditional alternatives. Vendors and users in the HPC market are undertaking early work to produce
a general-purpose, heterogeneous server, with high activity from both startup and established
vendors. The number of vendors building heterogeneous models, the number of client inquiries
related to heterogeneous systems, and the prominence of such designs at industry events and trade
shows have all
graphics cards from Nvidia and AMD, and more choices are expected as a result of advances by
the processor makers that integrate heterogeneous processors on a single silicon chip.

User Advice: In the near term, users with extreme requirements, those that are not satisfied with
the expected future capabilities of homogeneous systems and those that can justify the risk and
investment required to experiment with heterogeneous systems should consider projects based on
this technology. Others should keep an eye on progress in this area, looking more deeply at the
concept once it has moved to a more mature state and is further along the Hype Cycle. Systems
with mixed architectures are likely to cost more than monolithic x86 servers. The benefits from
increased performance or lower total numbers of servers should be enough to offset the added per-
system cost, as well as the added complexity of dealing with heterogeneous machines and
software.

Business Impact: Heterogeneous architectures may offer previously unattainable levels of
performance for HPC-like workloads. However, they will eventually also become a mainstream design
for servers in the general commercial market.

Benefit Rating: High

Market Penetration: Less than 1% of target audience

Maturity: Emerging

Sample Vendors: AMD; Cray; HP; IBM; Intel; Nvidia; SGI

Recommended Reading: "Heterogeneous Systems Are In Your Future"

Virtual Machine Resilience


Analysis By: Carl Claunch

Definition: Virtual machine (VM) resilience evolves from the current capabilities of server
virtualization products. The target server VM is continuously kept as an up-to-date copy of the
running VM on the source and is always an instant away from migrating. If a failure occurs on the
source server, the VM keeps running, but now on the target server. The power of this idea is that
VMs can be set to almost any degree of high availability, using a succession of techniques in the
same virtualization management software, up to and including fault tolerance.

Position and Adoption Speed Justification: This virtualization capability is somewhere among
investigation, research project, early development and shipping, with the exact status differing
across the particular virtualization product makers. The idea is well-established, which constituted
the trigger. As it appears in additional beta versions, in road maps of other products and in
proof-of-concept demonstrations by other providers, VM resilience technology is nearly at the Peak of
Inflated Expectations. Because substantial hard engineering has to occur to make VM resilience
widespread and complete, we evaluate it as an emerging technology. Given this capability's
potential to differentiate virtualization products from those that have not yet delivered the
functionality, strong competitive pressures will drive it into many vendors' plans.

During the movement process of existing live migration capabilities, there is a moment when the
target server establishes an exact copy of the VM from the source server, just before the logical
switch is thrown. The running VM finishes some instruction on the source server, then the next
executed instruction will be on the target server, and the source server VM vanishes. At relatively
low availability levels, failure of a server machine would cause the loss of the VM running on it; but
when the failure is detected, recovery is accomplished by starting up a new instance of the VM.
Increasing the desired availability level can activate the semi-perpetual live migration, thus allowing
VMs to continue running through a "last second" migration. The key to this success is that all the
prep work for a migration is done in advance, keeping the VMs poised forever at the instant before
migration completes.

Instead of using multiple incompatible mechanisms for different levels of
availability, perhaps in implementing a failover clustering software solution for one level, and
purchasing more redundant and more-expensive physical servers for a different level, the IT
organization will have a single product providing all levels. Training is simplified, and VMs can be
shifted rapidly to different availability guarantees. At the end of the month, work on a VM might
become extremely critical, but doesn't ordinarily warrant a traditional pair of machines with cluster
software. With this technique, a change of a parameter can activate a new level of availability almost
immediately, with little effort and no conversion project. As with any use of live migration,
requirements exist for the source and target servers for access to the same data and network
address range.
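
The mechanism can be pictured as a loop that keeps the standby copy one step behind the running VM,
so failover amounts to throwing the final switch. The Python sketch below is a conceptual illustration
only, under the assumption that VM state fits in a dictionary; real hypervisors track and ship dirty
memory pages inside the virtualization layer.

    import copy

    # Conceptual model of "semi-perpetual" live migration; not a hypervisor API.
    class ResilientVM:
        def __init__(self, state):
            self.source_state = state
            self.target_state = copy.deepcopy(state)    # standby copy, kept current

        def run_step(self, update):
            self.source_state.update(update)            # VM executes on the source
            self.target_state = copy.deepcopy(self.source_state)  # continuous sync

        def failover(self):
            # Source host lost: promote the standby; the VM "keeps running".
            self.source_state = self.target_state
            return self.source_state

    vm = ResilientVM({"counter": 0})
    vm.run_step({"counter": 1})
    print(vm.failover())                                # {'counter': 1}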

Extremely high levels of availability will likely impose noticeable overhead, ensuring the source and
target copies of the VM are kept virtually in lockstep. This would slow down the workload,
compared with it running under a lower level of availability. Most products will institute this in a
stepwise fashion from lower to higher-availability guarantees as they develop, refine and prove the
technology at each step.

User Advice: Update your strategy for supporting availability needs in light of the pending arrival of
VM resilience. Determine whether and when to migrate away from alternative mechanisms, such as
failover cluster software or high-availability server hardware.

When your virtualization product provider makes this failover capability available, and if you are
contemplating its use, first run an extensive pilot. Verify the behavior, operational implications
and compatibility of the new mechanism with the workloads and VMs that may use it.

Business Impact: Virtualization simplifies the delivery of high availability (HA) by leveraging a
single mechanism that manages all tiers of availability, lowering the complexity of the operational
environment. Since errors stemming from the complexity of failover cluster systems can often cause
as much downtime as the redundancy would otherwise prevent, moving to a single, simple-to-administer
mechanism could improve the net availability experienced by the users of the IT systems. The ability
to change availability levels rapidly and frequently may better align the delivered uptime with the
needs of the business while keeping IT expenses in check. This functionality offers a way to step into
HA with less investment and impact than more traditional HA solutions or more robust hardware. Also,
this capability can be added to HA/disaster recovery (DR) systems, giving a live migration capability
suited to a subset of the outages that might be encountered.

Benefit Rating: Moderate

Market Penetration: 1% to 5% of target audience

Maturity: Emerging

Sample Vendors: Citrix; HP; IBM; Microsoft; Oracle; Stratus Technologies; VMware

Server Digital Power Module Management


Analysis By: Jeffrey Hewitt; Steve Ohr

Definition: Server digital power module management supplements an analog voltage regulator and
puts server power supplies (or many parts of the power distribution system) under a microcontroller
— and, ultimately, software monitoring and control. Digital power management allows multiple
power supply lines to be adjusted for maximum energy transfer efficiency and, more significantly,
enables two-way communication between power sources (such as utility companies) and power
consumers (such as computer power supplies and cooling systems).

Position and Adoption Speed Justification: The ability to gain visibility into the power
consumption of various server components and to increase server power efficiencies are two main
advantages of server digital power module management. Set against these advantages, digital
server power management increases costs, slows voltage regulation response, and makes only
fractional improvements in overall regulator efficiency. Despite these drawbacks, digital power
module management continues to grow in hardware platforms with complex power management
systems that will benefit from software monitoring and control and are relatively insensitive to costs
when compared with less-expensive devices. These include large computer systems such as
servers, communications switching stations and routers, energy management systems for smart
buildings, and some high-end consumer appliances.

Digital power module management in servers is actually becoming more common on server
motherboards and rack-mounted power supplies. The implementation of digital power module
management does not necessarily track the cost of electricity. The cost of implementing a digitally
controlled power architecture and its ROI, in terms of reduced electricity bills, are realized in
different parts of the organization. The data center manager may find the energy savings from the
use of natural airflow between equipment racks more substantial than any added efficiencies at the
server motherboard level. Conversely, the IT manager, concerned with boosting compute capability
without increasing costs, may fail to appreciate the incremental advantages of a digital power
module management scheme — one that may raise energy transfer efficiency 1% or 2%, while
adding several dollars to the cost of a server.

Rather, the use of digital power control modules with a Power Management Bus (PMBus) or I2C
serial interface coincides with a data center management movement toward getting greater levels of
granularity in power consumption monitoring. The adoption of power usage effectiveness
measurement and reporting went a long way toward raising the consciousness of IT equipment
managers about how much power was being consumed and which parts of the system were
consuming it. The use of digital power modules with digital interfaces allows equipment managers
to get separate reports on power consumption from different components of a server — the CPU,
memory banks and high-speed input/output (I/O).
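
The kind of component-level telemetry described above can be illustrated with a short, hedged sketch;
read_rail_power() below is a hypothetical stand-in for a PMBus/I2C read of a digitally controlled
regulator, and the rails and values are invented, since real deployments rely on vendor firmware and
DCIM tooling.

    import random
    import time

    RAILS = ["cpu", "memory", "io"]          # per-component rails of interest

    def read_rail_power(rail):
        # Hypothetical stand-in for a PMBus/I2C power read; returns synthetic watts.
        nominal = {"cpu": 95.0, "memory": 30.0, "io": 15.0}[rail]
        return nominal * random.uniform(0.9, 1.1)

    def poll_server_power():
        readings = {rail: round(read_rail_power(rail), 1) for rail in RAILS}
        readings["total"] = round(sum(readings.values()), 1)
        return readings

    for _ in range(3):
        print(poll_server_power())           # per-component data a DCIM tool could ingest
        time.sleep(1)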

While data center infrastructure management (DCIM) software tools can enable the visualization of
data center hot spots, there is still a bit of controversy as to how granular the reporting and control
needs to be. The DCIM tools can identify (with color coding) hot spots among the racks, and enable
load shifting between IT equipment racks. The visualization, some suggest, only needs to be at the
rack level. Where granularity extends to the server board level, many DCIM toolmakers offer their
own resource tagging mechanisms (such as RFID tags), which identify a computing resource and
report on its condition. Looking at the heat generated and power consumed by a given server
component, these suppliers might argue, would be overkill.

However, a number of semiconductor and power module manufacturers, including Texas Instruments,
Maxim Integrated Products, Intersil and Exar, are reporting increasing acceptance of their digitally
controlled voltage regulators in servers. Sales of these components have moved beyond prototype
consumption and pilot production.

HP, as an example, offers a server power monitoring feature called Intelligent Power Discovery.
When a server is installed into a rack, it is immediately assigned an Internet Protocol (IP) address
through software tools, from which its overall power consumption can be monitored. HP hopes to
offer a greater degree of power monitoring granularity, with what it calls a Sea of Sensors
embedded on the server card. HP does not give any details on the embedded monitoring devices,
but its competitor, Dell, supplies servers with PMBus components on board. The incorporation of
these digital power module management elements helps push this technology forward along the
acceptance curve.

Still, acceptance of digital power module management devices is contingent upon a software
reporting infrastructure. While power chip makers, module makers and larger power-supply makers
are each attempting to supply graphical user interfaces as management tools, many of these tools
(describing phase and frequency relationships for data center pulse trains) are geared toward
engineers and power management experts, and not geared toward IT system managers and data
center facilities managers who must make daily decisions about computing and cooling resource
allocations. But the software increasingly links to DCIM visualization tools, which offer the IT system
manager and the data center facilities manager easy-to-understand representations of the power
being consumed by the servers under a variety of load conditions. One recommendation would be
to link the programming interfaces for digital power management modules to one of the growing
number of DCIM software tools that offer rack- and card-level views of power consumption under
various computing load conditions.

The current generation of digital module controllers offers engineers a view of electrical currents,
voltages and phase relationships between different power lines within the server. But this doesn't
yet tell facilities managers what they need to know about the costs of cooling and air conditioning,
power usage effectiveness ratings, or the efficiency with which power is consumed. There is also
some dissatisfaction with the current standard for the digital power module management pathway,
the PMBus, which is now endorsed by 43 semiconductor, module and embedded power supply
makers. Widespread PMBus adoption — coupled with a high-level user interface — will be the most
likely factor to push server digital power module management further along the Hype Cycle.

User Advice: Consider digital power module management when software tools (such as DCIM
tools) mature enough to allow semiconductor-level power reporting, and the cost of power justifies
the implementation and use of digital power module management solutions. The collective energy
savings from digital power management will not match improvements in cooling and air
conditioning, but they must be visible on the watt-meter scale, not orders of magnitude apart.

Business Impact: Smaller installations of servers can benefit from the implementation of digital
power module management, despite the higher implementation costs. Larger data centers and data
center containers with racks of equipment installed are more likely to derive benefits from digital
power module management solutions because of the relatively higher return in those situations from
power cost savings — and the ability to read power consumption hot spots with the use of DCIM
visualization tools.

Benefit Rating: Moderate

Market Penetration: 5% to 20% of target audience

Maturity: Adolescent

Sample Vendors: Delta Electronics; Emerson Network Power; Exar; Intersil; Maxim Integrated
Products; Texas Instruments

Recommended Reading: "Emerging Technology Analysis: Digital Power Module Management in


Servers, Server Technologies, 2011"

"Competitive Landscape: Power Management IC and Power Semiconductor Vendors, 2012"

"Market Trends: Embedded Monitoring Provides Improved Handles for Measuring Data Center
Power Consumption"

Cloud-Based Grid Computing


Analysis By: Carl Claunch

Definition: Cloud-based grid computing involves using computers in a public cloud service, or in a
hybrid of public cloud and internally owned computers, to collectively accomplish large tasks, such as
derivative risk analyses, candidate drug screenings and complex simulations. We do not include
grids that use only private cloud or traditional in-house servers; those are treated as grid computing
that does not use public cloud computers.

Position and Adoption Speed Justification: Grid computing using public cloud resources is an
extension of the general use of grids. Because of the new issues introduced by public cloud
services — including the lack of appropriate software licensing terms, challenges dealing with data
related to computations, security and privacy concerns, and deriving an adequate chargeback
model — cloud-based grid computing is considerably earlier on the Hype Cycle than its more-
mature incarnation, which runs wholly within enterprise and partner walls. Grid computing continues
to be interesting and closely watched by the high-performance computing (HPC) market, although
the pace of experimentation and productive use is still low. Grid computing was the most common
topic of discussion among HPC users in 2012, and is the basis of many Gartner client inquiries.

When grid computing was less mature and perceived as risky, those that moved forward tended to
have situations where the benefits were huge, warranting the risks of very early adoption. Now that
it is perceived as less risky and appeals to many more in the market, the bar to moving forward is
lower; the average benefit will be high, and a smattering of organizations will use this for
transformational gains.

User Advice: Conceptually, grid computing can be used in two ways. It can help lower the costs to
process a fixed amount of work; or, more importantly, it can offer a business advantage by
accomplishing what wasn't feasible with more-traditional approaches. Often, this means increasing
the accuracy of a model, producing results in an unprecedentedly short time, looking for
interactions earlier, reducing the time it takes to search libraries of compounds as drug candidates
or enabling new business models.

When a business advantage can be gained by scaling up computing-intensive or data-intensive
processing in parallel, add grid computing to the list of potential implementation approaches. When
the objective of sourcing the computing resources from a public cloud provider is to access
additional power that can't be justified in a traditional long-term acquisition model, add this as an
option. However, be wary of the many unique issues that arise in this deployment model. When the
objectives are mainly to reduce costs, compared with traditional sourcing of the computers for a
more fixed and long-term workload level, consider alternatives (such as an in-house grid using a
traditionally acquired computer) that are more mature and have fewer issues to overcome.

Public cloud resources offer the ability to dynamically scale to meet varying computing needs on
short notice, often with a cost model that is appropriately short term or that charges only for
usage. The sample vendors listed below either offer public cloud computing services (such as
Amazon Elastic Compute Cloud [Amazon EC2] or Microsoft Azure) or sell software that enables and
supports access to public clouds or to hybrids of public and private cloud machines.
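
To make the sourcing decision concrete, the following illustrative sketch (in Python; all figures,
rates and function names are assumptions, not any provider's pricing or API) estimates how many
grid nodes a deadline implies, how many would need to come from a public cloud, and what a
usage-based charge might look like:

# Minimal sketch: decide whether to burst a grid workload to public cloud.
# All figures and names are illustrative assumptions, not vendor pricing.

def nodes_needed(total_core_hours, deadline_hours, cores_per_node):
    """Number of nodes required to finish the work before the deadline."""
    core_hours_per_node = deadline_hours * cores_per_node
    return -(-total_core_hours // core_hours_per_node)  # ceiling division

def burst_plan(total_core_hours, deadline_hours, cores_per_node,
               on_prem_nodes, cloud_node_hourly_rate):
    required = nodes_needed(total_core_hours, deadline_hours, cores_per_node)
    cloud_nodes = max(0, required - on_prem_nodes)
    # Usage-based charge: pay only for the hours the extra nodes run.
    estimated_charge = cloud_nodes * deadline_hours * cloud_node_hourly_rate
    return required, cloud_nodes, estimated_charge

if __name__ == "__main__":
    # 200,000 core-hours of risk analysis due in 48 hours, 16-core nodes,
    # 150 nodes available in-house, $0.80 per node-hour assumed.
    required, cloud_nodes, charge = burst_plan(200_000, 48, 16, 150, 0.80)
    print(f"nodes required: {required}, cloud nodes to rent: {cloud_nodes}, "
          f"estimated usage charge: ${charge:,.2f}")

The structure of the calculation, rather than the placeholder numbers, is the point: the deadline
drives the node count, and the usage-based charge applies only to the burst capacity actually used.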

Business Impact: Investment analysis, drug discovery, design simulation and verification, actuarial
modeling, crash simulation and extreme business intelligence tasks are areas in which grid
computing may provide a business advantage. The potential to deal with wide swings in compute
requirements or short-term projects using a cloud provider to deliver a reasonable cost structure is
the main reason cloud-based grid computing is soaring up the Hype Cycle.

Benefit Rating: High

Market Penetration: 1% to 5% of target audience

Maturity: Adolescent

Sample Vendors: Amazon; IBM; Microsoft; Penguin Computing; SGI

Processor Emulation
Analysis By: Andrew Butler; Philip Dawson; Mike Chuba

Definition: Processor emulation is a form of virtualization technology that allows software compiled
for one processor/operating system to run on a system with a different processor/operating system,
without any source code or binary changes. This is done by dynamically translating processor
instructions and operating system calls as an application is running.
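
As a deliberately simplified illustration of the translation principle, the toy Python interpreter
below maps each opcode of an imaginary guest instruction set to an equivalent host operation. It is
a conceptual sketch only; commercial emulators translate blocks of guest code into native host
instructions dynamically and cache the results rather than dispatching per instruction.

# Toy illustration of instruction translation: each guest opcode of an
# imaginary ISA is mapped to an equivalent host-side Python operation.

def run_guest(program):
    regs = {"r0": 0, "r1": 0, "r2": 0}
    pc = 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "LOAD":          # LOAD reg, immediate
            regs[args[0]] = args[1]
        elif op == "ADD":         # ADD dst, src  -> dst = dst + src
            regs[args[0]] += regs[args[1]]
        elif op == "SYSCALL":     # guest OS call mapped to a host call
            print("guest write:", regs["r0"])
        else:
            raise ValueError(f"unknown guest opcode {op}")
        pc += 1
    return regs

if __name__ == "__main__":
    guest_program = [
        ("LOAD", "r1", 40),
        ("LOAD", "r2", 2),
        ("ADD", "r1", "r2"),
        ("LOAD", "r0", 42),
        ("SYSCALL",),
    ]
    run_guest(guest_program)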

Position and Adoption Speed Justification: Processor emulation is not new; it is typically deployed
to keep legacy applications written for older architectures running by allowing their instruction
sets to execute on newer platforms. The capability is deployed through two distinct use models:
creating low-end models of a mainframe line that coexist with the native architecture on
higher-performance models, or serving as a technology replacement strategy for a product line that
is the target of a competitive attack or migration campaign. It could also be used to create
high-end models of x86 platforms for workloads that have scaled beyond the confines of the x86
architecture.

Non-IBM mainframe vendors like Fujitsu, Unisys and Bull have deployed the former strategy to help
sustain the viability of their mainframe strategies and provide their customers with smoother
offramps for eventual migration. Stromasys represents the most mature example of an independent
vendor that has established itself as the clear leader in Alpha and VAX emulation, with x86 and
Linux (or Windows) as supporting platform technologies. Stromasys is extending its coverage to
include the ability to emulate SPARC and Precision Architecture Reduced Instruction Set Computer
(PA-RISC) architectures, including a new focus on HP e3000 migration services. The company is
working closely with HP to accelerate acceptance of this strategy for HP e3000 users who want to
preserve their workloads.

The use of any emulator logically creates a new layer that software must negotiate, leading to some
performance degradation. However, the systems that are usually being replaced are generally old
and outpaced by the new hardware that has superseded them; therefore, application performance
of the migrated workload is rarely an issue.

User Advice: Software vendor certification and support is always the most significant challenge for
processor emulation, and organizations should validate that support or acknowledge the risk of
service interruptions when deploying processor emulation for packaged workloads.

Where applications are more portable (such as Java, C++ and database management systems
[DBMSs]), the business case for processor emulation is weaker. Similarly, where a set of legacy
workloads is so obsolete that the applications need imminent re-engineering, the argument for
maintaining the legacy workload experience is diminished.

Organizations also need to consider the potential impact of processor emulation on data structures.
For example, emulating a RISC workload on an x86 processor has traditionally required a shift from
big-endian (ordering individually addressable subunits like words, bytes or bits within a longer data
word stored in external memory) to little-endian data structures. This is a one-time effort, but still
has to be factored into migration planning. However, it is not an issue for all processor emulation
solutions. For example, when emulating a SPARC workload on an x86 processor, the Stromasys
emulator doesn't require conversion; rather, it adapts the original hardware architecture so that
writes and queries all execute correctly in their original data formats.
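
The byte-order issue can be shown in a few lines of Python: the same four bytes represent different
integers depending on whether they are read big-endian or little-endian, which is why either a
one-time data conversion or format-preserving emulation (as described above) is needed.

import struct

# The same four bytes interpreted under the two byte orders.
raw = bytes([0x00, 0x00, 0x01, 0x02])

big    = int.from_bytes(raw, "big")     # RISC-style big-endian view
little = int.from_bytes(raw, "little")  # x86-style little-endian view
print(big, little)                      # 258 vs. 33619968

# struct offers the same conversion for record-oriented data:
value = 258
as_big_endian    = struct.pack(">I", value)  # b'\x00\x00\x01\x02'
as_little_endian = struct.pack("<I", value)  # b'\x02\x01\x00\x00'
print(as_big_endian, as_little_endian)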

As legacy installed bases dwindle, pressure grows on processor emulation vendors to expand their
addressable markets and attract new business. While much of one emulation technology can be
leveraged to create another, this forces the vendors to invest in new vendor relationships and
programs to gain trust and recognition among new server communities.

Business Impact: Legacy system users are usually conservative buyers who have resisted
arguments for application modernization until there is little choice but to proceed. Because legacy
systems are often running business-critical workloads, the onus falls on the processor emulation
vendor to demonstrate that a new-generation system will deliver comparable uptime and
performance to the system it replaces. The best option for most legacy applications is
modernization or migration to Java Platform, Enterprise Edition (Java EE) or .NET applications. For
legacy applications that cannot easily be modernized, processor emulation can be a good
consideration, but it should rarely be deployed as a long-term solution.

Processor emulation is best seen as a convenient "bridge" technology that is typically used for a
short period — six to 12 months — before being replaced by a natively compiled version of the
application. The key benefit is that it separates the hardware transition and software recompilation
phases to make platform migrations easier for users to manage and control. Processor emulation is
particularly well-suited to legacy environments that have little (or no) exposure to independent
software vendor (ISV) certification challenges. In such situations, it becomes feasible to deploy the
emulation software as a long-term solution where in-house application expertise remains strong.
While still very much a niche market, the use of processor emulation is gradually accelerating as
data centers drive toward rationalization of hardware architectures and operating systems. This is
helping to build maturity, support and acceptance that vendors such as Stromasys are able to
exploit.

Benefit Rating: Moderate

Market Penetration: 1% to 5% of target audience

Maturity: Early mainstream

Sample Vendors: Bull; Fujitsu; HP; IBM; Intel; Stromasys; Unisys

Recommended Reading: "Cool Vendors in the Server Market, 2013"

Sliding Into the Trough

V2P Server Management


Analysis By: Philip Dawson; Andrew Butler

Definition: V2P server management refers to virtualization and OS vendors managing physical
hardware — virtual to physical (V2P) — through the related tools used to manage and monitor the
hardware resources traditionally owned by the server platform vendor. The capabilities, installed
bases and maturity of these vendors differ from those of hardware/system vendors managing virtual
deployments — physical to virtual (P2V).

Position and Adoption Speed Justification: Virtualization vendors with management tools have
been successful in consolidating multiple workloads onto well-managed, highly utilized and heavily
virtualized environments. This has extended well beyond consolidation efforts onto more agile
workload management, high-availability and related migration tools associated with software-
defined anything (SDx). However, virtualization and OS vendors are still unable to touch and fully
manage raw server hardware. Thus, server vendors can drive demand for discrete server
management tools that are largely platform-specific. As a result, V2P hardware management
capabilities will always lag behind those of P2V.

Gartner continues to see workloads held on a virtual machine (VM) that is "parked" at low CPU
utilization and low input/output (I/O) activity. When such a workload becomes active and its
demands increase, it is moved back to a physical server to handle the higher I/O, as necessary.
This has increased the need for tools to manage P2V and V2P migrations, especially in high
availability (HA) and disaster recovery (DR) scenarios. At the same time, running a single large
workload on its own VM, hypervisor and physical server is gaining ground, which weakens the need
for V2P migrations.
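
The parking pattern described above amounts to a placement policy. The sketch below (in Python,
with thresholds chosen purely for illustration and no vendor tooling implied) shows the kind of
decision logic that P2V/V2P management tools automate.

# Minimal sketch of a placement policy for the "parked VM" pattern:
# keep an idle workload virtual, flag a V2P move when sustained CPU or
# I/O demand exceeds thresholds, and a P2V move when demand falls back.
# Thresholds are illustrative assumptions only.

IO_HIGH_MBPS, CPU_HIGH = 400, 0.70   # promote to physical above these
IO_LOW_MBPS,  CPU_LOW  = 50, 0.10    # park back on a VM below these

def placement_decision(is_virtual, cpu_util, io_mbps):
    if is_virtual and (cpu_util > CPU_HIGH or io_mbps > IO_HIGH_MBPS):
        return "V2P"      # workload has outgrown the shared VM host
    if not is_virtual and cpu_util < CPU_LOW and io_mbps < IO_LOW_MBPS:
        return "P2V"      # reclaim the physical server, park the workload
    return "stay"

if __name__ == "__main__":
    print(placement_decision(is_virtual=True,  cpu_util=0.85, io_mbps=520))  # V2P
    print(placement_decision(is_virtual=False, cpu_util=0.05, io_mbps=12))   # P2V
    print(placement_decision(is_virtual=True,  cpu_util=0.20, io_mbps=80))   # stay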

User Advice: Extend the use of virtualization tools beyond agility and workload migration
consolidation, especially in HA and DR cases, as a foundation for SDx. Do not attempt to manage
the physical hardware with virtualization tools alone. Monitor workloads that may require P2V tools,
as well as V2P capabilities. Consider the use of a single workload/VM and server as an alternative to
virtualizing and consolidating complex workloads.

Business Impact: VM, OS and hardware vendors should develop holistic tools and methodologies
that enable P2V and V2P to be managed alongside each other for end-to-end management.

Benefit Rating: High

Market Penetration: 20% to 50% of target audience

Maturity: Adolescent

Sample Vendors: Citrix; Microsoft; Oracle; VMware

Recommended Reading: "Use Sizing, Complexity and Availability Requirements to Determine


Which Workloads to Virtualize"

Fabric-Based Infrastructure
Analysis By: Andrew Butler; George J. Weiss; Donna Scott

Definition: Fabric-based infrastructure (FBI) is an example of an integrated system that is based on
a switched fabric for server-to-server, server-to-LAN and server-to-storage interoperability. FBI is a
natural architectural fit for IT organizations with a vision of a dynamically optimized data center. FBI
differs from fabric-based computing (FBC) as the technology elements are based on current
general-purpose server, storage and networking technologies.

Position and Adoption Speed Justification: The concept of fabrics has emerged during the past
three years as a means to converge separate storage, networking and server technology in the data
center. However, fabrics are really a path toward the highly modular data center platforms that will
emerge in the second half of this decade. "Integrated systems" is an umbrella term that
encompasses both FBI (which is always based on a switched interconnect) and various forms of
appliances. Any integrated system will always include elements of compute, storage and
networking. There are two forms of FBI. Integrated infrastructure systems (IIS) combine hardware
integration (server, storage and network) and management tools, plus (optionally) the operating
system and/or virtualization layer. An integrated stack system (ISS) is an IIS with the additional
integration of a partial or total software stack, turning the system into a quasi-appliance.

User interest in and demand for FBI is partly fueled by vendor enthusiasm and partly by the growing
number of workload scenarios, such as virtualization. This association of the workload to a business
function will change traditional IT from compute-centric toward user, department, consumer and
service orientation. However, many factors influence what "ideal" approach organizations should
take when planning to implement an FBI policy. Meanwhile, alternative architectural policies are
emerging that challenge the whole role of fabrics for some workload situations, leading to a
polarization of the market for legacy infrastructure modernization.

FBI solutions are increasingly touted by vendors as the logical step toward a data center
environment using virtualization management tools and where nearly everything is automated, thus
removing significant labor costs from service delivery. FBI can be considered a path to achieving
these goals, although further technology and market maturation are required. In making these
claims, vendors are seeking to change IT buying behavior (i.e., that purchasers will buy more
preintegrated infrastructure from fewer vendors). The reasons for this are twofold:

1. Infrastructure vendors have been losing influence to software vendors for years, and are
expanding their strategies to compete. They also face the risk that cloud services providers will
portray hardware as a throwaway commodity, encouraging IT organizations to move hardware funds to
software or service categories.

2. With margins under pressure, infrastructure vendors need to increase the degree of bundling to
drive margins and extend technology dependencies that will lead to repeat business. Most FBI
strategies are based on blade servers. By adding revenue and margin at the top end of the blade
server market, vendors attempt to mitigate the cannibalization of the low end of the blade server
market by multinode servers.

Consequently, the early FBI market was driven more from a vendor perspective (versus a customer
pull model), as vendors sought to increase margins and take a larger share of the IT budget to meet
their growth goals in an era of constrained IT budgets. However, end-user appetites grew steadily
through 2012 and 2013 to date, as users increasingly believe that an FBI approach can help them
address IT modernization demands.

Early adopters of FBI have found value in a degree of physical infrastructure automation, enabling
them to resize resource pools, automate provisioning with little or no human intervention, and use
policy-based provisioning to run services in varying locations based on their data centers and
service provider strategies (encoded into the cloud solution). The automation layer, at a minimum,
focuses on automating infrastructure provisioning, including server, storage and networking
resources, which include Internet Protocol (IP) address automation and storage naming, with some
providers also focused on enabling hybrid cloud environments between their customers and service
providers, and provisioning the software stack above the hypervisor and operating environments.
However, it is not always necessary to deploy a fabric-based approach to achieve these benefits.

Gartner has published frameworks for evaluating vendors, maturity and cost variables (see "A 14-
Point Decision Framework for Evaluating Fabric-Based Infrastructure" and "Introducing the
Technology Dependency Matrix for Fabric-Based Infrastructure"). FBI typically requires or mandates
the use of certain vendor products — hardware and software — to gain the maximum benefits of
consolidation and cost reduction. Vendors will usually support the use of substituted third-party
technologies, but this will sometimes compromise the service-level quality that the vendor can
ensure when users buy the preferred technology combinations. Similarly, IT organizations often
favor a build-it-yourself approach for FBI, but this can impose significant system engineering and
integration costs.

User Advice: Allow vendors to deliver their narratives and demonstrate road maps, but only after a
specific architectural plan has been carefully thought out. Don't take it on faith that any vendor has
a complete FBI solution. FBI is an emerging market, where vendor promises are not always
matched by complete or proven solutions. Ensure that vendors provide their road maps and specify
which services are required to make necessary IT transformations or changes to the organization
and processes to lead to that vision of FBI. IT organizations that are early adopters of technology
should assess whether FBI can help them achieve their agility/cost goals, and whether they prefer
to use a specific vendor partner (which results in much greater lock-in with the vendor), or build
their own FBI using commercial off-the-shelf management software linked with underlying
infrastructure provisioning. If the organization chooses a specific vendor, it should secure long-term
discounts so that lock-in doesn't guarantee large price increases in two to five years. Use the fabric
maturity model (see "An Essential Fabric Maturity Model: Ignore at Your Own Risk") as a guide in
placing your IT organization on the optimum path to success.

Business Impact: The business impacts of FBI are the operational advantages provided by being
able to bring the infrastructure online more quickly, plus added agility to reassign resources faster
and more efficiently. Labor savings can be achieved — eventually — through automation of the
physical resource layer, but only if head count is reduced. FBI hardware elements are primarily
based on general-purpose server, storage and networking technologies. This can reinforce
hardware reuse, but also imposes limitations on choice. Once the foundation is automated, FBI can
be layered on top for additional value and benefit. As FBI matures, it will help your solutions work
not only with virtual resources, but also with physical resources and across many data centers.
However, the potential operating expenditure (opex) advantages come at the expense of premium-
priced solutions and increased vendor lock-in, along with social and cultural challenges, as data
centers revise their organization governance to fully utilize an integrated system approach.

Benefit Rating: Moderate

Market Penetration: 5% to 20% of target audience

Maturity: Early mainstream

Sample Vendors: Cisco; Dell; Egenera; Fujitsu; Hitachi Data Systems; HP; IBM; Oracle; VCE

Recommended Reading: "An Essential Fabric Maturity Model: Ignore at Your Own Risk"

"Clearing the Confusion About Fabric-Based Infrastructure: A Taxonomy"

"A 14-Point Decision Framework for Evaluating Fabric-Based Infrastructure"

"How to Assess the Impact of Fabric Strategies on Cost and Procurement"

"Fabric Computing: Take Control of Vendor Selection"

"Vendor Alliances Are a Critical Factor in Fabric-Based Infrastructure Selection"

High-Density Racks (>100 kW)


Analysis By: Philip Dawson

Definition: High-density racks are server racks that draw 100 kW or more of power. They use dense
server packaging, advanced internal cooling and highly efficient power distribution to maximize
performance per square foot.

Position and Adoption Speed Justification: Server racks peak at about 50 kW or 60 kW of power.
This power density, which has grown from 4 kW more than a decade ago, is the result of dense
blade or skinless server packaging and local cooling capacity for each server. Manufacturers will
continue to increase density through smaller form factors and more processors per module. Denser
servers offer lower management and operations costs per server and, potentially, improved
performance, because signal interconnects are shorter and a single-rack unit has more capacity.
These racks are likely to support internal optical interconnects and generally require liquid cooling,
changing the power consumption model and densities. Unusually fast advances in processor
performance and multicore technology have reduced the need to increase server density during the
past two years, a trend that continues and will delay the adoption of higher-power racks for some
deployments and workloads that only need consolidation. Different form factors (blades and
skinless servers), workloads (high-performance computing) and vertical industries (finance and
chemical/oil) will continue to push this profile with high-density, compute-bound platforms and
workloads.
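
A back-of-the-envelope calculation (every figure below is an assumption chosen for illustration)
shows how dense packaging pushes a rack toward, and eventually past, these power levels.

# Back-of-the-envelope rack power density; every figure is an assumption
# chosen only to illustrate how dense packaging approaches 100 kW per rack.

rack_units           = 42      # standard 19-inch rack height
nodes_per_2u_chassis = 4       # multinode "skinless" tray
watts_per_node       = 550     # dual-socket node under load, assumed
overhead_factor      = 1.10    # fans, switches, power distribution losses

chassis_count = rack_units // 2
node_count    = chassis_count * nodes_per_2u_chassis
rack_watts    = node_count * watts_per_node * overhead_factor

print(f"{node_count} nodes -> {rack_watts / 1000:.1f} kW per rack")
# 84 nodes -> 50.8 kW with these assumptions; doubling node density or
# per-node draw is what pushes designs toward the 100 kW racks discussed here.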

User Advice: Plan for 100 kW racks when deciding what to use for cooling and power systems over
a project's life cycle. As denser racks emerge, anticipate substantial increases in computing
capability within a given footprint. By 2015, expect to see 100 kW racks. As density and
performance increase, expect to see servers deployed, managed and retired by the rack.
Replacement cycles are likely to move to within three years for performance-leveraged applications.

Business Impact: Continued advances in computing capability will drive systems to 100 kW racks
to maximize performance yield and lower management costs. We expect more than 100 kW racks
during 2015 to 2020.

Benefit Rating: High

Market Penetration: 5% to 20% of target audience

Maturity: Adolescent

Server Repurposing
Analysis By: Donna Scott

Definition: Server repurposing enables servers to be quickly reconfigured to run different software
stacks or images. It is used to share servers and drive higher utilization. For example, servers may
be repurposed from daytime to nighttime workloads or from development/test to disaster recovery (DR)
use, or a spare server may be reconfigured when the main server fails. Server repurposing can
generally manage physical and virtual servers and enable switching between them. It can be initiated
manually or through a service governor.

Position and Adoption Speed Justification: Server repurposing is part of the server provisioning
and configuration management life cycle and is focused on server provisioning functionality. In
addition to provisioning software and/or images to a server, server repurposing can manage the
connectivity to storage (typically through maintaining address names) and virtual LANs (VLANs). For
physical servers, server repurposing functionality often uses the boot from shared storage by
connecting the server to an alternative image on a disk and managing the network connections. For
virtual servers, one is brought down and another brought up, typically in the same resource pool. A
conversion sometimes is needed — for example, from physical to virtual (often used for DR), virtual
to physical (such as to obtain vendor support where the vendor does not support virtual
environments), physical or virtual to cloud (when moving a workload or service to the public cloud
or a virtual private cloud) or virtual to virtual (such as where development and test uses one type of
hypervisor and production uses another). Server repurposing tools can be embedded in cloud
management platforms and test lab provisioning tools and offered by service providers to ease the
shift of services, workloads and virtual machines (VMs) from on-premises to off-premises.
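
The repurposing sequence for a physical server that boots from shared storage can be sketched as
follows. Each helper function is a hypothetical placeholder for whatever SAN, network and baseboard
management tooling is actually in use; the sketch shows the order of operations, not a specific
product's API.

# Illustrative repurposing sequence for a physical server that boots from
# shared storage. Each helper is a placeholder for real SAN/network/BMC
# tooling; the point is the order of operations, not a specific product.

def power_off(server):
    print(f"[{server}] graceful shutdown")

def remap_boot_lun(server, lun):
    print(f"[{server}] boot LUN -> {lun}")

def assign_vlans(server, vlans):
    print(f"[{server}] VLANs -> {vlans}")

def power_on(server):
    print(f"[{server}] power on, boot new image")

def repurpose(server, target_profile):
    """Repoint a shared-storage-booted server at a different software stack."""
    power_off(server)
    remap_boot_lun(server, target_profile["boot_lun"])   # alternative image on disk
    assign_vlans(server, target_profile["vlans"])        # production vs. test networks
    power_on(server)

if __name__ == "__main__":
    dr_profile = {"boot_lun": "lun-prod-erp-01", "vlans": ["vlan-210", "vlan-310"]}
    repurpose("blade-07", dr_profile)   # e.g., a dev/test blade pressed into DR service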

Server repurposing is common in virtual environments but less common in physical environments
(where it is harder to implement). In physical environments, server repurposing can be achieved by
using imaging and configuration management technologies or by booting from shared storage. Or it
can be achieved with fabric-based infrastructure or blade server environments where server profiles
provide a degree of server mobility as well as flexibility to reconfigure servers and/or capacity.
Server repurposing is commonly used for DR architectures, where the servers in the secondary site
are shared (for example, with development, test and training). In a survey taken at the Gartner Data
Center Conference in December 2012, 19% of respondents indicated repurposing as the primary
method used for recovering mission-critical applications and 34% as the method for less critical
applications. When using repurposing for DR or when testing DR plans, servers are reconfigured to
look like production servers; often this is triggered through DR clustering and/or orchestration tools
that would maintain the priority and ordering of a recovery plan. Sharing and repurposing reduce
the amount of idle/standby infrastructure that is required for DR and thus reduce overall capital
costs. Server repurposing is a standard component of public or private cloud solutions that focuses
on frequent setup and teardown of an environment. However, maintaining changing images can be
a challenge and requires standardization and change/configuration management process maturity.
Due to trends in cloud computing and the desire to reduce service delivery costs, IT is implementing
a greater amount of standardization in software stack layers, which will spur more server
repurposing and become dynamically triggered (as in real-time infrastructure [RTI] architectures).

Server repurposing is advancing toward the Trough of Disillusionment because its implementation
requires much thought and policy development, which are difficult to execute. For example, if an IT
organization decides to implement shared DR services, where the DR site needs to be reconfigured
to look like production, it must decide how to implement the repurposing (for example, via VMs,
images or unattended installations of the software) as well as manage the failover and startup
orchestration while also considering the implications on software licensing. Often, tools are selected
based on the type of platform that is used. This is not as easy as just buying a tool and turning it on.
There is significant repurposing activity in development/test and other cloud computing use cases
where a significant amount of setup/teardown is required; thus, infrastructure reuse results in lower
capital costs. However, there are challenges in implementing standard infrastructure stacks that
enable fast setup and teardown. As cloud computing architectures become more common, the
market will see these barriers decreasing.

User Advice: Enterprises that are consolidating data centers, implementing private clouds and/or
RTIs, or implementing DR sites should consider server repurposing to increase asset utilization and
cut server capital costs. For high-availability architectures, server repurposing can reduce spare
server inventory and speed recovery. Enterprises with specific repurposing needs, such as
configuring servers for daytime versus nighttime use or provisioning servers quickly based on a
standard image (for example, for DR), should consider this technology but realize that they must
also orchestrate the timing and implementation of these changes.

Business Impact: The business impacts of server repurposing are higher asset utilization and lower
costs. For example, in a high-availability use case, an organization may go from having one spare
server for every server to one spare server for every eight or 10 servers. For a DR use case, where a
development/test environment can be repurposed to production, a significant amount of hardware
capacity can be saved but at the expense of increased risk and reduced flexibility in testing
(because testing and DR require that a shared environment get reduced capacity or be turned off).
Another benefit is speed: servers can be provisioned and repurposed much faster, which also lowers
costs. For example, a development environment can be converted to a production environment through
repurposing in hours, at a much lower cost than maintaining a
completely separate environment for DR. Ultimately, conversion speed and agility are affected by
the degree of standardization in an environment. IT organizations with significant standardization
will find dramatically lower operational costs and will realize that automation can drive repurposing.
IT organizations with little standardization will find that maintaining a diverse environment is difficult
to do and that it increases costs, thereby outstripping any savings that are gained from
implementing repurposing.

Benefit Rating: Moderate

Market Penetration: 20% to 50% of target audience

Maturity: Early mainstream

Sample Vendors: CA Technologies; Dell; Egenera; HP; NetIQ; rPath; Racemi; RiverMeadow
Software; Unisys; VMware

Recommended Reading: "Cool Vendors in Cloud Management, 2013"

"Balancing Software Licensing With High-Availability/Disaster-Recovery Requirements"

"Fabric-Based Infrastructure Enablers and Inhibitors Through the Lens of User Experiences"

VM Energy Management Tools


Analysis By: Rakesh Kumar; Philip Dawson

Definition: Virtual machine (VM) energy management tools enable users to understand and
measure the amount of energy that is used by each VM in a physical server. This will enable users
to control operational costs and associate energy consumption with the applications running in a
VM.

Position and Adoption Speed Justification: Data center infrastructure management (DCIM) tools
monitor and model the use of energy across the data center. Server energy management tools track
the energy consumed by physical servers. Although these tools are critical to show real-time
consumption of energy by IT and facilities components, the need to measure the energy consumed
at the VM level is also important. This will provide more granular management of overall data center
energy costs and will allow the association of energy across the physical to the virtual environment.
As the use of server virtualization increases, this association will become more important, as will
the ability to attribute energy to the applications running in each VM.

Some VM energy management tools come as part of server service consoles, such as HP Systems
Insight Manager (HP SIM) or IBM Systems Director; in these cases, users must rely on vendor-specific
software tools. However, there are hardware-neutral tools, such as VMware Distributed
Power Management, which are designed to throttle down inactive VMs to reduce energy
consumption. Coupled with VMware Distributed Resource Scheduler, it can move workloads at
different times of the day to provide the most efficient energy consumption for a particular set of
VMs. Also, the core parking feature of Microsoft Hyper-V allows minimal use of cores for a given
application, even if multiple cores are predefined, keeping the nonessential cores in a suspend state
until needed, thus reducing energy costs.

VM energy management tools are maturing and will continue to improve during the next few years.
Moreover, uptake of these tools still remains low, because users are unclear about the ways in
which the products can provide short-term benefits.

User Advice:

■ Acquire energy management tools that report data center power and energy consumption
efficiency according to power usage effectiveness (PUE) metrics as a measure of data center
efficiency; a minimal calculation sketch follows this list. The Green Grid's PUE metric is
increasingly becoming one of the standard ways to measure the energy efficiency of a data center.
■ Evaluate and deploy VM energy management tools, and develop operational processes that link
the applications running in VMs to the amount of energy those VMs consume. For example, VM
energy management tools could be used for chargeback or as the basis for application prioritization.
■ Deploy appropriate tools to measure energy consumption in data centers at a granular
infrastructure level. This includes information at the server, rack and overall site levels. Use this
information to manage data center capacity, including the floor space layout for new hardware,
and to manage costs through virtualization and consolidation programs.
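
As a minimal illustration of the PUE calculation and of attributing a host's energy to its VMs (all
readings below are assumptions; in practice they would come from DCIM tools and hypervisor
counters):

# Minimal sketch: compute PUE and attribute a host's measured energy to its
# VMs in proportion to their CPU-time share. All readings are assumptions.

def pue(total_facility_kwh, it_equipment_kwh):
    """Power usage effectiveness: total facility energy / IT equipment energy."""
    return total_facility_kwh / it_equipment_kwh

def vm_energy_share(host_kwh, vm_cpu_seconds):
    """Split a host's energy across VMs by CPU-time share (a common proxy)."""
    total = sum(vm_cpu_seconds.values())
    return {vm: host_kwh * secs / total for vm, secs in vm_cpu_seconds.items()}

if __name__ == "__main__":
    print(f"PUE = {pue(total_facility_kwh=3_600, it_equipment_kwh=2_400):.2f}")  # 1.50

    shares = vm_energy_share(host_kwh=12.0,
                             vm_cpu_seconds={"erp-app": 5_400,
                                             "web-frontend": 2_700,
                                             "batch-report": 900})
    for vm, kwh in shares.items():
        print(f"{vm}: {kwh:.2f} kWh")   # e.g., erp-app: 7.20 kWh

CPU-time share is only one possible proxy for per-VM energy; the allocation basis should match
whatever the chosen tools actually measure.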

Business Impact: VM energy management tools will provide better management of data center
operation costs, and more granular energy-based and application-specific chargeback.

Benefit Rating: High

Market Penetration: 5% to 20% of target audience

Maturity: Early mainstream

Sample Vendors: Emerson Network Power; HP; IBM; VMware

Advanced Server Energy Monitoring Tools


Analysis By: Rakesh Kumar

Definition: While data center infrastructure management (DCIM) tools monitor and model energy
use across the data center, server-based energy management software tools are specifically
designed to measure the energy use within server units. They are normally an enhancement to
existing server management tools, such as HP Systems Insight Manager (HP SIM) or IBM Systems
Director.

Position and Adoption Speed Justification: Energy consumption in individual data centers is
increasing rapidly, by 8% to 12% per year. The energy is used for powering IT systems (for
example, servers, storage and networking equipment) and the facility's components (for example,
air-conditioning systems, power distribution units and uninterruptible power supply systems). The
increase in energy consumption is driven by users installing more equipment, and by the increasing
power requirements of high-density server architectures.

These software tools are critical to gaining accurate and real-time measurements of the amount of
energy that a particular server is using. This information can then be fed into a reporting tool or a
broader DCIM toolset. The information will be an important trigger for the real-time changes that will
drive real-time infrastructure. For example, a change in energy consumption may drive a process to
move an application from one server to another. Server vendors have developed sophisticated
internal energy management tools during the past three years. The tools are vendor-specific and
often seen as a source of competitive advantage, but provide much the same information. The use
of that information in broader system management or DCIM tools generates enhanced user value.
For example, using the energy data to provide the metrics for energy-based chargeback is
beginning to resonate with users, but requires not just server-based energy management tools, but
also the use of chargeback tools.
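
An energy-based chargeback calculation of the kind described above can be sketched as follows; the
tariff, readings and server-to-owner mapping are all assumptions, and a real deployment would pull
these figures from the vendor consoles or a DCIM tool.

# Illustrative energy-based chargeback: multiply each server's metered
# energy by an electricity tariff and roll the cost up by owning business
# unit. Readings, tariff and server-to-owner mapping are all assumptions.

TARIFF_PER_KWH = 0.12  # assumed blended electricity rate, in local currency

server_kwh_month = {"srv-101": 310.0, "srv-102": 280.0, "srv-203": 455.0}
server_owner     = {"srv-101": "finance", "srv-102": "finance", "srv-203": "analytics"}

chargeback = {}
for server, kwh in server_kwh_month.items():
    owner = server_owner[server]
    chargeback[owner] = chargeback.get(owner, 0.0) + kwh * TARIFF_PER_KWH

for owner, cost in sorted(chargeback.items()):
    print(f"{owner}: {cost:.2f} per month")
# analytics: 54.60, finance: 70.80 with the assumed figures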

Over the last year, most system vendors have enhanced their server-based tools. For example,
there are more out-of-the-box templates for energy dashboards (IBM and Cisco) and, in some
cases, more openness at the API level to allow easier integration with building management
systems and DCIM tools. For these reasons, the technology has moved further up the Hype Cycle.

User Advice: In general, start deploying appropriate tools to measure energy consumption in data
centers at a granular level. This includes information at the server, rack and overall site levels. Use
this information to manage data center capacity, including floor space layout of new hardware, and
costs through virtualization and consolidation programs. Acquire energy management tools that
report data center power and energy consumption efficiency according to power usage effectiveness
(PUE) metrics as a measure of data center efficiency.

Specifically for servers, ensure that all new systems have sophisticated energy management
software tools built into the management console. Ensure that the functionality and maturity of
these tools are part of the selection process. We also advise users to give more credit to tools that
provide output in a standard fashion that is easily used by the DCIM products.

Business Impact: Server-based energy management software tools will evolve in functionality to
help companies proactively manage energy costs in data centers. They will continue to become
instrumental in managing the operational costs of hardware.

Benefit Rating: High

Market Penetration: 5% to 20% of target audience

Maturity: Adolescent

Sample Vendors: Cisco; Dell; HP; IBM

Linux on RISC
Analysis By: George J. Weiss

Definition: Linux on reduced instruction set computer (RISC) refers to the adoption and broad
acceptance of Linux on RISC architectures, primarily IBM's Power, although ARM may present
another Linux on RISC option in the future.

Position and Adoption Speed Justification: Demand continues to be low. Intel-compatible
technologies, such as Sandy Bridge (with Ivy Bridge to follow), have been appearing at regular
intervals and at faster availability rates than the new IBM Power models or other RISC technologies.
IBM has lowered the price of its Power systems entry models to compete more effectively on price
with x86; however, for most applications, Intel processors are good enough, leaving only niche
opportunities for Power. With Red Hat no longer supporting Itanium, and Oracle not actively
marketing Linux on SPARC, PowerLinux (i.e., Linux running on Power systems) is the only viable
choice for users seeking a more-performance-driven option for Linux applications. Nonetheless,
even positioned as a performance play, third-party independent software vendor (ISV) commitments
have been tepid, limiting users' interest. Without a viable ecosystem, it will be difficult for IBM to
create momentum, even with Red Hat and SUSE support, especially when a user's preference for
Linux was originally prompted by server platform choice.

User Advice: This strategy works best when performance gains are available on RISC, but are not
available from x86. Applications must be written or translated with good performance, and must be
supported by ISVs for Linux on RISC. Good management practices must be in place to manage OS
versions for multiple architectures. Users should have a clear strategic mission for Linux/RISC and
non-RISC deployments that make logical and coherent sense.

IBM has been the most successful in marketing its Power architecture for Linux, but it has, at most,
10% to 15% of Power users running Linux. Gartner sees virtually no interest in client
inquiries. We believe this initiative is partly an extension of IBM's strong marketing of Integrated
Facility for Linux (IFL) engines on System z, which enables IBM to carve out some x86/Linux
opportunities for IBM servers. In 2012, IBM launched another PowerLinux product initiative targeted
at markets such as analytics and big data, with an emphasis on virtualized, open-source
environments to develop new Power market opportunities.

The IBM initiative, now 12 months in promotion, is targeted to market niches that IBM feels have
significant growth potential and in which Power Systems excel in performance in comparison with
x86. Most x86 platforms now scale well into the range of Unix without RISC or Itanium hosts (e.g.,
Cisco, Dell, Fujitsu, HP, IBM, NEC and SGI). The prospect of an important market developing for
SPARC/Linux has faded and is nearly irrelevant. That leaves PowerLinux, together with zLinux, as
the only viable alternatives for users seeking a non-x86 Linux deployment strategy.

Business Impact: This technology can be important in consolidations using large, vertically
scalable servers with large memory configurations and hard partitions, a higher-performance
alternative to hosting Linux VMs on VMware and in mixed OS environments. Linux on RISC may
also be useful in multitiered applications in which Linux on x86 is the front-facing application to
larger systems of database management systems (DBMSs), such as IBM DB2, and compute and
Hadoop clusters (in IBM's vision) running on non-x86 servers, and when the applications are only
available for Linux, not for Unix.

Page 48 of 93 Gartner, Inc. | G00251107

This research note is restricted to the personal use of 4254.gerson@bradesco.com.br


This research note is restricted to the personal use of 4254.gerson@bradesco.com.br

Linux on RISC can provide additional benefits when a software stack such as WebSphere, which
already runs on RISC, can coexist and share the system resources with other Linux-based
applications.

Benefit Rating: Low

Market Penetration: 1% to 5% of target audience

Maturity: Adolescent

Sample Vendors: IBM

Recommended Reading: "HP Shifts to Offense With Its Mission-Critical x86 Server Blade Platform
Strategy"

"RISC/Unix Market Profile, 2012: A Three-Way Cage Fight"

Server Provisioning and Configuration Management


Analysis By: Ronni J. Colville; Donna Scott

Definition: Server provisioning and configuration management tools manage the software
configuration life cycle for physical and virtual servers. Some vendors offer functionality for the
entire life cycle; others offer specific point solutions in one or two areas. The main categories are
server provisioning (physical or virtual), application provisioning (binaries) and configuration
management, patching, inventory, and configuration compliance.

Position and Adoption Speed Justification: Server provisioning and configuration management
tools continue to expand their depth of function, as well as integrate with adjacent technologies,
such as IT process automation tools, and, most recently, cloud management platforms — for which
they often provide a core capability for initial provisioning and virtual machine provisioning.
Although the tools continue to progress, configuration policies, organizational structures (server
platform team silos) and processes inside the typical enterprise are causing them to struggle with
full life cycle adoption. As cloud initiatives progress and mature, there will be a "Day 2" requirement
for configuration hygiene to manage configuration compliance and patching, which will bring these
tools back into the picture.

Virtual server provisioning also offers another option — cloning or copying the VM and making
subsequent changes to personalize the clone (versus using a tool to manage the overall stack).
Initial private cloud initiatives were focused on infrastructure as a service (IaaS), in which thin and
standard OS images were provisioned with the VM, but then additional software was layered on top
via application provisioning and configuration management. More recent private cloud initiatives are
also including a focus on middleware and database provisioning (internal platform as a service
[PaaS]), which typically uses application provisioning and configuration management to provide the
software stack on top of the standard OS because this method reduces the image sprawl that
comes with the combinations and permutations of software stack builds. Most large enterprises
have adopted one or more of the vendors in this category with varying degrees of success and
deployment of the life cycle. As a result, we are seeing an uptick in the adoption of additional
life cycle functionality by both midsize and large enterprises to solve specific problems (e.g.,
multiplatform provisioning and compliance-driven audits, including improved patch management).

Another shift in the past year is the focus on DevOps, which appeals to organizations that want to
build infrastructure via code, as their application development teams do. Tools that began as
open-source projects and have since added commercial offerings are becoming an alternative to the
traditional "GUI-based" tools in this category, especially for Linux. These tools
offer a programmatic approach to provisioning and configuring software on top of physical and
virtual servers, and some offer bare-metal initial provisioning (and some can also address
networking) with a scalable approach. Initially, they offered a different approach (pull versus push)
compared with the traditional vendors in this category, but of late, most have also added push
capability.
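
The desired-state, idempotent model these programmatic tools share can be sketched generically in
Python. This is an illustration of the concept only, not the syntax or API of any product named
here, and the file paths are placeholders.

# Generic illustration of idempotent, desired-state configuration: each
# resource declares the state it should be in, and applying it twice is
# harmless. This mimics the model of declarative provisioning tools but
# is not the syntax or API of any specific product.

import os

def ensure_file(path, content):
    """Create or correct a file only if it differs from the desired content."""
    current = None
    if os.path.exists(path):
        with open(path) as f:
            current = f.read()
    if current != content:
        with open(path, "w") as f:
            f.write(content)
        return "changed"
    return "ok"

desired_state = [
    ("file", "/tmp/demo-motd", "Managed by configuration management.\n"),
    ("file", "/tmp/demo-ntp.conf", "server time.example.org iburst\n"),
]

if __name__ == "__main__":
    for kind, path, content in desired_state:
        if kind == "file":
            print(path, ensure_file(path, content))
    # Running this script again reports "ok" for every resource: idempotence.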

Cloud computing trends are encouraging standardization; for that reason, we believe penetration
and broader adoption of these tools will increase more rapidly in the next two to five years.
However, these tools are continuing to progress toward the Trough of Disillusionment — not so
much due to the tools, but the IT organizations' inability to standardize and use the tools broadly
across the groups supporting the entire software stack. There could be a rejuvenation of these tools
in two years as cloud adoption matures, and the subsequent need to manage inside the VM
becomes a renewed priority.

User Advice: With an increase in the frequency and number of changes to servers and applications,
IT organizations should emphasize the standardization of server stacks and processes to improve
and increase availability, as well as to succeed in using server provisioning and configuration
management tools for physical and virtual servers. Besides providing increased quality, these tools
can reduce the overall cost to manage and support patching and rapid deployments and VM policy
enforcement, as well as provide a mechanism to monitor and enforce compliance. Evaluation
criteria should include capabilities focused on multiplatform physical and virtual provisioning, software
deployment and installation, and continued configuration for ongoing maintenance, as well as
auditing and reporting. The criteria should also include the capability to address the unique
requirements of virtual servers and VM guests. When IT standards have been put in place, we
recommend that organizations implement these tools to automate manual tasks for repeatable,
accurate and auditable configuration change control. The tools help organizations gain efficiencies
in moving from a monolithic imaging strategy to a dynamic layered approach to incremental
changes. When evaluating products, organizations need to:

■ Evaluate functionality across the life cycle, and not just the particular pain point at hand.
■ Consider physical systems, physical hosts, and VM server provisioning and configuration
management requirements together.
■ Conduct rigorous testing to ensure that functionality is consistent across required platforms.
■ Ensure that tools address a variety of compliance requirements.

If private clouds are a focus, it is also important to understand if the cloud management platform
can provide Day 2 server provisioning and configuration management capability or if a separate tool
will be needed to supplement it. If the latter, integration or coexistence will be needed and should
also be part of the evaluation.

Business Impact: Server provisioning and configuration management tools help IT operations
automate many server-provisioning tasks, thereby lowering the cost of IT operations, enforcing
standards, and increasing application availability and the speed of modifications to software and
servers. They also provide a mechanism for enforcing security and operational policy compliance.

Benefit Rating: High

Market Penetration: 20% to 50% of target audience

Maturity: Early mainstream

Sample Vendors: BMC Software; CA Technologies; CFEngine; HP; IBM; Microsoft; Opscode;
Puppet Labs; SaltStack; ScaleXtreme; Tripwire; VMware

Recommended Reading: "Midsize Enterprises Should Use These Considerations to Select Server
Provisioning and Configuration Tools"

"Server Provisioning Automation: Vendor Landscape"

"Provisioning and Configuration Management for Private Cloud Computing and Real-time
Infrastructure"

"Server Configuration Baselining and Auditing: Vendor Landscape"

"Market Trends and Dynamics for Server Provisioning and Configuration Management Tools"

"The Patch Management Vendor Market Landscape, 2011"

"Cool Vendors in DevOps, 2012"

"Cool Vendors in IT Operations Management, 2012"

"Cool Vendors in DevOps, 2013"

Multinode Servers
Analysis By: Andrew Butler

Definition: Multinode servers are designed with a reduced amount of rack, chassis and, in some
cases, motherboard components to maximize server density potential, and minimize cost, material
use and power consumption. Typical designs involve the lack of outside sheet metal coverings
(hence the common term "skinless") over individual servers, as well as shared power and cooling
resources within the rack.

Position and Adoption Speed Justification: Blades are not the only form of modular server.
Multinode servers are a more rack-dense form factor that has emerged in the past four years to
address many extreme scale-out workload requirements that, ironically, blades were first designed
to address. True multinode servers typically lack the availability of the richer tooling that benefits
blade environments, so the two form factors address distinctly different workload needs. As the
addressable market for blade servers evolved toward more sophisticated and diverse workloads, a
vacuum in the server market gradually formed as blades became overengineered and overpriced for
their original market objectives. Multinode servers are an alternative form of modular design
developed to fill that vacuum. The terms "multinode" and "skinless" both equally describe this
server segment. Our preference for the term multinode is based on the fact that many of these
server designs are no longer skinless and increasingly resemble blade servers.

At first glance, multinode servers share many attributes with blade servers, which explains why
some vendors regard the markets for blade servers and multinode servers as synonymous. Like
blades, they are designed to slide into a common chassis, enabling the quick and easy addition of
new components, and the replacement of failed components. They may rely on common
components, such as power supplies, cooling fans and input/output (I/O), which become functions
of the chassis, not the multinode server. They usually are based on a standard x86 architecture, run
a regular Windows or Linux workload, and conform to the 19-inch-rack-width standard.

In addition, the emergence of extreme-low-energy servers based on Intel, AMD or ARM processors
(and potentially RISC processors) will open the door to new software stacks. As with blades, the
mounting technology for multinode servers will be dictated by the server manufacturer, and is today
proprietary. Workloads and situations that lend themselves well to multinode server approaches
include applications that share server resources across a network, including extreme analytic and
high-performance computing (HPC) environments. Multinode servers offer an additional benefit:
Because they use less material in the server infrastructure, less material needs to be replaced
and/or recycled. Some blade vendors referenced here are actively marketing multinode server
designs either as an alternative or as a variant of blade server strategies.

The rapid growth of multinode servers has largely contributed to the stalling of the blade server
market, because cannibalization from multinode servers has eroded the growth that blades are
gaining through increased market adoption of fabric-based infrastructure (FBI). Based on Gartner's
2012 data, multinode servers outsell blade servers in unit terms (despite being sold for only a few
years and addressing a more limited volume market). In 2012, multinode servers represented 15%
unit share (nearly doubled from 7.8% in 2011) and 10% revenue share (again nearly doubled from
5.3% in 2011). This represents a unit share increase of more than 14,000% since we started measuring
this market in 2010, and helps explain why this category has made rapid progress through the Hype
Cycle. Because they favor more horizontally challenging workloads, most multinode deployments
favor x86 architectures. However, vendors such as Super Micro Computer (Supermicro), SeaMicro
(now owned by AMD), Dell, Huawei and HP are bringing low-power Intel, AMD and/or ARM-based
servers to market.

Because multinode servers are taking over the low-end workload space traditionally occupied by
blade servers, and because blade servers are usually the foundation for vendor FBI strategies, we
are seeing more adoption of blades in production environments for complex applications such as
high-end database serving, data warehousing, ERP and CRM. This has led to an increasing
technology overlap among blade servers, multinode servers and rack-optimized servers, driving a
need for vendors that have presence in all three form factors to be more transparent about
workload optimization for each competing form factor. We recommend that customers continue to
demand valid references and proof points for all workload scenarios that push the upper (or lower)
boundaries of viable multinode server implementation.

User Advice: Workloads and situations that lend themselves to multinode server approaches
include applications that share server resources across a network, including HPC, Hadoop clusters,
cloud computing infrastructures and extreme data analytics. These are often situations where
application scaling takes priority over all other considerations. However, the extreme focus on
density makes multinode servers inappropriate for many workloads. But multinode servers can be
considered as an alternative to low-end blade servers for any workload that will benefit from a
modular server approach. Also consider multinode servers when green IT credentials are important,
because they use fewer components in the server infrastructure, therefore fewer materials need to
be replaced and/or recycled. From an ROI perspective, the greatest potential gains will be achieved
from large-scale deployments in which the operational cost savings versus more conventional
designs are apparent.

Business Impact: Because they permit the rapid and easy addition of resources, multinode servers
— together with other modular designs, such as blades — will have a natural advantage for cloud
evolution. Modern blade server designs often impose more sophistication and attendant complexity
for many applications that require limited vertical scaling, and little of the added-value functionality
of blades. Hence, a multinode server solution can provide significant cost savings over blades,
especially when measured by performance per square meter of data center space. This makes
multinode servers particularly well-suited for situations in which the most extreme possible
horizontal scaling is required. Extreme low-energy servers will become an even better solution for
this scenario as they reach market maturity. Typical industry scenarios are HPC grids, Hadoop
clusters, financial services, advanced analytics, telecommunications and cloud services providers.
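
As a rough illustration of the density arithmetic behind that comparison, the Python sketch below
computes performance per square meter for a blade rack versus a multinode rack; every count and
footprint is a hypothetical placeholder to be replaced with figures from actual vendor proposals.

    # Illustrative density comparison; every figure here is a hypothetical placeholder.
    def perf_per_sqm(nodes_per_rack, perf_per_node, rack_footprint_sqm):
        """Aggregate performance delivered per square meter of floor space."""
        return nodes_per_rack * perf_per_node / rack_footprint_sqm

    blade = perf_per_sqm(nodes_per_rack=64, perf_per_node=1.0, rack_footprint_sqm=2.2)
    multinode = perf_per_sqm(nodes_per_rack=96, perf_per_node=0.9, rack_footprint_sqm=2.2)

    print("blade: %.1f units/sqm, multinode: %.1f units/sqm" % (blade, multinode))
    print("multinode advantage: %.0f%%" % ((multinode / blade - 1) * 100))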

Benefit Rating: High

Market Penetration: 5% to 20% of target audience

Maturity: Early mainstream

Sample Vendors: Dell; Fujitsu; HP; Huawei; IBM; SGI; Super Micro Computer (Supermicro)

Recommended Reading: "Magic Quadrant for Blade Servers"

Data Center Container Solutions


Analysis By: David J. Cappuccio

Definition: A data center container is a shipping container set up to accommodate IT equipment
and designed to support servers, storage and networking gear. In addition, containers may be
designed to support some combination of uninterruptible power supply (UPS), generators and/or
chillers, with some of that equipment supported in the same containers as the servers and storage
equipment, or in separate and distinct containers.

Position and Adoption Speed Justification: Containers are designed to be weather-resistant and,
in some cases, weather-hardened for use in extreme environments, although most use cases see
them implemented within buildings or shells of buildings. Containers are designed to be TIA-942
Tier 3 capable (depending on supporting infrastructure), and focus on high levels of energy
efficiency, typically with a power usage effectiveness (PUE) of 1.3 or below.

Although scalability and speed of deployment (in some cases less than 12 weeks) are the main
advantages, data center container solutions require appropriate site selection to ensure adequate
physical security. Another advantage is cost. Although these are expensive solutions, they are less
expensive than building an extension to a traditional brick-and-mortar data center. Like any data
center, they need good electrical power and a supply of cold water for cooling. In some countries,
they may be classified as permanent structures and, therefore, need to comply with building
regulations; whereas in other countries, they are classified as temporary buildings. The adoption of
these solutions is increasing, especially as vendors improve configuration choices and energy
efficiency. Additionally, specialized containers for power, cooling and generator support are being
created to facilitate the modular development of data centers.

Some self-contained solutions are designed for extreme environments. These self-contained mini
data centers are designed for rugged terrain, complete lights out and fast, semiautomated
deployment. For containers that are configured completely for IT loads, a few vendors are
producing self-contained power and cooling delivery solutions specifically for containers —
essentially, containers to support containers.

User Advice: Enterprises need to qualify their use of container solutions. Those that do not want to
spend a large capital budget on a traditional, brick-and-mortar data center can gain solid cash-flow
benefits by using container solutions. However, the cost of a container solution needs to be
benchmarked against the cost of using a hosting provider to gauge which is the most cost-effective
option.

Enterprises that need capacity and scaling beyond what their current facilities can provide may benefit
from this technology, because they can start with a small number of containers and build up the real
estate at their own pace. Containers may also suit enterprises that require data center services in
remote and nonurban locations, such as desert regions and forests. However, getting a water
supply to these areas will be problematic. Hosting providers looking for fast deployment of data
centers, and flexibility in capacity and expansion, should evaluate container solutions.

Negotiate a complete life cycle contract with container suppliers to include maintenance, support
and asset disposal. The life cycle of a container likely will be more than five years, by which time the
server hardware typically will need to be replaced. Refurbishing a container that has been designed
for a specific set of applications or hardware will probably be expensive, and enterprises need to
factor this into their cost equations.

Business Impact: Data center container solutions can provide an alternative to the capital needed
for a brick-and-mortar data center. A typical container can cost approximately $2 million to $4.5
million, fully configured. Because they are designed to the user's technical specifications, data
center containers can provide significant levels of computing power that can be delivered in eight to
12 weeks, rather than the 18 to 24 months it would take to build a comparable data center.
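
The trade-off can be framed as a simple capital and lead-time comparison. The sketch below uses the
cost and timing ranges quoted above; the brick-and-mortar extension cost and the number of containers
required are hypothetical assumptions.

    # Build-out comparison using the ranges quoted above; the brick-and-mortar cost
    # and the container count are hypothetical assumptions.
    container_cost_range = (2_000_000, 4_500_000)    # per container, fully configured
    container_lead_time_weeks = (8, 12)
    datacenter_lead_time_months = (18, 24)

    assumed_datacenter_cost = 20_000_000             # hypothetical extension build cost
    containers_needed = 4                            # hypothetical capacity equivalent

    low = containers_needed * container_cost_range[0]
    high = containers_needed * container_cost_range[1]
    print("container capex range: $%.1fM to $%.1fM" % (low / 1e6, high / 1e6))
    print("assumed brick-and-mortar extension: $%.1fM" % (assumed_datacenter_cost / 1e6))
    print("time to capacity: %d-%d weeks vs. %d-%d months"
          % (*container_lead_time_weeks, *datacenter_lead_time_months))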

Benefit Rating: Moderate

Market Penetration: 1% to 5% of target audience

Maturity: Adolescent

Sample Vendors: Active Power; APC-DataPod; AST; Cisco; Dell; Emerson Network Power; HP;
Huawei Symantec Technologies; IBM; IO; Lampertz; Oracle; Power Assure; SGI

Recommended Reading: "Is a Container a Viable Data Center Alternative?"

Climbing the Slope

HPC-Optimized Server Designs


Analysis By: Carl Claunch

Definition: High-performance computing (HPC) clusters can be composed of large numbers of
servers, for which total size, energy use and heat output can become serious issues. Because small
design differences amount to large benefits when multiplied by the number of servers in the
clusters, we are seeing the emergence of server designs that are optimized for HPC. Some
requirements for dedicated communication among the servers are unique to clusters.

Position and Adoption Speed Justification: In the past, the only options for machines that were
unique, and different from broad-volume server products, were to buy from HPC-oriented providers,
from specialty providers such as SGI or as one-off products in a large, supercomputing contract.
However, many traditional server vendors now produce HPC-optimized designs, having recognized
the increases in HPC use and in the size of that segment as part of the overall server market. So far,
most designs have been dual-purpose for large Web properties and HPC users, although the
market opportunity for large Web installations is easily the largest.

As HPC-oriented spending continues to climb, vendors will increasingly view the market size of an
HPC-specific design to be large enough to justify their incremental design, test and support costs;
at a minimum, the vendors can create one design that straddles the needs of HPC and large-scale
Web properties. For example, few providers outside HPC are interested in hosting several high-end
graphics cards inside each server, although this need is starting to be addressed by several
providers. Because the vendors that introduce the first models may command market success,
spurring other vendors to create their own HPC-optimized designs, competitive dynamics will come
into play.

We expect this market segment to move quickly toward maturity, because the development of the
designs is gated mainly by the vendors' willingness to add specialized products, and not to
continue offering a few common designs created for the general market. Some designs are targeted
at large, public Web operators, such as search engine firms, that deliver similar benefits in HPC
clusters, but the designs can also offer features appropriate only for HPC users. For example, if a
system were designed to provide a different cluster interconnection, significant cabling reductions
might be possible. Another requirement increasingly added to HPC clusters is graphics cards for use
as application accelerators; buyers value designs that allow reasonable numbers of cards to be
configured without ballooning beyond the overall space requirements.

There has been continued movement in the server market during the past year, with several vendors
focused on creating more workload-specific or optimized special-purpose designs that can support
higher margins than general-purpose machines. This change is reflected in a further shift in position
of this technology from 2012, and we expect HPC-optimized servers to move fully to the end of the
Plateau of Productivity by 2014.

User Advice: Organizations making large HPC cluster acquisitions should add HPC-optimized
servers to their procurement shortlists, gauging any incremental increases in purchase price against
the benefits offered in total cost, space requirements and operations costs.

Business Impact: Optimized systems can enable a substantially faster computing cluster to fit into
a given space or data center than standard server designs. This enables the business to improve
product capabilities, speed engineering work and gain a competitive advantage in financial trades
or similar outcomes intended from the deployment of the HPC cluster. These designs also can
reduce energy costs, minimize heat problems and lower cabling costs.

Benefit Rating: High

Market Penetration: 5% to 20% of target audience

Maturity: Early mainstream

Sample Vendors: Bull; Cray; Dell; HP; IBM; Oracle; SGI; Supermicro

FPGA Application Acceleration


Analysis By: Carl Claunch

Definition: Field-programmable gate arrays (FPGAs) can be programmed to implement complex
circuitry. Many functions that are slow to accomplish with software can be rapid when performed
with electronic circuitry.

Position and Adoption Speed Justification: The implementation part of an application in an FPGA
demands skills that are different from those of a software programmer. A hardware circuit is
designed and then coded into a hardware description language (HDL). The HDL is then processed
to create the programming for the FPGA, turning it into the specified circuit. A driver loads the
programming into the FPGA to prepare it before the application runs. The application needs to
communicate with the FPGA at appropriate times, transferring arguments and retrieving results.
Because this process may not deal with standard algorithms, programming languages or design
concepts, it may require an electronics engineer to team up with the software developer to produce
the accelerated application.
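
The host-side sequence described above can be sketched as follows. The fpga_driver module and every
call on it are hypothetical placeholders for whatever vendor-specific driver binding is actually
supplied; the sketch only illustrates the load, transfer and retrieve steps.

    # Hypothetical host-side sequence for an FPGA-accelerated function.
    # 'fpga_driver' and all of its methods are placeholders for a vendor-specific binding.
    import fpga_driver  # hypothetical module

    def accelerated_filter(samples):
        device = fpga_driver.open(0)                  # attach to the first card
        device.load_bitstream("fir_filter.bit")       # circuit generated from the HDL
        device.write_buffer("in", samples)            # transfer arguments
        device.start()                                # run the configured circuit
        device.wait_done()
        return device.read_buffer("out")              # retrieve results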

The relative difficulty of designing for FPGAs has historically limited their adoption in the high-
performance computing (HPC) community, despite high interest several years ago due to the
potential of FPGAs to speed up certain types of applications. However, FPGAs are seeing a small
renaissance due to vendors such as Maxeler Technologies that have developed ways of
significantly mitigating these challenges, delivering packaged solutions or offering a development
process that is more accessible to software engineers, thus lowering the level of skills needed by
users to exploit FPGAs for acceleration. Other vendors are developing middleware that will hide the
FPGAs' complexities, offering standard, reusable functions as a pure software interface. It may be
possible to modify or restructure an algorithm, thus controlling data movement and the processing
sequence, to make the converted algorithm run faster in the FPGA than the original algorithm would
have run in a traditionally designed FPGA implementation, and faster than in an unaccelerated
server.

Some requirements are still best addressed with hardware approaches. For example, most
encryption and decryption algorithms were designed in part to be hard to accomplish in software.
That high work factor helps keep the codes relatively safe from intensive key-breaking attacks. Each
try at the key has to go through the relatively heavy work of the algorithm (in software), making the
time to run the necessary number of trials unfeasible. A hardware circuit can accomplish the task in
a more-straightforward way, thus more quickly. Multiple types of processing tasks that take a long
time with a software algorithm can be done efficiently with hardware. When an HPC application
requires functions that are slow to do in software, the developer might specify circuitry to
accomplish those functions in an FPGA, which can be added to the server. The application
becomes heterogeneous and makes use of the FPGA to accelerate its overall operation by using
the hardware device to handle the functions.
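
A back-of-the-envelope calculation shows why that work factor matters. The throughput figures below
are purely illustrative assumptions, not measurements of any particular implementation.

    # Illustrative work-factor arithmetic for a brute-force key search.
    key_bits = 56
    trials = 2 ** key_bits

    software_keys_per_sec = 1e6      # assumed single-core software rate
    hardware_keys_per_sec = 1e9      # assumed pipelined-circuit rate

    seconds_per_year = 3600 * 24 * 365
    for label, rate in (("software", software_keys_per_sec), ("hardware", hardware_keys_per_sec)):
        print("%s: %.1f years to exhaust the key space" % (label, trials / rate / seconds_per_year))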

This technology will appeal to organizations with in-house applications that include functions
appropriate for FPGA acceleration, and where the improvement is massive enough to warrant a
one-off custom effort. It will also appeal to companies that are commercial software makers and
can leverage the investment in the FPGA across many licenses, and that might gain some
significant market advantage from the improvement in application performance.

FPGA application acceleration is a specific way of accelerating HPC applications. It is an early type
of heterogeneous system, as are graphics card application acceleration and cell broadband
application acceleration. However, these are all implemented in one-off, improvised
and, in some cases, crude ways.

User interest has recently shifted to graphics cards as a means of accelerating applications, with
relatively few clients investigating FPGA acceleration, although the levels are rebounding slightly.
The methods used and the types of applications that can benefit from them are different for FPGA
and graphics cards; thus, the former will never be fully supplanted by graphics card acceleration.
The HPC market is more open to heterogeneous approaches today, with a mix of technologies
considered equally worth using, each in its area of strength. In the past, the market tended to focus
on one type of accelerator at a time; now, technologies such as Nvidia graphics cards, Intel Xeon
Phi, FPGAs and even some digital signal processors (DSPs) are actively being deployed.

User Advice: If important business advantages cannot be attained because HPC applications can't
reach the necessary performance levels due to cost, space or other constraints, and because the
nature of the application includes functions that could be handled more efficiently with a hardware
circuit than by a pure software approach, then investigate the potential for accelerating code with
an FPGA.

For cases that appear sufficiently promising, evaluate the challenges, costs and risks of undertaking
an FPGA-accelerated implementation, as well as the fit of such an experimental approach with your
organization and business culture of innovation and risk taking.

Only proceed when the promise is high, the cultural fit is good and the value of success is large
enough to compensate for the downsides.

Other organizations should not implement FPGA application acceleration, but should continue to
track it and the more-general case of heterogeneous architectures for when their maturity and risk
might align better with their requirements.

If commercially available software is configured by an independent software vendor to use FPGAs
for acceleration, the risks and complexities are almost totally shielded from the user. This scenario
would be a suitable choice for a broader set of potential buyers.

Business Impact: The use of an accelerator can produce far more effective application
performance. To the extent that higher accuracy levels, quicker results or better understanding of
risks can improve business results and competitiveness, FPGA-accelerated applications can be
feasible.

Benefit Rating: High

Market Penetration: Less than 1% of target audience

Maturity: Adolescent

Sample Vendors: Altera; Convey; Maxeler Technologies; Xilinx

Graphics Card Application Acceleration


Analysis By: Carl Claunch

Definition: Graphics card application acceleration is a specific method for accelerating high-
performance computing (HPC) applications. The user's application is converted to contain
segments of code for both general-purpose and graphics card processors, using the graphics cards
to accelerate overall performance. It is an early type of heterogeneous system, still implemented in
one-off, improvised and, sometimes, primitive ways.

Position and Adoption Speed Justification: Modern graphics cards can contain several hundred
processing cores, used to simultaneously process hundreds of pixels in parallel to speed up the
task of rendering an image. These cores can also be used to perform mathematical operations on
application data, with hundreds of arithmetic operations taking place simultaneously. In a typical
server, where the total number of cores is usually between 16 and 64, gaining access to as many as
512 additional cores for each graphics card in the server delivers a large increase in theoretical
capacity. When a major portion of an application's execution time is spent applying a sequence of
calculations to a large pool of relatively independent data elements, it is a good fit to take
advantage of the data-parallel nature of graphics cards. This is sometimes referred to as general-
purpose programming on graphics processing units (GPGPUs) or GPU computing. GPUs are being
used to generate increasingly accurate risk projections for trading, model structures and designs in
life sciences, engineering, electronics and many other fields; to find and more efficiently extract oil
and gas reserves; to replace the physical testing of complex and expensive products; and in many
other industry segments in which computing-intensive applications are in use.
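
As a concrete sketch of this data-parallel pattern, the fragment below applies the same arithmetic to
every element of an array on a GPU. It assumes the numba package and a CUDA-capable card; the kernel
and its parameters are illustrative.

    # Data-parallel GPU sketch using Numba's CUDA support (assumes numba and a CUDA GPU).
    from numba import cuda
    import numpy as np

    @cuda.jit
    def scale_and_offset(data, scale, offset):
        i = cuda.grid(1)                   # one thread per array element
        if i < data.size:
            data[i] = data[i] * scale + offset

    values = np.random.rand(1_000_000).astype(np.float32)
    d_values = cuda.to_device(values)      # copy the pool of independent elements to the card

    threads = 256
    blocks = (values.size + threads - 1) // threads
    scale_and_offset[blocks, threads](d_values, 2.0, 1.0)

    result = d_values.copy_to_host()       # retrieve the accelerated result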

The programmer has to explicitly divide the application into a portion to be run on graphics cards
and then update the remainder of the application to communicate with the graphics card. At one
time, exploitation required that the portion of the algorithm running on the graphics card be
structured to look like a graphical task, shading and manipulating pixels. The need for
graphics programming skills, the challenge of twisting an algorithm to seem graphical, and the
limitations in languages and capabilities available to the programmer had limited the use of graphics
card acceleration; however, this has been rapidly changing.

New, more-straightforward architectures and interfaces have arrived, supporting a range of
languages, and they don't need to mutate every program into a graphics task. This allows graphics
card acceleration to be more accessible to developers. Many vendors, including almost all the
major server vendors, are rolling out products that use, or help the use of, graphics cards for
accelerators:

■ Microsoft has tools in its Visual Studio suite for the design, coding and debugging of these
environments.
■ Apple includes support for graphics card acceleration in OS X Snow Leopard.
■ Adobe exploits GPGPU for physics simulation.

Other vendors are concentrating on ways to abstract away the complexities or enable even easier
ways to harness GPU computing, such as by enabling acceleration in libraries, middleware and
other components. This helps drive down the skill level required for a user to take advantage of
graphics card acceleration.

Graphics cards used for application acceleration are still managed through device drivers, rather
than handled as another processor type capable of executing code segments. This indicates that
the OSs have not built proper support for this and other types of processors within a heterogeneous
system. Heterogeneous architectures offer a more-general, powerful and long-term approach to
application acceleration. Nvidia has announced that it intends to build general-purpose processors
based on ARM into its graphics systems to support a rich OS environment onboard. This will enable
more powerful and complex software to be targeted to run on the graphics cards. This moves this
technology further toward the achievement of integrated heterogeneous systems.

Uptake of graphics cards for acceleration of HPC codes has reached mainstream proportions.
There is wide adoption and significant industrywide focus on porting code to leverage GPUs, which
is driving its rapid movement along the Hype Cycle.

User Advice: For in-house HPC code:

■ If important business advantages can't be attained because HPC applications can't reach the
necessary performance levels (limited by cost, space or other constraints), but the nature of the
application includes functions that could be handled more efficiently with highly parallel
hardware, then invest in a deep study of the potential for accelerating code with graphics cards.
■ For cases that appear sufficiently promising, include in the business case all the challenges,
costs and risks of undertaking a graphics processor accelerated implementation, as well as the
fit of such a leading-edge approach to your organization and business culture in terms of
innovation and risk taking.
■ Only proceed when the promise is high, the cultural fit is good and the value of success is great
enough to compensate for the complications and risks.
■ Others should not implement, but should continue to track this, as well as the more general
case of heterogeneous architectures, for when their maturity and risk might better align with
their needs.

For supported software products:

■ When vendor-supported application software includes acceleration with graphics cards, most
of the barriers that impact use with in-house code go away; hence, users should accept that a
supported product using GPGPU is sufficiently mature and suitable for adoption.

Business Impact: The use of an accelerator can produce a substantially higher effective
performance for some HPC applications. To the extent that higher accuracy, quicker results and/or
better understanding of risks can improve business results and competitiveness, graphics card
accelerated applications can make that feasible.

Benefit Rating: High

Market Penetration: 5% to 20% of target audience

Maturity: Early mainstream

Sample Vendors: AMD; Intel; Nvidia

Capacity on Demand (Unix)


Analysis By: John R. Phelps

Definition: Capacity on demand (COD) is the availability of inactive components — processors,
memory and input/output (I/O) adapters — in systems that can be activated rapidly on demand.

Position and Adoption Speed Justification: There are five different categories of COD:

■ Replacement of failed components
■ Permanent upgrade of systems
■ Temporary upgrade to handle disaster recovery situations
■ Temporary upgrade to handle planned events, such as data center reconfigurations or moves
■ Temporary upgrade to handle spikes in workload requirements

The more categories supported, the more sophisticated the implementation. The dynamics and time
required to activate components will vary across implementations — for example, the automatic
sparing of failed processors in seconds versus the manual activation of a permanent processor
upgrade, which might take hours or a day or more, depending on how much has been set up, or the
need to take the system down to add or take away the COD component, which might take hours.
Some resources may be added, and then later taken away. Other resources may only be able to be
added, and not taken away, because of technology, marketing practices and/or licensing practices.

Component and packaging technologies are advancing. In many cases, it's easier and less
expensive (in the long term) to populate systems with standardized packages that include extra,
inactive components (although this has been less prevalent in Unix systems than in mainframe
systems). In the Unix environment, vendors tend to offer this capability, but at a price that may
include a nonrecoverable premium. This will enable the rapid deployment of additional capacity,
while potentially reducing upfront costs.

The ability to turn temporary capacity on and off is available from more vendors; however, in many
cases, independent software vendor (ISV) licensing hasn't been resolved. A lack of software vendor
support for temporary capacity upgrades is the major inhibitor to its common adoption. Also, the
temporary upgrade to handle workload spikes may be subsumed by growing virtualization
capabilities, such as IBM's PowerVM Live Partition Mobility and Live Application Mobility and HP's
Online VM Migration. Unix system vendors vary in their ability to provide automatic sparing of
processors, with some providing only manual sparing. Disaster recovery backup COD is provided in
most cases, with special configurations or manually piecing together unique disaster recovery COD
solutions using other COD capabilities.

During the past year, we have seen little new support, but a maturing of the capability; therefore, we
have moved this technology only nominally on the Hype Cycle.

User Advice: Carefully examine the impact of software billing on temporary processor COD to see
whether total savings can be achieved, compared with other methods. Premium pricing that comes
from having extra components preinstalled and inactive may affect savings, so it is critical to
understand any upfront or nonrecoverable premium pricing. Also, the price of an activation,
combined with the minimum time the capacity can be activated, should be compared with the
number of times it might be activated and the length of time needed during each projected
activation during the projected life of the system. This should be compared with the price of
purchasing the component. Ensure that there is no time deadline to force the purchase of extra,
inactive components, which may not be needed yet.
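
That comparison is easy to script once the vendor quotes are in hand. The figures below are
hypothetical placeholders, and software billing impacts would be layered on top of this
hardware-only view.

    # Hypothetical comparison of temporary COD activations vs. buying the component outright.
    upfront_premium = 5_000                 # nonrecoverable premium for preinstalled capacity
    price_per_activation_day = 300
    min_days_per_activation = 7             # minimum billable activation period
    activations_per_year = 4
    system_life_years = 4

    purchase_price = 25_000                 # price to permanently activate or buy the component

    cod_total = (upfront_premium
                 + price_per_activation_day * min_days_per_activation
                 * activations_per_year * system_life_years)
    print("COD over system life: $%d vs. outright purchase: $%d" % (cod_total, purchase_price))
    print("COD is cheaper" if cod_total < purchase_price else "Purchase is cheaper")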

Business Impact: COD affects multiple areas. Automatic sparing of failed components at no extra
cost provides enhanced system availability. The ability to provide for backup COD is causing many
businesses to consider providing their own in-house disaster sites, rather than using third-party
providers (we have seen this less in the Unix environment than in the mainframe environment).
Temporary COD will affect applications across all industries in which transaction volumes vary
widely, as well as those with temporary spikes in workload demand. The attractiveness of any COD
offering will vary, based on the potential cost of delays in providing capacity rapidly, or the extra
costs of overprovisioning capacity.

Benefit Rating: Moderate

Market Penetration: 20% to 50% of target audience

Maturity: Early mainstream

Sample Vendors: Fujitsu Technology Solutions; HP; IBM; Oracle

P2V Server Management


Analysis By: Philip Dawson; Andrew Butler

Definition: Physical to virtual (P2V) server management is defined as products that manage the
hypervisor and all virtual machine (VM) instances that run on servers. The needs, capabilities,
installed bases and maturity of offerings managing these virtual objects are different from the tools
just managing the hardware, which we track under virtual-to-physical technology.

Position and Adoption Speed Justification: Hardware management tools have always been
largely proprietary and attach rates vary from platform to platform, and from vendor to vendor.
Blade servers and other modular designs continue to be managed and deployed through the server
administration tools that only the hardware vendor of choice can provide. Together with the tools
from the OS and VM vendors, this creates a minimum of three administrative layers. Although the
adoption of tools from vendors such as HP, IBM, Cisco Systems and Dell varies, the tools are
increasingly being developed and used to manage, deploy and distribute virtual input/output (I/O),
OS and VM software on blades and server fabrics as integrated infrastructure, as well as related
high-availability services and storage functions.

User Advice: If you have mature installations of server administration tools, then consider extending
these to virtualized environments for consolidation, as well as for virtual I/O and other workload
migration capabilities. In addition to maximizing the return on investment, the continuity offered by
established tools can improve consistency and productivity by building on proven and trusted
administrative processes.

Business Impact: The convergence of P2V tools and functions will increasingly drive the usage of
virtualization beyond consolidation, as well as deliver more-agile and mature virtualization benefits,
such as improved high availability and disaster recovery, rapid provisioning of new assets,
chargeback and live migration across physical boundaries. These benefits are often difficult to
assess separately from the move to blade and fabric form factors as integrated systems, because
the form factor often is changed and is introduced at the same time as the server management tool.

To perform disaster recovery, these P2V management tools have to manage much more than the
server. They have to interface with the network and storage, and move the virtual hosts to the new
locations, including the data — the foundation of software-defined anything (SDx). Increasingly,
storage virtualization allows traditional network storage to replicate data and storage domains and
assist migration of the failing and live VMs.
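
A minimal sketch of the VM inventory such tools build on, assuming the libvirt Python bindings and a
local KVM/QEMU host, is shown below; commercial P2V suites add the network, storage and migration
orchestration on top of this kind of data.

    # Minimal VM inventory via libvirt (assumes libvirt-python and a local QEMU/KVM host).
    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    for dom in conn.listAllDomains():
        state, max_mem_kib, mem_kib, vcpus, cpu_time_ns = dom.info()
        print("%-20s active=%s vcpus=%d mem=%d MiB"
              % (dom.name(), bool(dom.isActive()), vcpus, mem_kib // 1024))
    conn.close()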

Benefit Rating: High

Market Penetration: 20% to 50% of target audience

Maturity: Mature mainstream

Sample Vendors: Cisco Systems; Dell; Fujitsu; HP; IBM; Oracle

Recommended Reading: "Use Sizing, Complexity and Availability Requirements to Determine
Which Workloads to Virtualize"

Server Virtual I/O


Analysis By: Philip Dawson; Andrew Butler

Definition: Server virtual input/output (I/O) facilitates the shift of a workload or workloads among
servers — typically highly virtualized, x86-based blade servers (although not exclusively). Virtual I/O
demand is not exclusive to blades, but blade server topology is particularly well-suited to the
technology, and the rack-dense nature of blades more easily justifies the investment in virtual I/O.
This I/O flexibility benefits scale-out application workloads and, increasingly, DBMS for
transactional and analytic use, as well as for high availability (HA) and disaster recovery (DR) assistance.

Position and Adoption Speed Justification: Virtual I/O works by providing a layer of abstraction
between the server and the I/O devices so that, if an OS and application are moved — either
manually or dynamically — from one physical server to another, then the Media Access Control
(MAC) and/or World Wide Name (WWN) address can stay with the OS and/or application. In other
words, without this type of virtual I/O, a new MAC or WWN address would have to be assigned
each time an OS and/or application is moved to a new physical location. This type of server virtual
I/O can be implemented in a fabric switch at the firmware level or on a chip.
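
The underlying idea, an I/O identity that follows the workload rather than the physical server, can be
illustrated with a toy sketch. This is not any vendor's API; real implementations live in switch
firmware or silicon, and the address registry below is purely illustrative.

    # Toy illustration of a MAC address bound to a workload profile rather than a server.
    # Real virtual I/O lives in switch firmware or silicon; this only demonstrates the concept.
    import random

    LOCAL_PREFIX = [0x02, 0x00, 0x00]        # locally administered unicast range

    def new_virtual_mac():
        tail = [random.randint(0x00, 0xFF) for _ in range(3)]
        return ":".join("%02x" % b for b in LOCAL_PREFIX + tail)

    profile_macs = {"erp-db-01": new_virtual_mac()}   # identity belongs to the workload profile

    def assign(profile, server, fabric_table):
        fabric_table[server] = profile_macs[profile]  # the same MAC reappears wherever it runs
        return fabric_table

    table = assign("erp-db-01", "blade-07", {})
    table = assign("erp-db-01", "blade-12", table)    # after a move, no re-addressing is needed
    print(table)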

The demand for virtual I/O for servers is growing steadily, as blade-based platforms increase their
footprint and the blade user community routinely adopts virtualization technology for new
implementations. This capability is available through multiple vendor strategies. Solutions such as
HP Virtual Connect (deployed on the HP BladeSystem c-Class) and Cisco's Unified Computing
System (UCS) offer a switch-based structure. Third-party vendors, such as Xsigo Systems (which
was acquired by Oracle in 2012), also offer hardware-based solutions that can be deployed with
blades and rack-optimized servers from multiple hardware vendors. The alternative option is to
deploy at the firmware level; this is feasible with blade chassis from several vendors, including HP,
IBM, Dell and Oracle. Firmware-based virtual I/O permits MAC address virtualization within the
standard design.

User Advice: The adoption of advanced virtualization technologies (for example, live migration)
helps quantify the investment in virtual I/O. This technology also sits at the heart of most fabric-
based infrastructure initiatives. Evaluate which type of I/O virtualization is required (MAC and/or
WWN) and which is best-suited for a given environment (for example, firmware or switch level) to
set up appropriate selection parameters. Consider the proprietary nature of each virtual I/O
implementation, and determine which vendor will provide appropriate support for it. Understand the
time frame around the proprietary requirements to reach a standards-based solution in a five-year
window. Evaluate virtual I/O switch offerings in the context of storage area network (SAN) switch
requirements to ensure compatibility.

Business Impact: Server virtual I/O makes it easier to install and manage servers by reducing the
time and cost associated with moving an OS and/or application from one server device to another.

Benefit Rating: High

Market Penetration: 20% to 50% of target audience

Maturity: Early mainstream

Sample Vendors: Cisco; Dell; Exar; Fujitsu; HP; IBM; Oracle

Recommended Reading: "Introducing the Technology Dependency Matrix for Fabric-Based
Infrastructure"

Offload Engines
Analysis By: Carl Claunch

Definition: Offload engines move processing for specific functions — such as TCP/IP stack
processing, XML processing, Java Virtual Machine (JVM) processing and complex mathematical
computations — from general-purpose processors onto one or more specialized components.
Users cannot write programs to run on the offload engines, which are restricted to providing only
defined functions.

Position and Adoption Speed Justification: Offload engines provide advantages in software
licensing, performance and scalability by using specialized or hidden processors to handle work that
would otherwise run on a standard server processor. This frees the server processor to handle the operations in which it
excels. In some cases, offload engines can change the economics by reducing the size of the
general server that is needed, thus cutting software license charges for any software installed on it,
even if the offload engines aren't faster at running the offloaded tasks.

To efficiently accomplish the offload, the OS must participate in the offloading process, and the
connection between the offload engine and the server's main memory must be quick, efficient and
scalable. In many cases, the OS uses fast connections, such as Peripheral Component Interconnect
Express (PCIe). In others, it may implement direct memory access. This is why limited OS support for
offload engines is hindering the broader implementation of this technology.

Furthermore, because many server processors are underused, the desire to offload workloads for
efficiency is less compelling until consolidation takes place. Finally, offload engines must be
completely invisible at the application layer, and not convertible by users to general-purpose use to
gain software vendors' support for ignoring those offload engines in their licensing fees.

Despite these challenges, offload engines will be facilitating new levels of server performance and
scalability in the next two to three years. Input/output (I/O) and bus-connected offload engines have
been around for more than four decades. AMD and Intel have invested in specifications to better
support offload engines at the chipset level. As another example of the use of this technology, IBM
has supported offload engines in System z for at least a decade and other types of offload engines
for much longer (such as cryptographic coprocessor). Many high-speed Ethernet cards contain
TCP/IP offload engines (TOEs), which accomplish some processing that would otherwise require
substantial amounts of processor capacity in the server.
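
One practical way to see which offload functions a NIC already provides on Linux is ethtool's offload
listing; the sketch below simply wraps that query (it assumes ethtool is installed, and the interface
name is an example).

    # Query NIC offload features via ethtool (assumes a Linux host with ethtool installed).
    import subprocess

    def show_offloads(interface="eth0"):          # interface name is an example
        out = subprocess.run(["ethtool", "--show-offload", interface],
                             capture_output=True, text=True, check=True).stdout
        for line in out.splitlines():
            if "segmentation-offload" in line or "checksum" in line:
                print(line.strip())

    show_offloads("eth0")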

The drive by system vendors to produce differentiated servers for specific workloads or roles will
lead to more use of offload engines as a means of delivering a clearly different experience for the
user of those differentiated systems.

If the offload engine is not dedicated to specific functions, but could run any code suitably created
for it, it falls into the category of heterogeneous architectures.

User Advice: Enterprises should evaluate offload engine technology for tasks that are dragging
down conventional server processors. This includes TCP/IP stack handling and JVM processing.
Big-data-related processing is expected to be the next target of offload engines. However,
enterprises must ensure that the OS vendor and the surrounding software ecosystem will fully
support offloading. Furthermore, investments in offload engines are necessarily locked to that role,
whereas money spent on general-purpose server systems can be protected if the original need
goes away because the general-purpose processors can simply be assigned to run other software.

Business Impact: Offload engines either aren't conventional processors or are permanently
inaccessible to users except in their specialty role. They can increase server performance
without increasing the licensing costs associated with those processors.

Benefit Rating: Moderate

Market Penetration: 5% to 20% of target audience

Maturity: Early mainstream

Sample Vendors: AMD; IBM; Intel; Oracle

Shared OS Virtualization (Nonmainframe)


Analysis By: John R. Phelps

Definition: Shared OS virtualization is used to provide an environment that enables multiple
applications to use one instance of an OS. It enables the dynamic allocation of OS resources to
each application without affecting the operation of other applications. Shared OS virtualization
requires three components to make a suitable environment: isolation, transparency and workload
management. This is different from virtualization using a hypervisor, sometimes known as
partitioning virtualization.

Position and Adoption Speed Justification: Isolation ensures that different applications cannot
affect each other, from an availability or security aspect. Transparency enables applications that
were running in their own physical systems to run in one system, without having to modify the
applications or any common addresses they may have been using. Workload management enables
managed and dynamic control of resource distribution to different applications, based on priorities
and business goals.

Various technologies provide these three components — from the base OS providing all three
components, to products that enhance workload management and depend on the OS to provide
isolation and transparency. However, these products are not fully developed in many nonmainframe
environments, and application requirements, and independent software vendor pricing and support
models inhibit adoption in many cases. Unix environments have moved faster in this area than Linux
and Windows. The three major Unix products (Oracle's Solaris Containers, IBM's AIX Workload
Partitions and HP's HP-UX Secure Resource Partitions) are maturing to the point that most users
believe it is safe to run multiple applications under one copy of the OS. Enterprise Linux vendors are
starting to offer commercial support for Linux Containers.

Shared OS virtualization is beginning to climb the Slope of Enlightenment because of its increased
use across Unix and Linux platforms. It has been a slow movement because Windows and Linux
(other than Parallels' support of Linux) capabilities still lag behind mainframe and Unix capabilities,
as most x86 vendors concentrate on the partitioning virtualization of VMware, Hyper-V, KVM and
Xen. We believe that because of the success of partitioning virtualization in the Windows and Linux
environment, as well as the cost to add the needed modifications to support shared OS
virtualization, the Windows environment will not see much improvement in shared OS virtualization
support. This lag in Windows is why this technology is not positioned farther along the Hype Cycle.
The 20% to 50% penetration and the movement from last year is based on Unix and the more
recent work on Linux Containers and not on the Windows environments. Based on the shift from
Unix to Windows and Linux, and the fact that Windows and Linux were more involved with
partitioning virtualization, this technology had a high possibility of becoming obsolete before
reaching the Plateau of Productivity. However, with the more recent work on Linux Containers,
which is now open-source software and being merged into the Linux kernel (changes based on
OpenVZ Parallels/SUSE/Oracle efforts), it now has a chance to continue to move to the Plateau of
Productivity.

User Advice: A crucial difference between shared OS (or container-based) virtualization and
partitioning (or hypervisor-based) virtualization lies in the fact that, in hypervisor-based virtualization,
the OS is not shared by all applications. This means it suffers from the overhead of running multiple
copies of the OS. While this gives the flexibility of using OSs of different versions, or even
completely different OSs, it suffers the downside that running multiple OSs consumes more
physical resources than a single, shared OS. So when deciding on the type of virtualization to run
on a server, consider that shared OS virtualization provides lower overhead and better shared-
resource usage, but a less-flexible application environment.

Increased emphasis on shared OS virtualization in the Unix world has established it as a more
common environment, but it remains nascent for Windows and emerging for Linux environments.
Carefully examine the separation and transparency provided by the OS platform to understand
whether it is enough for the environment. Also, examine the methods (percentages, shares or goal-
based) used to balance resources across the applications that are executing at the same time, as
well as the resources (CPU, memory occupancy, storage and network priority queuing, and
bandwidth) that control application priority to ensure they have the capability to match workload
performance to business goals, in a constantly changing environment. Do not consider this
technology for running test, development and production on the same physical server. Be cognizant
of the impact of running the applications on one patch level and version of the OS, and the impact
of planned and unplanned outages on combined workloads.
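
On Linux, the shares-based resource control described above is exposed through control groups, which
container runtimes drive on the user's behalf. A minimal sketch, assuming a cgroups v1 CPU hierarchy
mounted at the conventional path and root privileges:

    # Minimal shares-based CPU control via cgroups v1
    # (assumes the hierarchy is mounted at /sys/fs/cgroup/cpu and root privileges).
    import os

    CGROUP_ROOT = "/sys/fs/cgroup/cpu"

    def create_group(name, cpu_shares, pid):
        path = os.path.join(CGROUP_ROOT, name)
        os.makedirs(path, exist_ok=True)
        with open(os.path.join(path, "cpu.shares"), "w") as f:
            f.write(str(cpu_shares))              # relative weight; the default is 1024
        with open(os.path.join(path, "tasks"), "w") as f:
            f.write(str(pid))                     # move the process into the group

    # Give the production application twice the CPU weight of the batch workload.
    create_group("prod_app", cpu_shares=2048, pid=12345)   # example PIDs
    create_group("batch", cpu_shares=1024, pid=23456)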

Business Impact: Using shared OS virtualization reduces the number of OS instances
organizations must manage and maintain, which reduces labor costs. Using shared OS virtualization
for server consolidation enables more efficient use of hardware resources, lowering the required
number of physical servers.

Benefit Rating: High

Market Penetration: 20% to 50% of target audience

Maturity: Early mainstream

Sample Vendors: HP; IBM; Microsoft; Oracle; Parallels

x86 Servers With Eight Sockets (and Above)


Analysis By: Carl Claunch

Definition: These servers are based on x86 technology (Intel Xeon or AMD Opteron) and offer eight
or more processor sockets. Servers with higher socket counts can deliver greater vertical scaling for
applications and virtualization deployments that demand additional processor or memory
performance. Larger symmetric multiprocessing (SMP) servers can also be deployed as
consolidation and virtualization hosting servers, as well as platforms to support high-performance
computing applications that require vertical memory aggregation.

Position and Adoption Speed Justification: Most x86 server shipments are designed in one- and
two-socket versions (even the market for four-socket servers represents only a small percentage of
sales), so the market for eight-socket servers (and larger) represents only a tiny fraction of the
market.

Application development trends for Windows and Linux favor a scale-out approach that assists the
market shift toward smaller-capacity servers. If you're planning to deploy larger servers to
consolidate multiple smaller workloads, compare the cost of smaller form factors that could offer
more price performance than one larger design. Watch for limits in the virtualization or consolidation
technology that restrict these products to systems with smaller numbers of sockets. Applications
for which capacity needs to increase rapidly can quickly outgrow the maximum scaling of more-
standard four-socket x86 servers.

Because the market for x86 servers with more than four sockets is narrower, relatively few vendors
sell servers with eight sockets and above (although the list is growing). As many as eight sockets
can be implemented with the latest Xeon processors using less engineering and at a lower cost,
because of the capabilities designed into the standard products by Intel.

At one time, custom chips and special expertise were required to connect more than four sockets,
but these functions are now available in processor chips or standard high-volume chipsets that
make it easier to create and sell large x86-based systems. These standard capabilities lower the bar
for entry to the eight-socket-and-larger market. Despite the modest market size, many vendors
(Fujitsu, Bull, Unisys and IBM) are investing in the segment as new workloads emerge that can
leverage the high vertical scaling, memory capacity and input/output (I/O) connectivity that such
systems enable, especially given the relatively higher margins these large socket count servers
command.

Some classes of applications will always require a larger physical server design. Most mature
vertical applications are deployed on reduced instruction set computer (RISC), Itanium or mainframe
servers; however, some high-volume, high-update-rate database servers are increasingly moving to
Windows and Linux. The growing maturity of these x86 OSs, and the desire for standard
approaches across the data center, are enough to justify a small but tangible market for larger-scale
x86 servers.

Other applications, such as hosted virtual desktop (HVD) and telepresence, are examples of
workloads capable of straining smaller form factors. Vendor choice is more restricted in this market,
and some vendors that focus their energies on high-volume markets may never choose to address
the market. Large SMP x86 servers are more likely to be deployed as business-critical platforms,
which enables specialist vendors to profitably address the market with higher-touch services.

The long-term viability of the eight-socket-and-larger market will be determined by the pace of
adoption of alternative technologies, such as extreme-low-energy servers and fabric computing,
because these emerging technologies could make the need for single, large-scale x86 servers
obsolete before the eight-socket technology reaches the Hype Cycle's Plateau of Productivity. This
does not mean the market need for large, single images will dissipate, but the market need will
increasingly be addressed by fabric-based servers that can be aggregated to satisfy combinations
of scale-up and scale-out requirements.

User Advice: Although x86 servers with more than four sockets may appear to be excellent
consolidation servers, verify that the larger CPU count does not create software-licensing penalties,
such as steeply increased Linux subscription costs. Validate whether independent software vendor
(ISV) licensing policies are applied to the sockets, cores or threads. Ensure that the virtualization
software can run efficiently with the high CPU counts found in those machines. If deploying larger
x86 servers as virtualization hosting platforms, check that the memory and I/O bandwidth are
sufficient to support the number of virtual machines that could be deployed.
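
The licensing check is easy to put into numbers once the ISV's metric is known. The rates and core
factor below are hypothetical placeholders for whatever your vendor actually quotes.

    # Hypothetical licensing comparison for a large SMP host; substitute real ISV rates.
    sockets = 8
    cores_per_socket = 10

    per_socket_rate = 40_000
    per_core_rate = 9_000
    core_factor = 0.5                      # some ISVs discount cores by a published factor

    by_socket = sockets * per_socket_rate
    by_core = sockets * cores_per_socket * per_core_rate * core_factor

    print("per-socket licensing: $%d" % by_socket)
    print("per-core licensing:   $%d" % by_core)
    print("difference (core minus socket): $%d" % (by_core - by_socket))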

Business Impact: Using a smaller number of larger-capacity servers can reduce administration
overhead and streamline the hardware footprint in the data center. The growing focus of
mainstream x86 vendors, such as IBM and HP, will be on introducing volume economics, and on
aligning with users deploying a greater number of small to midsize machines.

Benefit Rating: Moderate

Market Penetration: 5% to 20% of target audience

Maturity: Early mainstream

Sample Vendors: Bull; Fujitsu; HP; IBM; NEC; Oracle; SGI; Unisys

Recommended Reading: "Cool Vendors in the Server Market, 2013"

"IT Market Clock for Server Virtualization and Operating Environments, 2012"

"Market Trends: 21 Signposts for the Future of the Unix Market"

Linux on Four- to 16-Socket Servers


Analysis By: George J. Weiss

Definition: This technology involves the ability of the Linux OS to support servers capable of
symmetric multiprocessing (SMP) with as many as 16 sockets.

Position and Adoption Speed Justification: Linux is no longer untested in production
environments on servers using as many as 64 processors; however, it is uncommon. Recent Linux
releases support as many as 4,096 cores (theoretical). However, in actual deployments, most
systems do not exceed four sockets and 40 cores. Nevertheless, Linux can support the largest
scalable Xeon servers from Intel OEMs. Nonuniform memory access (NUMA) architectures can
logically scale to 4,096 cores (e.g., SGI). Vendor benchmarks and production installations indicate
that Linux can support applications using online transaction processing (OLTP), with performance
comparable to Unix on 16-processor SMP servers. Many vendors — including HP, IBM, Oracle,
Fujitsu, Bull, NEC and Unisys — are shipping Linux-based systems for business-critical
applications. Many of these systems span NUMA architectures, particularly in scientific
applications, and cover multiple architectures, including zLinux and IFLs on IBM mainframes. More
recently, in-memory configurations, such as SAP Hana, with at least four sockets for workloads that
benefit from large memory, have come into use.
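
Before committing to a large SMP or NUMA configuration, it is worth confirming how the kernel actually
lays out nodes, CPUs and memory. A minimal sketch that reads the standard sysfs topology files on a
Linux host:

    # Inspect NUMA topology from sysfs (standard paths on modern Linux kernels).
    import glob
    import os

    for node in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
        name = os.path.basename(node)
        with open(os.path.join(node, "cpulist")) as f:
            cpus = f.read().strip()
        with open(os.path.join(node, "meminfo")) as f:
            total_kb = [l.split()[-2] for l in f if "MemTotal" in l][0]
        print("%s: cpus=%s, memory=%s kB" % (name, cpus, total_kb))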

Scanning the most recent Transaction Processing Performance Council TPC-C benchmarks from
2013 and 2012, of the five SMP configurations submitted and approved, four results were for Linux
ranging in performance from 1.3 million transactions per minute of TPC-C (tpmC) to 5 million tpmC.
Oracle's most recent T5-8 server registered 8.5 million tpmC on Solaris, with later availability in
September 2013. The Linux configurations were on Oracle and IBM database management systems
(DBMSs) and were run on Red Hat 6.4 and 6.2 and Oracle Unbreakable Kernel.

If the top 10 overall TPC-C benchmarks are taken (dating to 2007), five of them were on Linux
ranging between 2 million tpmC and 5 million tpmC. Although Gartner always cautions against
making decisions based on synthetic benchmarks, due to vendor skills at tuning the systems, on a
relative comparison, the results indicate that Linux configurations are now scaling to within 80% of
the benchmark performances of Unix. (The one exception is the Oracle SPARC T5 configured with
128 cores and 1,024 threads.) Linux configurations generally come in at roughly one-fifth of the
comparable system configuration pricing.

Platform vendors most likely to run such an expensive benchmark have a vested interest in
protecting their Unix franchises, so they might be hesitant about pursuing Linux benchmarks, other
than in lower-end configurations.

Run specifically targeted benchmarks, because different workloads will yield different results.
Considerable progress in scaling continues on Linux by the kernel community, with increasing
attention being paid to network performance and virtualization. Many factors will influence the
degree of scalability and performance: proximity of the application to the processor, memory size,
data location, paging, caching algorithms, network performance, interrupt handling, file system, etc.
The complexity lies in the variety of system-related functions, tied to the applications, that can affect
performance positively or negatively. The offsetting positive is the technical and development skill in the
kernel community, Linux distributors and OEMs, which creates a continuous cycle of innovation, test
and QA that helps resolve problems.

In the meantime, next-generation applications will primarily drive scale-out architectures, with Linux
a primary OS target. SMP systems of high socket count will be less in demand. Users will be at
different points on their risk mitigation curves in deploying Linux on higher-performance SMP
servers. Therefore the high-end, 16-socket market is likely to remain a minor part of the total Linux
server market (single-digit percentage of total revenue) during the next year. Its larger purpose will
be as proof points of Linux scalability.

User Advice: Users at a fork in the road on Unix upgrades should take this opportunity to evaluate
Linux scaling; reliability, availability and serviceability (RAS); and costs on current Unix workloads
for comparison with Unix. Linux is mature and scalable enough to run most ERP applications and
DBMSs. During the past five years, the bias of deploying Unix (versus Linux and Windows) has
shifted significantly toward Linux (or Windows), including DBMSs.

Users can be assured that x86 servers offer sufficient performance and reliability for most workload
demands, with lower acquisition costs. The four- to eight-socket Unix/RISC server platforms will
continue to be targets for migration to x86. Verify that Unix RAS and scaling are no longer
significant benefits, compared with Linux. Create configuration and pricing matrices for comparison.
Most Linux expansion will continue to come from lower-end servers in the one- to four-socket
configurations; NUMA systems and eight-socket servers with high-performance file systems, such
as XFS, are also candidates for high-performance deployments. Linux virtualization (KVM)
performance has significantly improved with its integration as part of the Linux kernel. In addition,
improvements will arise during the next 12 to 24 months in high availability, containers, trace
diagnostics and improved input/output (I/O) performance, further narrowing the gap with Unix.

Users should consider Linux on x86 as a viable alternative to reduced instruction set computers
(RISCs), where Linux skills are already present in the IT organization. This includes DBMSs (within
the 10-terabyte range) used for OLTP and data warehouse deployments. For most applications on
servers in the four- to eight-socket range, Linux is a self-tuning OS that requires little or no
intervention by data center engineering. All major vendors are building Linux into their integrated
and converged system architectures for such use cases as analytics, virtual desktop infrastructure,
OLTP, data warehouse and other functions. In addition, organizations with Unix skills should make
the transition to scalable Linux systems with relative ease.

Business Impact: The significant total cost of ownership (TCO) benefits of Linux will shrink as
manageability, availability and uptime support in SLAs are charged extra as part of subscription
support contracts. Back-end DBMSs and high-performance applications with low-latency or
consolidation requirements have a positive business impact. Users will be more likely to derive
benefits if databases and applications can be architected to benefit from horizontal scaling trends,
where applicable.

Benefit Rating: Moderate

Market Penetration: 5% to 20% of target audience

Maturity: Early mainstream

Sample Vendors: Fujitsu; HP; IBM; NEC; Unisys

Recommended Reading: "Deploy a Successful Linux ERP Platform Strategy When Leveraging
x86"

"Reducing Linux Maintenance Costs: The Do's and Don'ts"

"Impact of the New Generation of x86 on the Server Market"

"Midrange RISC/Unix: The New Frontier for x86 Servers"

Liquid Cooling
Analysis By: John R. Phelps

Definition: Liquid cooling uses a liquid, such as water or a refrigerant, rather than air, to cool the
data center and equipment. This allows the cooling solution to be brought closer to the heat source,
thus requiring less, if any, fan power. Liquid cooling can solve the high-density server-cooling
problem because liquid (conductive cooling) conducts more than 3,000 times as much heat as air
and requires less energy to do so, thereby allowing increased data center densities. Newer piping
technology means the probability of leaks is extremely low.
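
To illustrate why conductive liquid cooling copes with densities that room air cannot, the following back-of-the-envelope Python sketch compares the coolant flow needed to remove a fixed rack heat load with air versus water, using textbook fluid properties and an assumed 30 kW rack with a 10 K coolant temperature rise. It is an illustrative calculation, not a sizing tool.

```python
# Back-of-the-envelope comparison of coolant flow needed to remove a given rack
# heat load with air versus water. Values are textbook approximations; a 30 kW
# rack and a 10 K temperature rise are assumed for illustration only.

HEAT_LOAD_W = 30_000      # rack heat load to remove (W)
DELTA_T_K = 10.0          # allowed coolant temperature rise (K)

# density (kg/m^3) and specific heat (J/(kg*K)) at roughly room temperature
COOLANTS = {
    "air":   {"rho": 1.2,   "cp": 1005.0},
    "water": {"rho": 998.0, "cp": 4186.0},
}

def volumetric_flow_m3_per_s(heat_w, rho, cp, delta_t):
    """Flow needed so that rho * flow * cp * delta_t carries away heat_w."""
    return heat_w / (rho * cp * delta_t)

for name, props in COOLANTS.items():
    flow = volumetric_flow_m3_per_s(HEAT_LOAD_W, props["rho"], props["cp"], DELTA_T_K)
    print(f"{name:5s}: {flow:.4f} m^3/s  (~{flow * 1000 * 60:,.0f} liters/minute)")

# Typical output: air needs on the order of 2.5 m^3/s (~150,000 L/min of airflow),
# while water needs well under a liter per second, which is why point liquid
# cooling copes with rack densities that room air handling cannot.
```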

Position and Adoption Speed Justification: This technology is in deployment, with little fanfare.
With the ongoing power and cooling crisis facing data centers and the continued growth of high-
density servers, the in-row, in-rack and in-chassis liquid cooling solutions offered by various
vendors are being installed. A new trend is the movement away from raised floors through the use
of liquid cooling solutions. At the Gartner Data Center Conference in December 2012, the results of
electronic polling during a track session on data center facilities best practices showed that 32% of
the 150+ respondents used some form of liquid cooling in their data centers. The details showed
that 8% of the respondents used only liquid cooling for their data center (2% used water cooling
only, and 6% used refrigerant cooling only). It also showed that 24% used a combination of
computer room air conditioning (CRAC) and liquid cooling (15% used a combination of room and
water cooling, and 9% used a combination of refrigerant and room cooling). Based on these results,
which are similar to those from polls taken at the 2009, 2010 and 2011 Gartner Data Center
Conferences and client interactions, we have market penetration at 20% to 50% and have
advanced the position of this technology on the Hype Cycle.

User Advice: Implement a modular, zoned strategy for new data center build-outs, balancing the
areas that have traditional power and cooling requirements against areas that will be devoted to
high-density servers. Use liquid cooling, such as in-row and in-rack cooling solutions, and plumb
new data center space for liquid cooling to support them. Make sure the cost of adding piping in
the data center, if not already installed, is included in your analysis. Continue to monitor data center
liquid cooling solutions from manufacturers and technology vendors that improve overall cooling
efficiency and fit into the organization's data center configuration. Plan for the elimination of raised
floors in areas that will support liquid cooling solutions.

Business Impact: By placing a cooling solution near the heat load via liquid cooling, the cooling
solution's efficiency is greatly enhanced, and less fan power is needed, which will help power usage
and the carbon footprint. If the chilled liquid can be made available for point cooling systems, then
computer rooms that are reaching the limit of their traditional cooling (CRAC units and underfloor
plenum) capacities can continue to be leveraged. Allowing greater computing power in the same
footprint may help some facilities avoid the costs involved with a data center expansion.

Benefit Rating: High

Market Penetration: 20% to 50% of target audience

Maturity: Early mainstream

Sample Vendors: American Power Conversion; Emerson Network Power (Liebert); HP; IBM;
OptiCool Technologies; Rittal

Recommended Reading: "Do You Need a Raised Floor in Your Next Data Center?"

Entering the Plateau

Capacity on Demand (Mainframe)
Analysis By: John R. Phelps; Mike Chuba

Definition: Mainframe capacity on demand (COD) refers to the availability of inactive components
— processors, memory and input/output (I/O) adapters — in systems that can be rapidly activated.
It has five categories:

■ Replacement of failed components
■ Permanent upgrade of systems
■ Temporary upgrade to handle disaster recovery situations
■ Temporary upgrade to handle planned events, such as data center moves
■ Temporary upgrade to handle spikes in workload requirements

Position and Adoption Speed Justification: During the past decade, vendors have expanded the
range of COD capabilities. The mainframe vendors, especially IBM, have added extensive support,
compared with Unix and x86 vendors. Although Unix systems have added support of some of the
five categories listed above, the x86 space has little of this support. An increase in the number of
supported categories means more-sophisticated implementations. The dynamics and time required
to activate components will vary across implementations — for example, the automatic sparing of
failed processors in seconds versus the manual activation of a permanent processor upgrade,
which might take hours or a day or more, depending on how much has already been set up. Some
components (for example, memory) can only be added permanently, whereas other resources (for
example, processing units) can be added and taken away later.

Component technology and packaging technology are advancing so that, in many cases, it is easier
and less expensive (in the long run) to populate systems with standardized packages that include
extra, inactive components, which is especially true for mainframes. This will enable rapid
deployment of additional capacity with reduced upfront costs. The ability to turn temporary capacity
on and off is available from more vendors than ever; however, in many cases, hardware vendor
software licensing and independent software vendor (ISV) license handling of COD environments
have never been completely resolved.

Software vendor support for temporary COD is the major inhibitor to its general adoption, except
where the mainframe vendor has helped negotiate the software pricing as an intermediary for
temporary upgrades (for example, Unisys). This is the reason it is not already on the Plateau of
Productivity. IBM did not make any material changes to its COD offerings with the introduction of
the zEC12 in 2012.

User Advice: Examine the impact of software billing (hardware vendors and ISVs) on temporary
COD to see whether total savings can be achieved, compared with other methods. Assess the
premium pricing that comes from having extra components preinstalled and inactive, and how that
could affect savings. Compare the price of an activation, combined with the minimum time the
capacity can be activated, with the number of times and length of time needed during the projected
life of the system.

Next, compare that price with the price of purchasing the component. ISVs have begun to negotiate
the terms and conditions of temporary COD, but there is no uniformly accepted formula, because
users' needs and circumstances vary greatly. Users need to push hard to get acceptable contract
terms for software pricing. Pricing formulas may be straightforward; however, depending on the
number and timing of activations, they can cause users to avoid implementing temporary COD.
Examine all forms of COD, because some may apply to your situation, and some may not. Each
type of COD has different uses and costs.
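
The comparison described above can be reduced to simple arithmetic. The Python sketch below contrasts the lifetime cost of temporary COD activations with an outright purchase of the component; every price, minimum billing period and activation count is an invented placeholder rather than IBM or ISV pricing, so substitute your own contract figures before drawing conclusions.

```python
# Hypothetical sketch of the comparison described above: temporary capacity-on-
# demand activations versus permanently purchasing the component. Every price,
# duration and count below is an invented placeholder, not an IBM or ISV quote.

PERMANENT_PURCHASE_COST = 250_000   # one-off price of buying the engine outright
ACTIVATION_FEE = 9_000              # charge per temporary activation
MIN_BILLED_DAYS = 3                 # minimum period billed per activation
DAILY_RATE = 1_500                  # charge per billed day of temporary capacity
SOFTWARE_UPLIFT_PER_DAY = 800       # extra hardware/ISV software charges while active

def temporary_cod_cost(activations_per_year, days_per_activation, system_life_years):
    billed_days = max(days_per_activation, MIN_BILLED_DAYS)
    per_activation = ACTIVATION_FEE + billed_days * (DAILY_RATE + SOFTWARE_UPLIFT_PER_DAY)
    return per_activation * activations_per_year * system_life_years

# Example: four one-week peaks a year over a five-year system life.
temporary = temporary_cod_cost(activations_per_year=4, days_per_activation=7, system_life_years=5)
print(f"Temporary COD over system life: ${temporary:,}")
print(f"Permanent purchase:             ${PERMANENT_PURCHASE_COST:,}")
print("Temporary COD wins" if temporary < PERMANENT_PURCHASE_COST else "Permanent purchase wins")
```

With these placeholder numbers the permanent purchase wins; with rarer, shorter peaks the temporary option would, which is exactly the sensitivity the advice above asks users to test.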

Business Impact: COD has multiple areas of impact. The automatic sparing of failed components
at no extra cost provides enhanced system availability. The mainframe's ability to automatically
replace failed processors without the OS or applications being aware it has happened is a major,
high-availability differentiator from other platforms. The potential for easy and low-cost provisioning
of backup COD is enabling many businesses to consider providing in-house disaster sites, rather
than contracting with third-party providers. Sometimes, the ability to temporarily shift workloads to
other systems for a planned event, such as a data center move or reconfiguration, is important for
easily planning time frames without having to wait for a period of inactivity.

Applications exist across all industries in which transaction volumes vary widely over time, and have
temporary peaks in workload demand. If the need for temporary capacity is critical, the peaks are of
short duration and occurrences are rare, then temporary COD can provide a significant cost savings
with these types of applications. For users with extreme peaks and valleys of workload demand,
temporary COD is especially important on the mainframe platform, because the cost of processors
and software based on total capacity is higher than it is for other platforms.

Benefit Rating: High

Market Penetration: More than 50% of target audience

Maturity: Mature mainstream

Sample Vendors: Bull; Fujitsu Technology Solutions; IBM; Unisys

Mission-Critical Workloads on Linux
Analysis By: George J. Weiss

Definition: This technology includes all the operations and critical foundations that enable
organizations to function 24/7, including all the necessary ecosystems. Linux must be accepted as
essential to an organization in every way if it is to accommodate complex, mission-critical
workloads, including effective support for databases, recovery, disaster tolerance, system
management, SLAs and dynamic resource allocation. Linux also must work well within the
conventional skill sets of mainstream IT departments.

Position and Adoption Speed Justification: The Linux kernel continues to make significant strides
in functionality, with three major vendors of Linux — Red Hat, SUSE and Oracle (in order of market
share) — announcing kernel release levels that incorporate functional improvements such as volume
management, checksums, snapshot, rollback capabilities, kernel tracing tools and containers (e.g.,
Red Hat RHEL 6.3, SUSE Enterprise Linux 11 Service Pack 2 and Oracle Unbreakable Kernel).
Third-party independent software vendors (ISVs) and OEMs continue to address high availability
(HA; see sample vendors). According to Gartner surveys, more than 60% of IT organizations
consider Linux their strategic choice for deploying database management systems (DBMSs) on x86
platforms. Based on user dialogues, Gartner believes that Unix for ERP has effectively become a
legacy installed base. Over the past five years, Linux has been replacing Unix in increasingly demanding
workloads, including database management (for example, IBM DB2, Oracle Database and Oracle
MySQL), midtier applications (such as those from Oracle and SAP), and Web services and e-
commerce applications. Furthermore, SUSE Linux is the strategic OS for deploying SAP's Hana in-
memory configurations, as further validation of Linux in mission-critical applications (Red Hat
supports SAP business solutions). All these serve as reasons for the confidence in Linux as a
mission-critical environment as detailed in the 2012 Hype Cycle, including:

■ Scalability (to 1,024 cores logically)
■ Solid vendor support in technical services from platform and software vendors, such as Cisco, Dell, Fujitsu, HP, IBM, NEC, Oracle, SAP and Unisys
■ Solid OS distribution support from Linux providers, such as Red Hat, Oracle, SUSE and Canonical (Ubuntu)
■ A large ecosystem of third-party commercial and open-source software (OSS) that includes mission-critical support

All server vendors offer mission-critical infrastructure for Linux: HP Serviceguard, Oracle Engineered
Systems (e.g., Oracle Exadata and Oracle Exalogic), IBM Linux on Power and System z are a few
examples. APAC suppliers such as Fujitsu, Hitachi and NEC also deliver high levels of reliability,
availability and serviceability (RAS) on their x86 servers.

Previously, users have understood that they must invest time and effort in configuration,
performance analysis, testing and deployment. These skills were acquired over a decade of working
with Unix, and adapted to Linux. In addition, the Linux-HA project's Pacemaker work on heartbeat,
cluster resource management, monitoring and recovery continues to make its way into the Linux
distribution stack. Linux distributors also offer ready-to-deploy HA resiliency features, and recent
x86 processors contain many features in silicon to improve RAS capabilities.

Significant kernel progress continues in scalability and reliability, as well as in storage and volume
management, over larger user populations. Improved performance continues to be demonstrated
on platforms from IBM, HP, Dell, Fujitsu, Oracle, NEC and other vendors. Kernel enhancements
have addressed improvements in scheduling, file system performance, memory management and
input/output (I/O), and have enabled Linux to scale in large symmetric multiprocessing (SMP) and
cluster configurations. Gartner believes that most users of ERP applications and DBMSs can
migrate to Linux with minimal exception (e.g., extremely high I/O demands). Previously, the DBMS
was considered optimally deployed on Unix.

While multisocket/multicore Xeon systems are emerging as alternatives to Unix, we do not expect
widespread deployment of SMP systems larger than eight sockets running high-density virtualization
in scale-up configurations. Users prefer low, entry-priced 1- and 2-socket servers for the low incremental costs
in clusters, racks or blade chassis. In addition, users continue to migrate applications from high-end
Unix systems to free up capacity on more expensive hardware, and to decrease the upgrade cycle
costs. We expect this migration momentum to slow as only the highest end, mission-critical Unix
remains. Some x86 platforms will move upscale into multisocket, multicore designs to replace
midrange Unix servers with some benefits of SMP designs as users gain more confidence in Linux
scalability and migrate applications. Linux's mission-critical position on the Hype Cycle moved
incrementally to reflect its continuing technological progress balanced against the market's
willingness to create a Linux-ready enterprise from top to bottom.

User Advice: Enterprises should continue to expand their use of Linux to take advantage of
industry standard hardware and the associated cost savings. If you have well-trained resources,
using Linux in mission-critical and highly available environments is acceptable. Investigate vendor-
based hardware and software support options.

Linux kernel maintainers and distributors have brought the performance and reliability of the kernel
and the overall system to near parity with Unix. Unix performance benefits are provided mainly by
richer-functionality platform vendors, such as HP, IBM and Oracle; however, these vendors also are
strong Linux platform suppliers, and Linux can run on reduced instruction set computer (RISC)
servers as well (which are of little market interest). Vendors such as HP, IBM and Oracle will lead
much of the go-to-market push around Linux mission-criticality, but at the same time will seek to
preserve their own Unix installed bases, which could result in a partial slowing of the trend off Unix.

One significant driver toward mission-critical workload support has been third-party Linux-based
HA clustering technologies such as Oracle Real Application Clusters (RAC), HP Serviceguard,
Symantec and SIOS, in addition to Linux distributor HA products offered by Red Hat (Cluster Suite
and HA Add-On) and SUSE (through its SUSE Linux Enterprise High Availability Extension). Oracle
provides technical support for its database solutions, middleware, business applications and the
Oracle Linux OS, and is a frequent contributor to the kernel community. Other helpful factors
include SUSE's relationship with SAP to ensure Level 1 through Level 3 support of the OS, with
handoff transparency. Dell, HP, IBM, SUSE, SGI, Oracle and Red Hat offer Linux support, including
all-level support. Integration and support skills for large applications and database workloads on
64-bit processors and large memory configurations have been available for Linux in production.
Enhanced security is available as a feature through Security-Enhanced Linux (SELinux) and SUSE
AppArmor, supporting access control policies and mandatory access policies.
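
As a minimal illustration of verifying that the mandatory access control features mentioned above are actually active on a host, the following Python sketch reads the sysfs locations that SELinux and AppArmor conventionally expose. Paths and availability vary by distribution, so treat it as an assumption-laden example rather than an authoritative check.

```python
# Minimal sketch: report whether SELinux or AppArmor mandatory access control is
# active on this host. Paths are the conventional sysfs locations; distributions
# may differ, so treat this as illustrative rather than authoritative.

from pathlib import Path

def selinux_mode():
    enforce = Path("/sys/fs/selinux/enforce")
    if not enforce.exists():
        return "not present"
    return "enforcing" if enforce.read_text().strip() == "1" else "permissive"

def apparmor_enabled():
    flag = Path("/sys/module/apparmor/parameters/enabled")
    return flag.exists() and flag.read_text().strip().upper().startswith("Y")

if __name__ == "__main__":
    print(f"SELinux : {selinux_mode()}")
    print(f"AppArmor: {'enabled' if apparmor_enabled() else 'not present/disabled'}")
```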

Users should continue to track Linux community developments of core dump, trace and diagnostic
utilities, as well as efforts by Linux distributors and third parties to improve and deliver on HA,
disaster recovery, event management and monitoring tools. An example would be to patch the
kernel without reboot for higher availability (e.g., Oracle offers this capability with Ksplice).

Business Impact: The business impact of this technology remains high. The primary advantages
are promises of lower total cost of ownership (TCO) and vendor flexibility. In organizations starting
out with Linux, these cost savings may be hidden initially until resource skills and competencies are
developed (less so if Unix skills exist). Most organizations report cost advantages, especially when
moving from Unix to x86 hardware, after attaining these skills and tools; however, complexity will
have a bearing on how large the savings are and whether any savings are gained. The greater level
of flexibility with a Linux platform from the number of available hardware choices can contribute to
lower TCO in hardware contract negotiations. Larger and more affordable configurations can be
achieved, with blades and clusters becoming more scalable in cores, thus acting in horizontal and
vertical capacity modes. In big data and analytics running on server cluster nodes, software
scheduling, monitoring and mobility algorithms at the server and storage device levels can remove a
failed cluster node and transfer workloads to neighboring nodes, thus providing capability similar to
fault tolerance (but not continuous availability).

Benefit Rating: High

Market Penetration: More than 50% of target audience

Maturity: Mature mainstream

Sample Vendors: Canonical; Dell; Fujitsu; HP; IBM; NEC; Oracle; Red Hat; SAP; SGI; SUSE; Unisys

Recommended Reading: "Linux and Windows Surpass Unix for ERPApplication Server
Deployments"

"How to Effectively Deploy ERP Applications and DBMSs"

"Reducing Linux Maintenance Costs: The Do's and Don'ts"

"What's Important When Selecting an Enterprise Linux Distributor"

"HP Shifts to Offense With Its Mission-Critical x86 Server Blade Platform Strategy"

"The Future of Unix: Hazy and Overcast, So Reach for the Umbrella"

Blade Servers
Analysis By: Andrew Butler

Definition: Blades are small form-factor servers housed in a chassis that provides tightly integrated
power, cooling, input/output (I/O) connectivity and management capabilities, enabling the easy
addition of new components and the replacement of failed or outdated technology.

Position and Adoption Speed Justification: The market for blade servers has been constrained
since 2011, as cannibalization from multinode servers eroded the growth that blades were gaining
through increased market adoption of fabric-based infrastructure (FBI). Some hyperscale data
centers have avoided the blade route for modular technology to gain maximum cost savings,
preferring to invest in multinode servers instead. Based on Gartner's 2012 data, blades represent
about 12% of the total server market (down from 13% at the end of 2011) in units, and 21% in
revenue. The revenue share has barely changed since 2011. After a long period where HP and IBM
dominated the blade server market, the market has become more even with the 2009 entry of
Cisco. Despite being a newcomer to the server market, Cisco is carving out a strong share of the
blade server market, rivaling IBM in many geographies (and even rivaling HP in a few geographic
markets). HP, IBM and Cisco control 80% of the blade server market (based on 2012 shipments).
HP remains the volume market leader by far, with a greater revenue share than IBM and Cisco
combined. As blade servers form the foundation of most vendor FBI strategies, the market will
continue to attract investment from most server vendors, especially as Gartner believes that FBI
products have much higher profit margins and vendor lock-in than other x86 server businesses.

Most blades are deployed in data centers to host large numbers of small, centralized x86
workloads. However, HP, IBM and Oracle offer the ability for users to mix and match non-x86
blades that work alongside x86 blades in the same chassis. These are aimed at small or midsize
businesses (SMBs), as well as branch and departmental needs, as the market for non-x86 blades
continues to grow — albeit from a very low base. HP's latest generation of Unix-Itanium servers is
also completely leveraged from HP's c-Class blade technology, with the ability to deploy either
100% Itanium or hybrid Itanium/x86 systems. The adoption of blades as a foundation for server
virtualization and for private cloud solutions is also growing rapidly, as blade processor technology
and scaling increasingly rival most rack-optimized designs. This drives the desire for virtual I/O, and
most virtual I/O strategies are focused on the blade server market.

From a software perspective, x86 blades support industry-standard principles as much as any other
x86 platform. They run every combination of OS and stack, with no need for software affinity.
However, there are no real application advantages to compel users to deploy x86 blades instead of
more-conventional x86 servers. There are no multivendor standards in place for interoperability of
blades from different vendors, and the chassis and interconnect technology remain highly
proprietary.

The management ecosystem for blade servers is still maturing, and some vendors will invest heavily
to strengthen lock-in and introduce unique functionality that accelerates a company's willingness to
deploy. Integrated systems based on blade technology increase the potential for lock-in even
more, as they typically enforce the usage of a limited menu of storage, networking and management
tooling options. Hence, a decision to deploy blades needs to be based on the facility or operational
benefits that blades can deliver, including dense packaging, simpler cabling, power and cooling
efficiencies, faster time to provision, resource pool asset deployment, user hardware provisioning
and replacement of failed parts.

User Advice: The use of blade servers for back-end data center functions — such as large
database or business intelligence (BI) applications — is a growing market for blades as new-
generation, four-socket blades deliver ever-increasing vertical scale. Blades are emerging as a
viable platform for some Hana deployments. Growing demand for integrated systems is also driving
new opportunities for blades. Meanwhile, blade servers are a viable choice for most Web and
application serving workloads, and growing cluster workloads like Hadoop, where the limited
scaling will rarely be an issue. Organizations should position blade solutions in the data center
based on track record, vendor proof points and application support. Make vendors commit to the
number of future generations of blades they expect to support in the existing chassis technology, as
swapping the chassis represents the most disruptive migration effort (especially if there is no
forward compatibility for the blades).

If blade servers fit within the power/cooling profile of their data centers, then organizations should
evaluate virtualization and blades as complementary strategies to mitigate data center space and
energy problems. Maximize the ROI of blades and mitigate vendor lock-in by ensuring that the
chosen chassis technology will remain strategic to the vendor for the longest possible time.

Challenge blade vendors to demonstrate viable chassis technology road maps and to underwrite
investment protection through contract commitments.

Blade vendors will employ different approaches to technology directions, such as CPU and memory
aggregation, embedding storage and network technologies, virtual I/O support and SMB/midmarket
positioning for blades. Evaluate vendor offerings according to how well their blade objectives are
targeted. As FBIs become more prevalent in the data center, this will extend the probability of lock-
in to storage, networking and management tools.

When considering blade solutions, ensure that required peripherals (such as specific switches for
established storage area networks [SANs]) are supported. Blade adoption frequently drives the
demand for virtual I/O, and vendor strategies used to deliver this capability will vary. The use of the
hardware vendor's management tools is important to effectively employing blades. Therefore, verify
the coexistence status between blade management tools and other tools, such as virtualization
control and OS management.

Business Impact: Blade servers provide increased operational efficiency, because of their fast
provisioning and deprovisioning features, as well as their speed and ease of replacement. This
enables new technology to be added at a more granular rate, and supports the easier replacement
of failed components and performance upgrades. Blade servers may save space, because of their
increased density and power-saving ability based on shared power, fans and other components in
the chassis. Because most vendors' integrated system strategies are based on the use of blades,
they become a good foundation for combined server, storage and networking solutions that are
integrated at the factory or channel level. Blades represent a stable and mature server form factor
for most workloads that are suitable for limited scaling.

Benefit Rating: Moderate

Market Penetration: 20% to 50% of target audience

Maturity: Mature mainstream

Sample Vendors: Bull; Cisco; Dell; Fujitsu; Hitachi; HP; Huawei; IBM; NEC; Oracle; SGI

Recommended Reading: "Magic Quadrant for Blade Servers"

Linux on System z
Analysis By: Mike Chuba; John R. Phelps

Definition: Linux on System z is the set of Linux distributions that has been ported to run on the
IBM z/Architecture. The most common deployment uses the Integrated Facility for Linux (IFL)
specialty engine(s) and the z/Virtual Machine (z/VM) virtualization software. IBM has had success
with Linux on System z, where the software pricing model is based on the number of cores,
regardless of platforms. This includes applications traditionally run on distributed architectures and
traditionally complex applications, such as database and transaction processing.

Position and Adoption Speed Justification: Linux on System z has been supported for more than
12 years and its growth continues to outpace the overall growth in millions of instructions per
second (MIPS) for mainframes. We believe that, during the past two years, Linux on System z has
grown from 19% to 21% of the IBM-installed MIPS, and that approximately 35% of IBM's
mainframe customers have at least one IFL installed. The bulk of the ongoing growth has slowly
shifted from adding first-time customers (including Linux-only mainframes) to repeat business
where the increased capacity of each succeeding generation of processors and the proven track
record are creating incremental opportunities within the Linux on System z installed base. The
number of Linux applications supported now exceeds 3,000. In 2012, IBM introduced a new
generation of mainframes, the zEC12, which offered approximately 25% more capacity for the IFL
than was offered on its predecessor z196 at the same price point of $55,000. Both SUSE and Red
Hat offer Linux distributions to run on the IFL under z/VM.

The application mix goes from simple infrastructure-type applications, such as Web, email and file/
print, to complex applications, such as SAP applications, Cognos Business Intelligence (BI) and
Oracle 10g Real Application Clusters (RAC) database applications. The use of the IBM System z
virtualization capabilities of logical partitioning and z/VM, along with Common Criteria evaluated
assurance level (EAL) 5+ security certification, enables large numbers of Linux systems to share a
single mainframe system with multiple, diverse workloads.

The high availability of hardware (dynamic processor sparing), potential cost savings (high levels of
consolidation and reduced software license charges based on fewer processors) and tight
integration with mainframe applications (with the use of IBM's HiperSockets internal network) are
driving growth.

User Advice: Mainframe users should examine the use of Linux on System z for most Linux
applications, especially those that communicate with mainframe applications or among themselves
via TCP/IP, or need data accessible on the mainframe. The use of HiperSockets can simplify and
secure the network connections (all internal) and can enhance performance (shortened TCP/IP
stack and memory-to-memory move). Where applications exist both on z/OS and Linux and offer
similar functionality, consider the cost savings that might be accrued by offloading work on the
mainframe to less expensive IFL specialty engines. The IFL will not affect the cost of legacy
software, and any work offloaded to the IFL will open up mainframe capacity for legacy applications
to grow. For users who want to keep the consolidated Linux workloads separated, a dedicated
System z Linux server (System z with only IFLs installed), called Enterprise Linux Server, is
available.

The two major Linux distributions available on System z come from SUSE and Red Hat. Our clients
tell us the decision regarding which to install boils down to the functions needed, availability and
quality of local support, and the desire to have a single vendor providing support across multiple
platforms. While each vendor claims to have the majority of users, Gartner believes that SUSE,
which at one time had greater than 90% of the users (before Red Hat supported System z), still
maintains a lead, at around 60% of the users, even though Red Hat has gained ground over the
past few years.

In general, do not consider engineering/scientific workloads for Linux on System z, and be wary of
other heavy compute-intensive workloads. Although the zEC12, z196 and z114 have enhanced
capabilities for compute-intensive workloads, users should carefully examine performance before
committing to these types of workloads. When looking at implementing Linux on System z versus
other platforms, consider the higher hardware availability and security of System z. A major benefit
of the System z platform is its ability to consolidate a large number of Linux virtual systems and run
them in parallel on a single server with fewer processors. This will lead to potential hardware
savings (depending on the number of Linux systems) and possible software price savings, due to
the fewer number of processors needed and how this may affect software license fees. You should
understand what applications you will need to have supported by independent software vendors
(ISVs); although the Linux on System z application portfolio is growing, it is still smaller compared
with the x86 platform. Also, understand that the mainframe is not usually the first to get new ports
of upgrades, so verify with the ISVs what time frame you should expect for upgrades to
Linux applications running under System z. Support for System x blades running Linux within the
zEnterprise BladeCenter Extension (zBX) can provide an expanded set of applications, if they can
be integrated with mainframe applications and data and have the hardware managed as one.
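
The consolidation and core-based licensing arithmetic behind these potential savings can be sketched simply. The Python example below compares licensed cores for standalone x86 servers against a shared pool of IFLs under z/VM; the guest count, consolidation ratio and per-core fee are invented placeholders, not IBM or ISV pricing.

```python
# Hypothetical sketch of the consolidation arithmetic behind core-based software
# licensing on Linux on System z: many small Linux guests sharing a few IFL cores
# versus dedicated x86 servers. Guest counts, consolidation ratio and per-core
# license fees are invented placeholders, not vendor pricing.

LINUX_GUESTS = 200
X86_CORES_PER_GUEST = 4          # cores licensed per standalone x86 server
GUESTS_PER_IFL_CORE = 20         # assumed consolidation ratio under z/VM
LICENSE_FEE_PER_CORE = 3_000     # annual per-core fee for a core-priced product

x86_cores = LINUX_GUESTS * X86_CORES_PER_GUEST
ifl_cores = -(-LINUX_GUESTS // GUESTS_PER_IFL_CORE)   # ceiling division

print(f"x86 licensed cores: {x86_cores:>4}  -> ${x86_cores * LICENSE_FEE_PER_CORE:,}/yr")
print(f"IFL licensed cores: {ifl_cores:>4}  -> ${ifl_cores * LICENSE_FEE_PER_CORE:,}/yr")
```

The consolidation ratio is the decisive and most workload-dependent input, which is why the savings depend on the number and behavior of the Linux systems being combined.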

Business Impact: There is a consolidation of front-end applications with back-end databases
through HiperSockets and high-availability offerings. Linux on System z enables major consolidation
of workloads that previously required separate servers, with associated hardware, software and
environmental savings in areas where large numbers of Linux systems can be combined. These
workloads can be consolidated on System z with little impact on mainframe legacy workloads. This
is because they run on IFL specialty engines and, therefore, do not use general-purpose processor
power and do not affect legacy software charges.

Benefit Rating: High

Market Penetration: 20% to 50% of target audience

Maturity: Mature mainstream

Sample Vendors: IBM; Red Hat; SUSE

Recommended Reading: "Organizations Slow to Adopt IBM's zBX"

"Significant IBM zEC12 Pricing Will Impact Data Center Planning"

"Six Things You Should Know About IBM System z114 Pricing"

"Changes in IBM Mainframe Pricing and Terms and Conditions Will Impact End Users"

VM Hypervisor
Analysis By: Thomas J. Bittman; Philip Dawson

Definition: A virtual machine (VM) hypervisor enables the operation of multiple OS instances
concurrently on one physical server without using a general-purpose host OS for primary access to
the hardware. The VM hypervisor enables hardware resources to be allocated on a fractionalized
basis. Mainframe virtualization is not included, because it is already well-established and mature.
However, other non-x86-based hypervisors are included for such technologies as HP Integrity
Itanium, Oracle SPARC and IBM Power Systems.

Position and Adoption Speed Justification: Server virtualization is a mature technology, and the
competitive market is stabilizing. Gartner estimates that more than 60% of all x86 architecture
workloads are running in VMs on top of hypervisors, and all Global 500 companies are using
hypervisors to some extent for their x86 architecture servers. During the past few years, there have
been impressive changes in hardware and hypervisor technologies to improve the performance of
workloads with high input/output (I/O) requirements, which has led to a marked increase in the
virtualization of mission-critical ERP and related database and mail servers, including clustering and
high availability (HA) options. While VMware created the x86 market for server virtualization in 2001,
the market includes a number of strong and maturing competitors (see "Magic Quadrant for x86
Server Virtualization Infrastructure"). Hypervisors are available for free, with a fee for maintenance,
support and management tools.

User Advice: Hypervisor-based virtualization is a strategic technology that is the default in most
large enterprises, and is becoming the default in most midsize organizations, even though
containers are gaining ground as an option for Linux consolidation. Although hypervisors can be
licensed for free, there are still functionality and maturity differences among them, and most
vendors require their specific layered virtualization management tools (which are not free, except for
low-end management tools) to manage them. The appropriate management toolset is critical. A
special focus should be put on the ability to manage VM life cycles (to avoid or monitor VM sprawl,
and to understand and control offline creation) and large numbers of VMs. The tools that enable
management flexibility, deployment and live VM migration should also be a focus, as should the
related functions around HA/disaster recovery (DR) and storage dependencies.
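
As an illustration of the VM life cycle visibility discussed above, the following Python sketch uses the libvirt bindings to inventory the domains defined on a local KVM host and flag those that are defined but not running, which is the raw material for any sprawl policy. It assumes the Python libvirt package is installed and a qemu:///system connection is available; real sprawl control (ownership, expiry, approvals) belongs in the management toolset, not in a snippet like this.

```python
# Illustrative sketch only: list libvirt domains on a local KVM host and flag
# inactive ones. Assumes python3-libvirt is installed and qemu:///system is
# reachable read-only; any real sprawl policy lives outside this snippet.

import libvirt

def inventory(uri="qemu:///system"):
    conn = libvirt.openReadOnly(uri)
    try:
        for dom in conn.listAllDomains():
            state, max_mem_kib, _, vcpus, _ = dom.info()
            status = "running" if dom.isActive() else "defined but inactive"
            print(f"{dom.name():<30} {vcpus:>3} vCPU {max_mem_kib // 1024:>6} MiB  {status}")
    finally:
        conn.close()

if __name__ == "__main__":
    inventory()
```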

Business Impact: Server virtualization technology reduces hardware costs through server
consolidation, and by increasing hardware utilization — but increases the software costs. It lowers
the barrier to entry for HA and DR solutions. Hypervisors also enable rapid server deployment, often
up to 30 times faster. This technology is an enabler for flexibility tools, such as live migration.
Almost all cloud infrastructure as a service (IaaS) providers still use hypervisors as a basic building
block used in conjunction with Linux Containers or alternatives, where necessary. The flexibility
enabled by hypervisors will give operations more freedom in workload hosting, and the potential to
migrate workloads in VM format to cloud service providers. In the long term, this technology will
become the basis for real-time infrastructures and private cloud computing services, and will enable
the flexibility to migrate from on-premises to off-premises sourcing.

Benefit Rating: High

Market Penetration: More than 50% of target audience

Maturity: Mature mainstream

Sample Vendors: Citrix; HP; IBM; Microsoft; Oracle; Red Hat; VMware

Recommended Reading: "Magic Quadrant for x86 Server Virtualization Infrastructure"

"Top Five Trends for x86 Server Virtualization"

"Use Sizing, Complexity and Availability Requirements to Determine Which Workloads to Virtualize"

Grid Computing Without Using Public Cloud Computers
Analysis By: Carl Claunch

Definition: Grid computing combines computers managed by more than one organization, whether
internal or external, to accomplish large tasks, such as derivative risk analyses or complex
simulations. The management domains can be separate companies, divisions, or different data
centers and operating organizations inside one company. The computers can be dedicated or can
contribute spare cycles to run grid work.

Position and Adoption Speed Justification: Grid computing is an extension of cluster computing.
It is used for large-scale computations in financial services and pharmaceutical organizations that
have appropriate applications, algorithms and new research processes. Grid application candidates
are computing-intensive, and can be parallelized so that parts of the processing can be done
across a distributed set of systems and can combine results in a central location. Electronics
companies, mechanical engineering firms and insurance companies are among the industries that
increasingly use grid computing. The use of grid computing is more common in universities and
national laboratories than in the private sector.
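
The scatter/gather pattern described above can be illustrated with a toy example. The Python sketch below splits a parallelizable task into independent work units, runs them on a pool of local processes standing in for grid nodes, and combines the partial results centrally; a real grid adds scheduling, data staging and cross-organization security that this omits.

```python
# Toy illustration of the scatter/gather pattern grid workloads rely on: a
# parallelizable task is split into independent work units, executed on a pool
# of workers (local processes standing in for grid nodes), and the partial
# results are combined centrally.

from concurrent.futures import ProcessPoolExecutor
import math

def price_scenario(seed: int) -> float:
    """Stand-in for one independent work unit, e.g. one Monte Carlo scenario."""
    values = [math.sin(seed * k) ** 2 for k in range(1, 50_000)]
    return sum(values) / len(values)

def run_grid_job(scenarios=64, workers=8):
    with ProcessPoolExecutor(max_workers=workers) as pool:
        partial_results = list(pool.map(price_scenario, range(scenarios)))
    # central combination step
    return sum(partial_results) / len(partial_results)

if __name__ == "__main__":
    print(f"Combined result across scenarios: {run_grid_job():.6f}")
```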

User Advice: Conceptually, grid computing could be used to help lower costs, or to increase the
efficiency of a fixed amount of work. More importantly, it can offer business advantages by
accomplishing what was infeasible with more-traditional approaches. Often, this means increasing
the accuracy of a model, producing results in an unprecedentedly short time and looking for
interactions earlier — for example, reducing the time it takes to search libraries of compounds as
drug candidates, or enabling a new business model.

When a business advantage can be gained from scaling up computing-intensive or data-intensive
processing, add grid computing to the list of potential implementation approaches. When the
objectives are mainly to reduce costs or improve efficiency, consider alternatives that are fully
mature and have no remaining issues to overcome.

Business Impact: Investment analysis, drug discovery, design simulation and verification, actuarial
modeling, crash simulation and extreme business intelligence tasks are areas where grid computing
may enable business advantages.

Benefit Rating: High

Market Penetration: More than 50% of target audience

Maturity: Mature mainstream

Sample Vendors: Digipede Technologies; HP; IBM; Microsoft; Univa

Mainframe Specialty Engines
Analysis By: John R. Phelps; Mike Chuba

Definition: Mainframe specialty engines are full-function general-purpose engines running at full
capacity and restricted to use in specific workloads. IBM offers three specialty engines: the
Integrated Facility for Linux (IFL) for Linux workloads, the System z Integrated Information Processor
(zIIP) for certain DB2 workloads and other information-intensive applications, and the System z
Application Assist Processor (zAAP) for Java processing and XML parsing.

Position and Adoption Speed Justification: After IBM announced support for zIIP and zAAP, it
has continued to expand the workloads that are eligible to run on zIIP specialty engines (some are
System z10 and zEnterprise only), such as XML parsing; IPsec network encryption/decryption
processing; most System Data Mover (SDM) processing associated with z/OS Global Mirror, z/OS
Common Information Model (CIM) Server and CIM Provider workload processing; and TCP/IP large
message processing for HiperSockets. IBM has also provided for more shared capacity by allowing
zAAP workloads to execute on zIIP engines on the System z10 and zEnterprise.

With the zEC12 announcement, no new specialty engines and no new functions for the existing
specialty engines were introduced, except that the z/OS v2.1 announcement added RMF
processing as eligible for zIIP processing. Although IBM may choose to add new functionality to
specialty engines in the future, the main development in this announcement was a statement of
direction indicating that the zEC12 would be the last high-end System z server to offer support for
the IBM zAAP. IBM will offer support for zAAP workloads on the IBM zIIP. For those users changing
over to all zIIPs in preparation for the dropping of zAAP support in the next generation, IBM
currently offers the conversion from zAAP to zIIP when doing an upgrade to the z196 or zEC12.
Starting in September 2012, IBM removed the restriction that prevents zAAP workloads from
running on a zIIP if a zAAP is also installed, to help with the transition. There is no additional charge
to do the conversion from zAAP to zIIP, as it is included in the upgrade fee for zIIPs and zAAPs
introduced with the z196 and continued with the zEC12.

Other software vendors offer products that have portions of their code eligible to run on the zIIP
engine. Unlike the IFLs, which do not use or depend on z/OS, zAAP and zIIP engines must be used
in conjunction with normal engines running z/OS. Because of this, zIIPs and zAAPs are sometimes
called z/OS specialty engines. A dedicated System z Linux server, equipped with IFLs only, is
named the IBM Enterprise Linux Server. Specialty engines allow IBM to stimulate the growth of new
workloads on mainframes by providing a discount on the new workloads while avoiding the impact
of the extra processor workload on software charges for other IBM and independent software
vendor (ISV) legacy workloads. The price of the specialty engines is significantly less than a general-
purpose engine, even though it is the same processor module. IBM uses the lower hardware pricing
and software costs, based on offloading workloads to specialty engines, as more targeted discount
mechanisms than simply dropping the price of the general-purpose engine.

The use and exploitation of System z offload or specialty engines have become commonplace in
IBM's larger mainframe installations. Specialty-engine million instructions per second (MIPS)
shipments (IFLs, zAAPs and zIIPs) continue to grow at a greater rate than that of traditional MIPS.

Although individual specialty engine types have not exceeded 50% penetration, the combination of
IFLs, zIIPS or zAAPs has led to a specialty engine penetration of greater than 50% of the market.

User Advice: Enterprises should understand that specialty engines are a strategic hardware
initiative for IBM to keep the cost of its mainframe platform competitive. The company continues to
enhance and evolve the mainframe for the benefit of all users. However, enterprises exclusively
running legacy workloads that cannot exploit specialty engines (or only exploit to a small degree)
will be challenged to see comparable levels of cost/performance improvement. Enterprises should
determine which workloads could exploit the specialty engines in order to realize savings. It is
possible to run the system without installing specialty engines and monitor how much of current
workloads could be offloaded if the specialty engines were installed. Software contracts with IBM
and ISVs should include clauses that lock in the current treatment of specialty-engine MIPS not
included in capacity-based pricing algorithms.
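
A simple way to frame that monitoring exercise is sketched below in Python: given an assumed share of the measured workload that is zIIP-eligible, estimate how many general-purpose MIPS could be offloaded and what that might mean for capacity-based software charges. The MIPS figures, eligibility ratio and per-MIPS rate are invented placeholders, not IBM or ISV pricing.

```python
# Hypothetical sketch of the sizing exercise described above: estimate how much
# installed general-purpose capacity could move to specialty engines and what
# that implies for capacity-based software charges. The MIPS figures, eligibility
# ratio and per-MIPS rate are invented placeholders, not IBM or ISV pricing.

INSTALLED_GP_MIPS = 4_000          # current general-purpose capacity
ZIIP_ELIGIBLE_RATIO = 0.30         # share of workload measured as zIIP-eligible
SOFTWARE_COST_PER_GP_MIPS = 1_200  # annual capacity-based software charge per MIPS

def offload_estimate(gp_mips, eligible_ratio, cost_per_mips):
    offloadable_mips = gp_mips * eligible_ratio
    remaining_gp_mips = gp_mips - offloadable_mips
    annual_sw_saving = offloadable_mips * cost_per_mips
    return offloadable_mips, remaining_gp_mips, annual_sw_saving

offload, remaining, saving = offload_estimate(
    INSTALLED_GP_MIPS, ZIIP_ELIGIBLE_RATIO, SOFTWARE_COST_PER_GP_MIPS)
print(f"Offloadable to zIIP: {offload:,.0f} MIPS")
print(f"General-purpose MIPS still driving software charges: {remaining:,.0f}")
print(f"Indicative annual software saving: ${saving:,.0f}")
```

Such an estimate is only a starting point; the actual saving depends on how hardware vendors and ISVs treat specialty-engine MIPS in their pricing formulas, which is why the contract clauses mentioned above matter.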

Organizations should not confuse specialty engines with the accelerator and appliance blades
found in the zEnterprise BladeCenter Extension (zBX). The zBX blades are not mainframe engines
and don't run z/OS or z/VM system code, as they are made up of Power or x86 architectures. The
z/OS specialty engines do not accelerate code; they simply provide more capacity at a lower price
by offloading work from general-purpose engines. One does not replace the other.

Business Impact: The use of specialty (offload) engines on the System z enables certain workloads
to run at a better price/performance than on traditional mainframe workloads. In many cases,
System z can become more price competitive with other platforms while maintaining traditional
mainframe attributes. This lets IBM attract new workloads at lower net prices. The part of the new
workload that runs on specialty engines (all Linux workloads on IFL, or part of the z/OS workload on
zIIP and zAAP) is not counted as part of the z/OS workload and does not permit the ISVs running on
z/OS to reap an undeserved license fee increase simply because the machine gets bigger. Because
these are specialty engines, they should be countable by ISVs only when their application runs on
the specialty engine; today (this could change in the future), they do not charge for zIIP and zAAP
MIPS. The use of the IFL specialty engine opens up the System z to a larger portfolio of newer
applications by enabling Linux applications to run on the mainframe in a highly consolidated
environment.

Benefit Rating: High

Market Penetration: More than 50% of target audience

Maturity: Mature mainstream

Sample Vendors: IBM

Recommended Reading:

"IBM zEnterprise EC12 Offers Performance, Capacity and New Architectural Structures"

"Organizations Slow to Adopt IBM's zBX"

"Six Things You Should Know About IBM System z114 Pricing"

"Changes in IBM Mainframe Pricing and Terms and Conditions Will Impact End Users"

Appendixes

Figure 3. Hype Cycle for Server Technologies, 2012

[Figure omitted: the July 2012 Hype Cycle chart, plotting the server technologies profiled in the 2012 report along the expectations-versus-time curve from the Technology Trigger through the Peak of Inflated Expectations, Trough of Disillusionment and Slope of Enlightenment to the Plateau of Productivity, with markers indicating the years to mainstream adoption.]

Source: Gartner (July 2012)

Hype Cycle Phases, Benefit Ratings and Maturity Levels

Table 1. Hype Cycle Phases

Innovation Trigger: A breakthrough, public demonstration, product launch or other event generates significant press and industry interest.

Peak of Inflated Expectations: During this phase of overenthusiasm and unrealistic projections, a flurry of well-publicized activity by technology leaders results in some successes, but more failures, as the technology is pushed to its limits. The only enterprises making money are conference organizers and magazine publishers.

Trough of Disillusionment: Because the technology does not live up to its overinflated expectations, it rapidly becomes unfashionable. Media interest wanes, except for a few cautionary tales.

Slope of Enlightenment: Focused experimentation and solid hard work by an increasingly diverse range of organizations lead to a true understanding of the technology's applicability, risks and benefits. Commercial off-the-shelf methodologies and tools ease the development process.

Plateau of Productivity: The real-world benefits of the technology are demonstrated and accepted. Tools and methodologies are increasingly stable as they enter their second and third generations. Growing numbers of organizations feel comfortable with the reduced level of risk; the rapid growth phase of adoption begins. Approximately 20% of the technology's target audience has adopted or is adopting the technology as it enters this phase.

Years to Mainstream Adoption: The time required for the technology to reach the Plateau of Productivity.

Source: Gartner (July 2013)


Table 2. Benefit Ratings

Transformational: Enables new ways of doing business across industries that will result in major shifts in industry dynamics.

High: Enables new ways of performing horizontal or vertical processes that will result in significantly increased revenue or cost savings for an enterprise.

Moderate: Provides incremental improvements to established processes that will result in increased revenue or cost savings for an enterprise.

Low: Slightly improves processes (for example, improved user experience) that will be difficult to translate into increased revenue or cost savings.

Source: Gartner (July 2013)


Table 3. Maturity Levels

Embryonic
■ Status: In labs
■ Products/Vendors: None

Emerging
■ Status: Commercialization by vendors; pilots and deployments by industry leaders
■ Products/Vendors: First generation; high price; much customization

Adolescent
■ Status: Maturing technology capabilities and process understanding; uptake beyond early adopters
■ Products/Vendors: Second generation; less customization

Early mainstream
■ Status: Proven technology; vendors, technology and adoption rapidly evolving
■ Products/Vendors: Third generation; more out of box; methodologies

Mature mainstream
■ Status: Robust technology; not much evolution in vendors or technology
■ Products/Vendors: Several dominant vendors

Legacy
■ Status: Not appropriate for new developments; cost of migration constrains replacement
■ Products/Vendors: Maintenance revenue focus

Obsolete
■ Status: Rarely used
■ Products/Vendors: Used/resale market only

Source: Gartner (July 2013)

Recommended Reading
Some documents may not be available as part of your current Gartner subscription.

"Understanding Gartner's Hype Cycles"

"Agenda Overview for Data Center, Server, Storage, Network Infrastructure and Modernization,
2013"

"Cool Vendors in the Server Market, 2013"

"Magic Quadrant for Blade Servers"

"How IT Departments Must Change to Exploit Different Types of Appliances"

"How to Be Sure Fabric-Based Infrastructures Fit Your Needs"

More on This Topic


This is part of an in-depth collection of research. See the collection:

■ Gartner's Hype Cycle Special Report for 2013

GARTNER HEADQUARTERS

Corporate Headquarters
56 Top Gallant Road
Stamford, CT 06902-7700
USA
+1 203 964 0096

Regional Headquarters
AUSTRALIA
BRAZIL
JAPAN
UNITED KINGDOM

For a complete list of worldwide locations,


visit http://www.gartner.com/technology/about.jsp

© 2013 Gartner, Inc. and/or its affiliates. All rights reserved. Gartner is a registered trademark of Gartner, Inc. or its affiliates. This
publication may not be reproduced or distributed in any form without Gartner’s prior written permission. If you are authorized to access
this publication, your use of it is subject to the Usage Guidelines for Gartner Services posted on gartner.com. The information contained
in this publication has been obtained from sources believed to be reliable. Gartner disclaims all warranties as to the accuracy,
completeness or adequacy of such information and shall have no liability for errors, omissions or inadequacies in such information. This
publication consists of the opinions of Gartner’s research organization and should not be construed as statements of fact. The opinions
expressed herein are subject to change without notice. Although Gartner research may include a discussion of related legal issues,
Gartner does not provide legal advice or services and its research should not be construed or used as such. Gartner is a public company,
and its shareholders may include firms and funds that have financial interests in entities covered in Gartner research. Gartner’s Board of
Directors may include senior managers of these firms or funds. Gartner research is produced independently by its research organization
without input or influence from these firms, funds or their managers. For further information on the independence and integrity of Gartner
research, see “Guiding Principles on Independence and Objectivity.”
