
WHITE PAPER

The Modernization of
Storage Architectures

TABLE OF CONTENTS

Why a Modern Storage Architecture Matters
Evolving Architectures: Integrated Systems and Storage Area Networks
Five Critical Capabilities You Need to Consider for Your Storage Architecture
    #1 Availability
    #2 Scalability
    #3 Performance
    #4 Extensibility
    #5 Manageability
Aligning the Right Workloads with the Right Architecture
Summary
About Brocade

The advent of new storage technologies, combined with new business requirements, is driving IT leaders to evaluate ways to modernize their IT infrastructures. This involves selecting the right storage architecture in order to thrive in this new era. Organizations therefore need to consider the critical capabilities that their businesses require and understand the limitations and opportunities that different architectures present. This paper examines storage architectures that are currently available and compares and contrasts them based on such criteria as availability, scalability, performance, extensibility, and manageability. Since major differences exist between these storage architectures, it is critical to understand the strengths of each technology and, perhaps more important, the weaknesses.

Why a Modern Storage Architecture Matters

For much of the past two decades, storage architecture design has seen little change, despite the explosion of data and the evolution of traditional storage technologies. However, digital transformation has driven a modern storage transformation. A massive technologically enabled business shift has seen the storage infrastructure evolve from its traditional role as a critical business operations function to that of a vital business enabler.

As with the other waves of infrastructure innovation triggered by this shift, major developments are now happening within the storage landscape. New business application demands require more adaptive, intelligent, high-performing, and scalable storage architectures to achieve goals at a far quicker pace. At the forefront of this change is flash-based storage technology, which enables companies to achieve breakthrough application performance while reducing power and space requirements. Flash has been as disruptive to storage architectures as virtualization was to servers.

At the same time, IT has to recognize that its role has changed. Not only must IT support traditional workloads, it must also support new and evolving workloads that demand different critical capabilities from storage. Further, IT cannot simply deploy the environment and forget about it. IT must be able to monitor commitments to a Service Level Agreement (SLA) in order to show that it is meeting the customer's performance requirements. This requires a new level of instrumentation. In addition, IT must provide storage and workload enablement at the edge of the network, so that data for activities such as analytics, transactional processing, and distributed data storage can be moved closer to the line of business. How close will be determined by the application.

We continue to see headlines about how to eliminate Storage Area Networks (SANs), or "just say no" to SANs. This guidance, and perhaps perception, can be very misleading. Any system that consists of more than one server inherently depends upon a storage network. Whether that storage network is NFS, iSCSI on IP, SCSI on Fibre Channel, or NVMe on Fibre Channel, it is still a storage network. How one decides which type of storage network to deploy is driven by a number of factors. As we will discuss in this paper, not all architectures and not all applications are created equally. New storage architectures are starting to be adopted by organizations outside the realm of IT. Lines of business and other parts of organizations are starting to evaluate pre-packaged architectures to serve a singular application need or requirement, such as Hyper-Converged Infrastructure (HCI), Software-Defined Storage (SDS) stacks, and cloud. The outcome of the review, and the infrastructure you build as a result, will have a massive impact on your organization's ability to meet its goals today and into the future.

Evolving Architectures: Integrated Systems and Storage Area Networks

Over the past few years, there has been massive storage innovation, as evidenced by new storage systems, capabilities, and protocols. These have all had an impact on how established best practices are maintained or, in some instances, not maintained. There is no single approach that can accommodate every application or workload in terms of performance, availability, scalability, extensibility, and manageability.

Many of these new systems unify compute, storage, and management for a variety of workloads suited to non-critical applications. Different architectural categories have emerged under this classification: Hyper-Converged Infrastructure (HCI) systems and Software-Defined Storage (SDS). Some may refer to these architectures as turnkey solutions, or simple pre-engineered building blocks. Others have tightly integrated components and management, while still others use commodity-based components.

Infrastructure architects now have many options when balancing high performance, agility, scale, availability, security, and manageability. The unpredictability of new and unknown workloads adds another dimension to design considerations. Of course, these integrated systems are still compared to the tried-and-true server, SAN, and storage architecture.

Fundamentally, how data is created, how much data is created, how data is consumed, and the ultimate end goals of data usage determine the storage needs and storage access requirements. The prudent maxim is that not all data is created equally; therefore, not all data needs to be stored equally. And while change can certainly be a good thing, it inevitably comes with costs: some obvious, some hidden.

Figure 1: A comparison of critical capabilities based on storage network type. (Rows: Availability, Scalability, Performance, Agility, Extensibility, Manageability, Security, Acquisition and Ownership Costs; columns: Fibre Channel Storage, IP Storage iSCSI/NAS, Hyper-Converged Infrastructure, Software-Defined Storage, Converged Infrastructure.) If these ratings are at odds with your existing or future environment vision, revise as appropriate.

No single architecture solves every technical or business hurdle, because each is designed to achieve different outcomes. Looking at the capabilities of each approach can help determine which one is likely to deliver the best solution overall for specific workloads. Such a decision requires careful consideration and planning to balance cost, performance, and efficiency. Design and solution decision errors can have a critical impact on the organization because application services may be compromised, or even rendered unavailable, and the cost of correcting errors can be exorbitant. To avoid such
fates, it is important to understand the designs available and what to expect from them. Not fully understanding the architecture can lead to additional costs and issues down the road.

SANs strive to provide an always-available, shared storage resource, accessible to all manner of servers with a variety of operating systems, supporting mission-critical applications, database workloads, and general-purpose virtualized workloads. SANs, a de facto standard for most enterprises, optimize the data flow to and from storage and compute resources. Storage arrays provide the flexibility to assign storage to hosts from pools of available capacity, avoiding wasted storage, with the flexibility to add capacity on demand without disruption.

As data continues to expand, organizations have become more reliant on the capabilities of storage systems and networks. SAN innovation continues to enhance network services, improving availability, manageability, and performance. Brocade® SANs provide fabric instrumentation that can reveal the details of a single IO, reassign slow-drain devices to better-suited paths, and automate and optimize virtualization traffic. Such instrumentation provides IT with continuous feedback about the storage flows of applications on an existing fabric.

Five Critical Capabilities You Need to Consider for Your Storage Architecture

There are a considerable number of critical capabilities to contemplate when choosing the right storage strategy, some of the more important being availability, scalability, performance, extensibility, and manageability. Let's discuss each in further detail.

#1 Availability

Companies are continuing to architect for non-stop availability for large-scale, all-flash data centers. Non-stop business requirements are driving the need to achieve six nines availability (just over 30 seconds of downtime in a year). Availability must be considered at multiple points within the architecture, such as:

• When a company's customer-facing application goes offline due to a failure within the application infrastructure, customers are apt to look elsewhere, probably to a competitor.

• Machine failure of mission-critical computers involved in manufacturing, retail, and banking can lead to material resources and supplies running out, missed schedules, failure to meet contractual commitments, and financial losses for shareholders.

• An inability to complete credit-card transactions can cost thousands to millions of dollars in lost business for organizations reliant on Web-based payment mechanisms.

• If a database application cannot reach data due to IO connection failures, seats on flights will not get sold and hotels cannot make reservations, driving shoppers to look at alternative retail platforms.

• The opportunity cost of an outage also varies based on the time of day, season, and event. In many instances the highest-value times are also the highest system-stress times. Critical applications should never go offline during these periods.

The average cost of a data center outage rose from $690,204 in 2013 to $740,357 in 2015. The cost of downtime has increased 38 percent since 2010. The bulk of these costs come from business disruption, lost revenue, and the impact on end-user activity, respectively. (Source: Cost of Data Center Outages: January 2016 [Ponemon Institute LLC])

Fibre Channel SANs have always been architected for availability, enabling redundant infrastructure to mitigate any disruptions or failures within application resources. The key point is that achieving availability targets must be about more than simply investing in redundant hardware. The data must meet availability standards, too, which is why storage architects design their SANs to provide the highest possible availability and predictable performance over a Fibre Channel fabric.

In a SAN, all storage traffic for mission-critical and business-critical applications has a dedicated Fibre Channel network connection to communicate between the application and the controllers within the arrays. Investment in a dedicated network for storage helps ensure that IT organizations can achieve SLAs. The result is that a single human error or another service will not bring down the application on both fabrics at any given time.

In contrast to the robust infrastructure supporting a Fibre Channel SAN, HCI and SDS systems universally share a single Ethernet network for application traffic, storage traffic between nodes, and other virtualized compute services to
traverse the shared network, such as backup, VM migrations, replication, and management. Today, most organizations do not have a dedicated storage network for IP storage workloads. This is due to two primary considerations. First, the model causes issues with the general management scheme for most network teams, in which everything is viewed as a single shared environment. Second, that dedicated resource, though necessary for both performance and reliability, is more expensive. Ethernet networks have always been designed as a shared network for the services they provide, so having a network dedicated to storage is not a typical requirement. A shared network will have an impact on overall availability when it comes to maintenance and achieving high availability, since deployment of additional network capacity or of a new application often requires network changes, such as providing segmentation or simply connectivity between segments. Each network change becomes a potential source of interruption or downtime. Structured change management processes mitigate risk and impact when mistakes are made, but the impact is of a completely different order of magnitude when errors are made on the single network of a collapsed, hyper-converged architecture compared to a fully redundant SAN.

Data reliability continues to be a topic of discussion for organizations. In particular, they want to know: What happens when data loss occurs in transit, or when the cabling infrastructure is outdated? How do they then maintain availability in order to serve internal or external customers? Fibre Channel SANs have been deemed the most reliable data transport network in the world since the standardization of the protocol. The Fibre Channel protocol was designed from day one to support critical storage traffic. The implementation of buffer credits helps with flow control, making sure data is never sent unless there is space for it. This feature also eliminates the possibility of dropped frames within the network. Forward Error Correction (FEC) is built into the standard to increase resiliency by automatically detecting and recovering from network transmission errors. And to ensure optical and signal integrity for Fibre Channel optics and cables, Brocade has developed a technology using the ClearLink Diagnostic Port (D_Port) capability to quickly eliminate SFPs, patch panels, and fiber runs as points of failure, keeping the application online and running non-stop.

In contrast, storage over a general-purpose TCP/IP network lacks deterministic behavior and, by default, incurs higher latency than Fibre Channel due to inefficiencies in the protocol stack for storage traffic. In many critical applications, deterministic latency is a key element of performance even when the total transaction load is not very high, let alone extremely high (in many financial applications, for instance, it is not enough that the application has high performance; the performance and latency must also be repeatable). Moreover, it is more complicated and cumbersome to configure and manage a shared-service network. Multiple services run in a shared network will frequently have conflicting needs, as well as conflicting performance cycles. When the peaks of those cycles happen to overlap, the network will frequently experience latency, packet loss, and high retransmit rates (which further reduce network performance). In an HCI environment, the network connections are almost always configured as a "shared service" environment: application user access crosses the same Ethernet ports as the expanded storage, replication, and mirroring traffic. TCP was written to provide reliable data delivery over unreliable networks, and it does: data ultimately gets to its destination. TCP was never written to deliver time-sensitive data in a deterministic fashion. There is no way to know when data will finish arriving.

Consider, for example, retransmits. The challenge of packet loss in an IP network manifests as poor IO response times and slow throughput. TCP window sizes are negatively affected by packet loss. On a good day, you are doing a fraction of what the protocol can actually achieve because of retransmits. This is typically referred to as the "Ethernet penalty," and it is not an occasional circumstance. TCP, by the nature of its session window negotiation, will create occasional packet loss. This results in the TCP slow-start algorithm being initiated. That same slow-start algorithm also kicks in after an application's idle periods. This behavior results in intermittent (or constant) performance degradation, and the intermittent root cause can be very hard, or impossible, to determine due to the lack of visibility across layers of abstraction in a hyper-converged architecture. By comparison, a Fibre Channel SAN always transmits at full speed, so long as buffer credits are available to receive the data.
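The "Ethernet penalty" described above can be put on a rough quantitative footing with the classic Mathis model, which bounds steady-state TCP throughput by (MSS/RTT) x sqrt(3/2) / sqrt(p) for a packet-loss rate p. The sketch below is illustrative only: the segment size, round-trip time, and loss-rate values are hypothetical and are not measurements from this paper.

```python
import math

def tcp_throughput_bound_bps(mss_bytes: int, rtt_s: float, loss_rate: float) -> float:
    """Mathis-model bound on steady-state TCP throughput in bits per second:
    rate <= (MSS / RTT) * sqrt(3/2) / sqrt(p), for packet-loss rate p."""
    return (mss_bytes * 8 / rtt_s) * math.sqrt(1.5) / math.sqrt(loss_rate)

# Hypothetical data-center values: 1,460-byte MSS, 0.5 ms RTT, 0.01% loss.
bound = tcp_throughput_bound_bps(mss_bytes=1460, rtt_s=0.0005, loss_rate=0.0001)
print(f"Single-flow TCP throughput bound: {bound / 1e9:.2f} Gbit/s")
```

Even one lost packet in ten thousand caps a single TCP flow at a small fraction of a 10 GbE link in this model, which is the behavior the retransmit discussion above describes.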
#2 Scalability

Organizations continue to evaluate how to scale rapidly, adjusting to the demands of application owners and to the explosive growth in data, both of which will continue to influence the choice of architecture.

83 percent of decision-makers say the increasing number of applications is putting greater strain on the IP network. (Source: Why Smart Organizations Maximize Application Performance: 2016 [Vanson Bourne])

The ability to accurately forecast growth within the infrastructure is becoming more challenging as customers transition into the digital world. Predicting when the next application will need to be deployed, on what server farm, with the right virtualization technology, the right network, or the right storage is becoming increasingly difficult for architects. This has spurred new infrastructure acquisition models and architectures to help deploy applications more quickly. Architecture scalability can help determine the right solution or lead to possible challenges as application environments grow. According to a recent 451 Group survey, among the leading reasons for not adopting HCI and SDS architectures was their lack of predictability when it comes to scalability.

This is one of the advantages of being able to scale compute and storage resources independently. Adding compute capacity or a new all-flash array means simply plugging it into the fabric and provisioning resources. If you want to run a new workload, you do not need to go buy a specialized compute node or network to meet SLAs.

Scale is handled differently in HCI and SDS environments. Adding more nodes adds both storage and compute resources to the cluster. More nodes require more licenses before systems can be brought online, and licensing costs with VMware or Oracle continue to be unexpected burdens, continuously unfolding as some of these architectures grow. Upgrading to newer, denser drives requires either new nodes or replacing drives in existing nodes. But how does the redistribution of data across newly added nodes impact performance? HCI and SDS environments are not immune to inefficiencies and costly upgrades. Designing a one-size-fits-all architecture is hard to do and carries the risk of potential scale issues within the architecture itself. Not all applications are built the same, and different applications have different requirements as they scale; some will require more storage, while others might require more compute resources.

#3 Performance

Performance is one of the most discussed topics within organizations. Performance SLAs mean something different to each stakeholder within a project. Application owners always want the best-performing servers, network, and storage. Multiple conversations and finger-pointing happen when performance issues occur. Application, server, hypervisor, network, and storage teams will often take a defensive stance to prove that they are not the root cause of the issue. Manageability is important to overall IT function in order to get a better understanding of what is going on within the data path and to help provide measurement and problem resolution.

Each architectural approach moves traffic through the network differently. Legacy networking is still considered too slow for the modern data center. To take advantage of some of the newer storage architectures, the customer must look at transitioning to 10 GbE, 25 GbE, 40 GbE, or 100 GbE. Most SANs are running at 8 Gbps, 16 Gbps, or even 32 Gbps, with 64 Gbps and 128 Gbps on the horizon, which allows the server side and the storage side to take full advantage of NVMe and flash storage improvements. What is not discussed is the Ethernet penalty associated with these speeds. Whereas Fibre Channel at 8 Gbps will deliver 100 percent of that full 8 Gbps, Ethernet, due to the lack of Virtual Channels, the lack of hardware-based trunking, and the inherent use of TCP/IP, will typically drive only 50 to 60 percent of the interconnect's achievable bandwidth. This is why 8 Gbps Fibre Channel consistently outperforms 10 Gbps Ethernet or FCoE.

HCI or SDS clusters do not make optimal storage arrays. Server-based storage is bound by server hardware. These servers send user traffic, server traffic, and storage traffic over the same shared IP network. The IP network is of paramount importance in this type of environment. The critical question is: Can I manage the storage traffic independently from the non-storage traffic to get the performance I need? The ability to architect, deploy, and operate such an IP network in a
similar fashion to a SAN is key; yet it is also largely untenable. While server-based storage will benefit from NVMe through improved data communication latency, speed, and throughput across fabrics, in the end NVMe will be far more available and will perform to a much higher degree in a Fibre Channel SAN environment: essentially, a network designed from the ground up to do one thing exceptionally well.

It is not solely the ability to perform, but also the ability to diagnose and troubleshoot, that determines overall performance. Several SDS platforms face significant challenges in this regard. When a complaint is made about the performance of the logical device that the application is accessing, the physical device or devices underlying the logical device are hard to identify. Additionally, when a logical device is composed of elements of multiple physical drives in different servers, the latency response times for the various portions of the logical device are inconsistent. In almost every workload this is a problem: for workloads with business importance, it is a big problem; for mission-critical workloads, it is unforgivable. When the answer is "I have no idea, let's call our network provider for guidance," you have not placed yourself in a tenable position for running a business.

#4 Extensibility

In such a fast-evolving environment, keeping one eye on the future is critical to making the right infrastructure decisions. What is the next version of high-performance storage? Is it hybrid, all-flash, or an NVMe architecture? You will want to make sure you do not need to rip and replace.

A traditional three-tier storage architecture has always had an advantage over other architectures when it comes to extensibility, whether through the ability to run multiple vendors within the same architecture or by taking advantage of newer and older technologies within the same environment. HCI and SDS present hurdles to extensibility, since nodes are not interchangeable between vendors. So, once you go down a vendor's path, you are locked into that chosen vendor or must bring up a new environment with a new vendor. Technology refreshes and migrations are more complex and costly. These types of environments are notoriously challenging to get out of once you are in them. With any vertical stack or isolated piece of equipment, it is challenging to move its processes or storage outside of that stack. These types of moves tend to lock you in and are costly and time-consuming to change; think of replacing your laptop, phone, or cable provider, then multiply that by 100.

A key extensibility differentiator against pre-defined and pre-configured systems is the flexibility of provider and technology and, sometimes, the economies of scale that enterprise purchasing can achieve. In those pre-configured systems, components cannot be upgraded or resized independently, and there is no choice regarding best-of-breed technologies. If a new or different device has certain advantages over an existing HCI component, you will not be able to implement what you need when you need it.

#5 Manageability

Ease of management, alerting, and deep analytics are arguably the most important elements of any highly functional network. These tools vary greatly among the different architectures. The efficiency promised by HCI and SDS vendors from integrating storage, compute, and network resources can be highly variable, and is highly dependent on the relative complexity of the environment. HCI and SDS architectures use an abstraction layer to closely couple disparate components, simplifying the various synergistic management tasks as much as possible. However, it can be very hard, and often impossible, to identify the root cause of and pinpoint performance problems on an HCI or SDS system because of the lack of visibility and troubleshooting instrumentation.

Why is this critical? Because if you cannot measure it, you cannot manage it. For an IT organization to offer an SLA to an application or business-line owner, it must be able to measure it. How else does it provide assurance that the SLA is being met? How does it understand the usage of the environment and its cost? Measurement is critical.

Ethernet architectures, however, rely on sampling rates of one packet in hundreds or thousands for their monitoring, and can typically only query the management port at a minimum five-minute interval. Brocade Gen 5 and Gen 6 Fibre Channel SAN architectures measure every frame on every port in the network without adding latency.

Storage flow visibility on Fibre Channel networks has improved dramatically. In today's Fibre Channel SANs, every frame crossing every port can be measured without any performance impact. SANs provide maximum application flow, individual VM flow, and detailed IO visibility, and can also generate alerts about slow-drain and misbehaving
devices. Any anomalies in a fabric are either flagged for further review or quarantined to protect production. Comprehensive flow information on distributed multitier applications across various compute nodes and arrays is critical to smooth operations. Additionally, the ability to validate optics and cable plants, before bring-up or during troubleshooting efforts, saves time and money. No other storage technology offers these features.

Aligning the Right Workloads with the Right Architecture

One of the most debated topics in the industry is where to put applications within the infrastructure. As you read earlier, each critical capability needs to be weighed against the right architecture choice. This will help determine the best-suited architecture on which to place your application, helping the business achieve its overall goals. According to leading analysts, mission-critical and business-critical applications will continue to be deployed on Fibre Channel SANs for the foreseeable future.

As an example of the impact that the correct network infrastructure can have on critical application workloads, consider the following study, which looks at TPC-H benchmarks from Emulex/Broadcom (see Figure 2). As storage technologies continue to offer better and better performance attributes, application workloads will morph to achieve additional functionality and scale. Historically, applications have increased their performance and functionality every time the infrastructure capabilities improved. It is important to note that this study changed only the switch fabric and HBA infrastructure to achieve these improvements. The storage array (8 Gbps interface) and the actual application, server CPU, and memory configurations were unchanged.

Organizations with mission-critical workloads, which demand performance, consistent latency, and unknown future scale requirements, will continue to choose a SAN architecture. Likewise, organizations running virtualized, database, and structured workloads, such as OLTP, ERP, e-mail, SharePoint, gaming, Apache, Siebel, and financial applications, will continue to look for these types of consistent capabilities.

It is also important to consider that managing multiple storage domains (for example: Fibre Channel SAN, iSCSI, NAS, DAS) in the environment carries a cost and workload burden as well, especially when that storage environment crosses functional boundaries. For example, in an iSCSI or NAS environment, the storage team will need to query the network team to adjust parameters, deploy new platforms, or troubleshoot issues, while in an HCI or CI environment, the server team will need to work through the network team for similar issues around replication and remote storage amounts. This is not an inconsiderable burden, particularly when application owners are looking for a rapid resolution to a performance issue or outage. Given that almost every environment needs mission-critical storage for its Tier 1 applications, and for a significant portion of its Tier 2 applications, many customers have simply collapsed the lesser applications into highly virtualized environments running in the mission-critical storage fabric.

Workloads such as Virtual Desktop Infrastructure (VDI) or analytics that have a minimal data change rate or transactional value may be suitable for an HCI or SDS architecture. Workloads based in a branch office or remote locations, as well as remote test and development apps, are suited to a packaged architecture that is small in scale and lacks performance or high-availability requirements, making such workloads a good fit for HCI or SDS.

Figure 2: TPC-H benchmarks from Emulex/Broadcom, comparing VDI "bootstorm" time, Storage vMotion time, Data Warehouse DSS query times, Storage XenMotion time, and storage migration time. Lower is better.
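Before weighing these trade-offs in summary, it helps to make the availability targets discussed earlier concrete. "Six nines" (99.9999 percent) availability allows just over 30 seconds of downtime per year, as a couple of lines of arithmetic confirm; the sketch below simply converts an availability percentage into annual downtime.

```python
def annual_downtime_seconds(availability_pct: float) -> float:
    """Convert an availability percentage (e.g., 99.9999) into seconds of downtime per year."""
    seconds_per_year = 365 * 24 * 60 * 60  # 31,536,000 (non-leap year)
    return seconds_per_year * (1 - availability_pct / 100)

for label, pct in [("three nines", 99.9), ("five nines", 99.999), ("six nines", 99.9999)]:
    print(f"{label} ({pct}%): {annual_downtime_seconds(pct):,.1f} s/year")
# Six nines works out to about 31.5 seconds per year ("just over 30 seconds").
```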
Summary

In the final analysis, customers must evaluate the needs of their application base to determine what the storage architecture requires. Discussions of Recovery Point Objective (RPO) and Recovery Time Objective (RTO) are not as arcane and technical as they may first appear. RPO comes down to a very simple question to ask of the application owner or business line: How much data can your application lose for you to still be okay? A file and print share may have an RPO of a day; while people may be annoyed at the issue, the business survives. A transactional application (such as payment, order entry, or manufacturing compliance data) may very well have an RPO of zero. How much data can be lost from the application will inform the user's decision as to which storage infrastructure to select.

Similarly, the RTO is a discussion of the opportunity cost of the application being offline. Certain retail customers have an extremely low tolerance for outages on a seasonal basis. But in the 24/7 world of application availability, those outage windows become increasingly small. And even poor performance on a site can cause customers to choose a different retailer. Banks and health care organizations are both good examples of little to no tolerance for application downtime. Some health care providers have even remarked that outages are increasingly critical because the staff no longer remembers how to go back to paper.

Dedicated storage networking should be the default for every serious business interest. And of the technologies discussed, only a Fibre Channel SAN is expressly developed and architected to meet the mission criticality of today's business demands. That is why it is the first choice for business-critical applications. But it is also why many customers are choosing deeper virtualization stacks for the not-so-critical applications and placing them in that same infrastructure. Some applications may be able to live in lower-performance, lower-reliability infrastructures, but the user should be very certain about the actual required SLA from the application/business-line team before deciding to place them there.

Storage architectures are driven by a wide variety of business and technical requirements, and there are many architectures from which to choose. But while the storage landscape is changing, the current benchmark for enterprise storage remains array-based, running on a fast, reliable, predictable, and highly available purpose-built storage fabric. Business applications requiring exceptional availability, high throughput, and ultra-low latency will continue to demand this solution architecture, and it meets every technical demand. Nevertheless, it may not be the most prudent choice for all environments. Storage arrays and fabrics were originally developed and implemented to deliver all the capabilities set out above without compromise, and it is easy to see why they make the best architecture choice for nearly all applications.

About Brocade

As the leading provider of storage networking solutions worldwide for more than 20 years, supporting the mission-critical systems and business-critical applications of most of the FTSE 500, Brocade offers a range of storage solutions for every organization.

Learn more at www.brocade.com/en/possibilities/technology/storage-fabrics-technology.html.

Corporate Headquarters: San Jose, CA USA; T: +1-408-333-8000; info@brocade.com
European Headquarters: Geneva, Switzerland; T: +41-22-799-56-40; emea-info@brocade.com
Asia Pacific Headquarters: Singapore; T: +65-6538-4700; apac-info@brocade.com

© 2017 Brocade Communications Systems, Inc. All Rights Reserved. 04/17 GA-WP-6327-01
Brocade, the B-wing symbol, and MyBrocade are registered trademarks of Brocade Communications Systems, Inc., in the United States and in other countries. The trademarks of Brocade Communications Systems, Inc. are listed at www.brocade.com/en/legal/brocade-Legal-intellectual-property/brocade-legal-trademarks.html. Other marks may belong to third parties.

Notice: This document is for informational purposes only and does not set forth any warranty, expressed or implied, concerning
any equipment, equipment feature, or service offered or to be offered by Brocade. Brocade reserves the right to make changes
to this document at any time, without notice, and assumes no responsibility for its use. This informational document describes
features that may not be currently available. Contact a Brocade sales office for information on feature and product availability.
Export of technical data contained in this document may require an export license from the United States government.
