
This module introduces basic SAN components and concepts including host initiators, block

storage devices and multipathing. EMC VNX and VMAX storage arrays and Connectrix
products are also introduced along with the tools to manage them.

© Copyright 2017 Dell Inc. Module: SAN Management Overview 1


This course is focused on managing the SAN (Storage Area Network). However, it should
be noted that a SAN does not stand alone. This module will first discuss host
considerations and software that affect SAN performance (such as multipathing software)
and then discuss EMC storage products and features. EMC Connectrix products are then
introduced and an overview of SAN management concepts is given.

© Copyright 2017 Dell Inc. Module: SAN Management Overview 2


This lesson covers host HBAs (initiators), block storage devices, and Multipathing software
with an emphasis on EMC PowerPath.

© Copyright 2017 Dell Inc. Module: SAN Management Overview 3


In modern data centers, Unix and Windows hosts communicate with external block storage
through the SCSI protocol. SCSI uses initiator/target communication, with an adapter in
the host usually acting as initiator and the controller on the storage device acting as the
target. There are many types of Host Adapters. In this course, we discuss the following
three types.

Host bus adapters (HBA) are hardware components installed in an open systems host to
access storage in a SAN. The HBA drivers are responsible for encapsulating and de-
encapsulating the SCSI-3 protocol (commands and data) within the payload of Fibre Channel
frames. In some environments, hosts also use HBAs to boot the OS from a SAN-attached
storage array. In this situation, the HBA has special boot code on it to allow the host to
probe for SCSI disks during boot time.
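For illustration, on a Linux host the WWPNs of installed FC HBAs can be read from the standard fc_host entries in sysfs. This is a minimal sketch, assuming a Linux host whose HBA drivers register under /sys/class/fc_host; it is not part of any EMC tool.

```python
# Minimal sketch: list the WWPNs of Fibre Channel HBA ports on a Linux
# host by reading the standard sysfs fc_host entries. Assumes the host
# has FC HBAs whose drivers register under /sys/class/fc_host.
from pathlib import Path

FC_HOST_DIR = Path("/sys/class/fc_host")

def list_hba_wwpns():
    wwpns = {}
    for host in sorted(FC_HOST_DIR.glob("host*")):
        port_name = (host / "port_name").read_text().strip()    # e.g. 0x10000090fa8b1234
        port_state = (host / "port_state").read_text().strip()  # e.g. Online
        wwpns[host.name] = (port_name, port_state)
    return wwpns

if __name__ == "__main__":
    for host, (wwpn, state) in list_hba_wwpns().items():
        print(f"{host}: WWPN {wwpn} ({state})")
```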

iSCSI HBAs are ideal initiators for iSCSI connections to storage. The iSCSI HBA offloads
TCP/IP and iSCSI frame processing, reducing the strain on the host’s CPU. iSCSI can also
be supported using a regular NIC (Network Interface Controller) card by enabling iSCSI
initiator driver software within the operating system.

Converged Network Adapters (CNA) are intelligent multi-protocol adapters that
converge host LAN and Fibre Channel SAN connectivity over 10Gbps Ethernet. Fibre
Channel protocol is encapsulated within an Enhanced Ethernet network. They offer
unrivaled scalability and industry-leading virtualization support. The CNA has the
processing power to fully offload the processing of FCoE (Fibre Channel over Ethernet)
protocol, which reduces host CPU utilization for storage operations.

© Copyright 2017 Dell Inc. Module: SAN Management Overview 4


Block storage is raw storage that is accessed as a continuous series of small blocks
(typically 512 bytes each). To access raw block data on storage devices, the host must
provide the logical block address (LBA) of the beginning block of the stream of blocks, and
the number of consecutive blocks to be accessed. When files are fragmented, multiple I/O
operations are required to access the data. An enterprise operating system will usually
map the data blocks to a file system using some sort of LVM (Logical Volume Manager).
Applications then access files in the file system and are unaware of the underlying block
format. Some applications (such as Oracle and Microsoft SQL Server databases) access block
data directly and do not require a file system provided by the operating system.
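To make the addressing model concrete, the following minimal sketch reads a run of consecutive blocks from a raw device by seeking to LBA times the block size. The device path, starting LBA, and 512-byte block size are illustrative assumptions only.

```python
# Minimal sketch of raw block access: read `count` consecutive blocks
# starting at a logical block address (LBA). The device path and the
# 512-byte block size are illustrative assumptions.
import os

BLOCK_SIZE = 512  # bytes per block (typical; some devices use 4096)

def read_blocks(device: str, lba: int, count: int) -> bytes:
    fd = os.open(device, os.O_RDONLY)
    try:
        os.lseek(fd, lba * BLOCK_SIZE, os.SEEK_SET)  # byte offset of the starting block
        return os.read(fd, count * BLOCK_SIZE)       # consecutive blocks in one I/O
    finally:
        os.close(fd)

if __name__ == "__main__":
    # Hypothetical device node; requires appropriate privileges.
    data = read_blocks("/dev/sdb", lba=2048, count=8)
    print(f"Read {len(data)} bytes starting at LBA 2048")
```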

Modern operating systems can create and manage highly scalable file systems that can be
expanded to meet the increasing appetites of multiple applications. A volume that is
presented to an operating system can be one physical volume, or a virtual volume that
spans an array of disks (RAID - Redundant Array of Independent Disks).

Fibre Channel, iSCSI, and FCoE protocols operate only with block storage, whereas NAS
(Network Attached Storage) operates only on file-level storage.

© Copyright 2017 Dell Inc. Module: SAN Management Overview 5


File systems can reside on disks, disk partitions, or on a logical volume created by an LVM.
A file system organizes data in a structured, hierarchical manner through the use of files and
directories. Apart from files and directories, the file system is also made up of a number of
other structures, which are collectively called the metadata.

In Unix-based systems, the metadata consists of the superblock, the inodes, and the lists of
data blocks that are free and in use.

Windows file systems also have metadata, but use different terminology, such as the Master
File Table (MFT) in NTFS. The metadata of a file system must be consistent for the file
system to be considered healthy.

Superblock – Contains important information about the file system: the file system type,
creation/modification dates, the size and layout of the file system, a count of available
resources, and a flag indicating the mount status of the file system. The superblock maintains
information on the number of inodes allocated, in use, and free, and the number of data
blocks allocated, in use, and free. (This is set when the file system is created; the number of
inodes allocated equals the file system size divided by the number of bytes per inode (NBPI).)
Each file or directory needs an inode. New files or directories cannot be created if there are
no free inodes.
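As a worked illustration of the NBPI rule above, the following sketch computes the number of inodes allocated at file system creation; the file system size and NBPI values are assumptions chosen only for the example.

```python
# Minimal sketch of the inode-allocation arithmetic described above.
# The file system size and NBPI values are illustrative assumptions,
# not defaults of any particular file system.

def inodes_allocated(fs_size_bytes: int, nbpi: int) -> int:
    """Number of inodes created at file-system creation time:
    file system size divided by the number of bytes per inode (NBPI)."""
    return fs_size_bytes // nbpi

if __name__ == "__main__":
    fs_size = 100 * 1024**3      # assume a 100 GB file system
    nbpi = 4096                  # assume one inode per 4 KB of capacity
    total = inodes_allocated(fs_size, nbpi)
    print(f"Inodes allocated at creation: {total}")
    # Every file or directory consumes one inode; once they are all in
    # use, no new files can be created even if data blocks remain free.
```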

Inodes – An inode is associated with every file and directory and holds information about the
file length, ownership, access privileges, time of last access/modification, number of links
and, finally, the block addresses for finding the location on the physical disk where the
actual data is stored.

The metadata of a file system is typically cached in the host's memory buffers. Host-level
buffering is important to keep in mind in a VMAX environment with SRDF and TimeFinder:
the information in a host's memory buffers is not available on the BCVs or the SRDF target
devices until it is flushed down to the standard devices.

© Copyright 2017 Dell Inc. Module: SAN Management Overview 6


Multipathing software is server-resident, and is used to increase I/O performance and
information availability. It provides automatic load balancing across multiple paths, as well
as path failover.

Without path failover software, the loss of a path (the dotted line in the diagram) means one
or more applications may stop functioning. This can be caused by the loss of a Host Bus
Adapter, a port on the storage array or SAN switch, or a failed or unintentionally pulled
cable. A single path is a single point of failure. In the case of a path failure, all I/O that was
heading down the path is lost.

When multipathing software is used to make use of multiple paths, the loss of a path means
the load is shifted to the remaining n-1 paths. This is called path failover. Performance may
still be affected, but the applications continue to operate without downtime.

Examples of multipath software include the native OS MPIO (Multi-path I/O) driver and EMC
PowerPath. EMC PowerPath is tuned to the unique capabilities of EMC storage arrays and
has more features and better performance than native MPIO drivers.

© Copyright 2017 Dell Inc. Module: SAN Management Overview 7


The application views each disk resource as being on a single path or channel. The administrator tries to
spread the I/O load across all paths to storage. Each application is set up with its own storage. The
storage is allocated to an FA or SP (FA = FC Adapter on VMAX; SP = Storage Processor on VNX) based
on the expected data requirements of the applications. This setup is done based on snapshot
measurements, guesstimates of average loading, and predicted loads.

The diagram on the left depicts a snapshot of the system at a moment in time. The amount of I/O
from each application or process is unbalanced. Host applications sitting on top of deep queues are
not getting the I/O performance they need due to congestion. If this were the average loading, the
System Administrator would reconfigure the system to balance the load better. In any system, there
will be points in time when the load is unbalanced because one application is generating heavy I/O.

In the single path example shown on the left, two of the applications are currently causing high I/O
traffic. At this point, two channels are overloaded (depicted by the pending request stack) while two
other channels are lightly loaded. Eventually the requests will be handled and the system will return
to a more balanced load. In the meantime, the applications are being “data starved” and the users or
applications are experiencing less than optimal performance.

With Multipathing software in the system, applications transparently access multipathing devices
instead of the SD (SCSI driver) devices. Multipathing allocates the requests across the available
channels, reducing bottlenecks and improving performance. The diagram on the right shows a similar
snapshot, with multipath software using multiple channels to minimize the queue depth on all
channels.

Since the FAs (or SPs) are writing to cache and not to disks, any Channel Director/Storage Processor
can handle any request. This allows multipathing to constantly tune the server to adjust to changing
loads from the applications running on the server. Multipathing does not manage the I/O queues; it
manages the placement of I/O requests in the queue.
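To make the load-balancing idea concrete, here is a minimal sketch of a least-queue-depth policy with path failover. It is only an illustration of the concept, not the actual algorithm used by PowerPath or any native MPIO driver, and the path names are placeholders.

```python
# Minimal sketch of multipath load balancing and failover. This is an
# illustrative least-queue-depth policy, not the actual algorithm used
# by PowerPath or any native MPIO driver.
class MultipathDevice:
    def __init__(self, paths):
        # Pending request count per path, e.g. {"hba0:fa0": 0, "hba1:fa1": 0}
        self.queues = {p: 0 for p in paths}

    def submit(self, request):
        # Place the request on the live path with the shortest queue.
        path = min(self.queues, key=self.queues.get)
        self.queues[path] += 1
        return path

    def complete(self, path):
        self.queues[path] -= 1

    def fail_path(self, path):
        # Path failover: stop dispatching to a failed path; requests that
        # were queued on it would be retried on the surviving paths.
        self.queues.pop(path, None)

# Usage: four paths, one fails, I/O continues on the remaining three.
dev = MultipathDevice(["hba0:fa0", "hba0:fa1", "hba1:fa0", "hba1:fa1"])
for i in range(6):
    print("request", i, "->", dev.submit(i))
dev.fail_path("hba0:fa0")
print("after failover ->", dev.submit(99))
```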

Multipathing improves the performance of the server, enabling it to make better use of the storage.
This results in better application performance, less operational resources spent on the care and
feeding of the system, and more financial value from your server investment.

© Copyright 2017 Dell Inc. Module: SAN Management Overview 8


PowerPath software is a server-resident multipathing software solution from EMC. It
combines multiple-path I/O capabilities, automatic load balancing, path failover, and logical
volume management functions into one integrated solution. PowerPath maximizes
application availability, optimizes performance, and automates online storage management,
while reducing complexity and cost, all from one powerful data path management solution.
Some of the advantages of PowerPath are:
• Automatic: PowerPath algorithms allow the increase of application I/O rates through
VMAX and VNX, with automatic data path load balancing allowing for greatest
efficiency and throughput. PowerPath’s volume manager capability simplifies disk
administration tasks. It further reduces total cost of ownership through high-level
commands that hide storage complexity. It automatically manages workloads and
volume expansion.
• Non-disruptive: PowerPath provides users access to storage automation, seamless
data migration, and automatic import of volume groups, and simplifies the growth of
logical volumes while applications remain online. PowerPath optimizes server and data
path utilization by avoiding downtime.
• Optimized: By leveraging your server, SAN, and storage assets, PowerPath
maximizes your investment by increasing storage utilization.

© Copyright 2017 Dell Inc. Module: SAN Management Overview 9


This lesson covered host HBAs (initiators), block storage devices, and EMC PowerPath.

© Copyright 2017 Dell Inc. Module: SAN Management Overview 10


This lesson covers basic features of VNX and VMAX storage arrays and their management
options.

© Copyright 2017 Dell Inc. Module: SAN Management Overview 11


In a SAN environment, a storage device usually acts as the target for host I/O operations.
Storage devices can either be disk volumes or tapes.

Disk volumes are the most popular storage medium used in modern computers for storing
and accessing data for performance-intensive, online applications. Disks support rapid
access to random data locations. This means that data can be written or retrieved quickly
for a large number of users or applications. In addition, disks have a large capacity.

Disks come in several types, and can be generally divided into two categories: High
Performance and High Capacity. Solid State Disks (SSD) or Enterprise Flash Drives (EFD)
are the highest performing drives available because they have no moving parts and don’t
have to wait for a physical disk to rotate to a particular position before accessing data. Fast
spinning drives with Fibre Channel (FC) or Serial Attached SCSI (SAS) are the next level
down in high performance. Serial ATA (SATA) and Near Line SAS (NL-SAS) are high
capacity drives that spin slower than high performance drives. High capacity drives have
replaced tape for most applications.

Tapes are still used sometimes for backup because of their relatively low cost. However,
tape has various limitations; data is stored on the tape linearly along the length of the tape.
Search and retrieval of data is done sequentially, invariably taking several seconds to
access the data. As a result, random data access is slow and time consuming. This limits
tapes as a viable option for applications that require real-time, rapid access to data. On a
tape drive, the read/write head touches the tape surface, so the tape degrades or wears out
after repeated use. The storage and retrieval requirements of data from tape, and the
overhead associated with managing tape media, are significant.

© Copyright 2017 Dell Inc. Module: SAN Management Overview 12


EMC offers several block storage options.
• VPLEX is a next generation architecture that virtualizes storage array disks for data
mobility and information access.
• XtremIO is an all-flash (SSD) storage system. It supports Fibre Channel, iSCSI, and
InfiniBand protocols.
• VNX is a unified high performance storage array that supports both block and file
storage. It supports both iSCSI and Fibre Channel for block storage.
• VMAX is a block storage platform optimized for cloud and mission critical storage
applications. It supports SSD (solid state [flash]) drives as well as high performance and
high capacity disk drives.

Note: This course will use VNX and VMAX storage arrays in the lab exercises.

© Copyright 2017 Dell Inc. Module: SAN Management Overview 13


EMC VPLEX is a next generation architecture for data mobility and information access.

It is based on unique technology that combines scale out clustering and advanced data
caching, with the unique distributed cache coherence intelligence to deliver radically new
and improved approaches to storage management.

This architecture allows data to be accessed and shared between locations over distance via
a distributed federation of storage resources.

VPLEX with GeoSynchrony is open and heterogeneous, supporting both EMC storage
arrays and arrays from other storage vendors such as HDS, HP, and IBM. VPLEX conforms
to established world wide naming (WWN) guidelines used for zoning. VPLEX provides
storage federation for operating systems and applications that support clustered file
systems, including both physical and virtual server environments with VMware ESX and
Microsoft Hyper-V. VPLEX supports network fabrics from Brocade and Cisco.

© Copyright 2017 Dell Inc. Module: SAN Management Overview 14


XtremIO is an array that was designed from the ground up to be an all-flash storage
system. The internal architecture and management software allows administrators to easily
provision storage with just a few clicks. Performance is automatically optimized.

© Copyright 2017 Dell Inc. Module: SAN Management Overview 15


The EMC unified storage systems are grouped into two different series, the VNXe and VNX
series. The VNXe series includes the VNXe3150, VNXe3200, and VNXe3300, which are both
File and iSCSI block solutions. The VNX series includes the VNX5200 (which is FC block only),
VNX5400, VNX5600, VNX5800, VNX7600 and VNX8000. Unified storage platforms combine
Block array and File serving components into a single Unified Block and File, File only, or
Block only storage solution. The VNX series implements a modular architecture concurrently
supporting native NAS, iSCSI, Fibre Channel and FCoE protocols for host connectivity. The
high end systems (VNX5800 through VNX8000) utilize a Storage Processor Enclosure (SPE)
architecture and the mid-range models utilize a Disk Processor Enclosure (DPE) architecture.

Models VNXe 3200 and the VNX 5200 and above employ the MCx™ architecture. MCx
stands for Multicore Optimization. This high performance architecture is applied in VNX
arrays as Multicore RAID, Multicore Cache, and Multicore FAST Cache. In conjunction with
MCx, also included are enhanced hardware, increased memory and number of processor
cores, 4 TB 7200 RPM drives as well as the availability of FLASH-based SSD drives. Please
see the Sales Resource Center and Partner Portal for more information.

© Copyright 2017 Dell Inc. Module: SAN Management Overview 16


Physical Disks are held in DAEs (Disk Array Enclosures) within the array and are
subsequently combined into RAID Groups. A RAID Group is a set of disks that are usually
bound together for the purpose of providing some type of recovery from disk failure. A
volume is a portion of a RAID Group that is made available to the client as a logical disk and
is referred to by its logical unit number (LUN). LUNs allow users to subdivide their RAID
Groups into convenient sizes for host usage. With a Traditional LUN, all of the space on it is
allocated for usage at the time of its creation.

© Copyright 2017 Dell Inc. Module: SAN Management Overview 17


A RAID group is a set of disks (up to 16 in a group) with the same capacity and
redundancy, on which you create one or more traditional LUNs. A RAID 6 group must have
an even number of disks, with a minimum of four. A RAID 5 group must include at
least three disks. A RAID 3 group must include five or nine disks, and a RAID 1/0 group
must have an even number of disks. The storage-system model determines the number of
RAID groups that it can support.

All the capacity in the group is available to the server. Any RAID Group should consist of all
SAS or all Flash Drives but not a mix of SAS and Flash Drives. Most RAID types can be
expanded with the exception of RAID 1, 3, 6, and Hot spares. Most RAID types can be
defragmented to reclaim gaps in the RAID group, with the exception of RAID 6.
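The usable capacity that a RAID group presents to the server follows directly from these rules. The sketch below assumes equal-sized drives and the standard parity overhead (one drive's worth for RAID 5, two for RAID 6, half the drives for RAID 1/0); it is an illustration, not an array sizing tool.

```python
# Minimal sketch: usable capacity of a RAID group built from equal-sized
# drives, using the standard overhead rules (RAID 5 loses one drive to
# parity, RAID 6 loses two, RAID 1/0 mirrors half the drives).
def usable_capacity(raid_type: str, drives: int, drive_gb: float) -> float:
    if raid_type == "RAID5":
        assert drives >= 3                       # at least three disks
        return (drives - 1) * drive_gb
    if raid_type == "RAID6":
        assert drives >= 4 and drives % 2 == 0   # even number, minimum four (per the VNX rules above)
        return (drives - 2) * drive_gb
    if raid_type == "RAID1/0":
        assert drives % 2 == 0                   # mirrored pairs
        return (drives / 2) * drive_gb
    raise ValueError(f"unsupported RAID type: {raid_type}")

# Example: a 5-drive RAID 5 group of 900 GB drives yields 3600 GB usable.
print(usable_capacity("RAID5", 5, 900.0))
print(usable_capacity("RAID6", 8, 900.0))
```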

© Copyright 2017 Dell Inc. Module: SAN Management Overview 18


Pools may include Solid-State-Disk (SSD), also called Flash drives, for the extreme
performance tier, SAS drives for the performance tier and Near-Line SAS drives for the
capacity tier. Pools support both thick and thin LUNs, as well as support for features such
as FAST (Fully Automated Storage Tiering) which enables the hottest data to be stored on
the highest performing drives without administrator intervention.

Pools are recommended because they give the administrator maximum flexibility and are
easiest to manage.

A Pool is somewhat analogous to a RAID group. However, a Pool can contain a few disks or
hundreds of disks, whereas RAID groups are limited to 16 disks.

Pools are simple to create because they require only three user inputs:
• Pool Name
• Resources (Number of disks)
• Protection level: RAID 5 or 6

Pools are flexible. They can consist of any supported disk drives, and a storage system can
contain one or many pools. The smallest pool size is three drives for RAID 5 and four
drives for RAID 6.

Note: EMC recommends a minimum of five drives for RAID 5 and eight drives for RAID 6.

Pools are also easy to modify. You can expand the pool size by adding drives to the pool,
and contract the pool size by removing drives from the pool.

© Copyright 2017 Dell Inc. Module: SAN Management Overview 19


Unisphere is web-based software that allows you to configure, administer, and monitor
VNX series systems. Unisphere provides an overall view of what is happening in your
environment plus an intuitive, easier way to manage EMC unified storage.

© Copyright 2017 Dell Inc. Module: SAN Management Overview 20


There are two ways to use the CLI for the VNX Series Platform:
• The Control Station runs a customized Linux kernel and provides the VNX for File
management services that configure, manage, and monitor the Blades. A second Control
Station may also be present in some models for redundancy. If VNX for File or Unified
is present, you can connect to the Control Station via serial or SSH to troubleshoot many
VNX for File hardware components.
• If VNX for Block is present, the Navisphere Secure CLI can be used. It is a client
application that allows simple operations on the EMC VNX Series platform, and some
other legacy storage systems. It uses the Navisphere 6.X security model, which
includes role-based management auditing of all user change requests, management
data protected with SSL, and centralized user account management.

© Copyright 2017 Dell Inc. Module: SAN Management Overview 21


The VMAX3 family is offered in three models – the VMAX 100K, 200K and 400K. These
are designed for mission critical cloud storage and incorporate EMC’s new Dynamic Virtual
Matrix architecture and the HYPERMAX OS. All VMAX3 arrays are designed for cloud-scale to
deliver unprecedented performance and the lowest latency.

The VMAX 100K is designed to be a new, lower entry point to the VMAX line, delivering
VMAX performance and capabilities in a smaller, more affordable footprint. The VMAX 100K
features an aggressive entry price, delivers solid performance, and offers the full suite of
rich data services available with all VMAX3 models. The VMAX 100K will be attractive to cost
conscious customers that require mission-critical storage for smaller data sets or contained
workloads.

The VMAX 200K is designed to meet the majority of our existing customers’ performance
and storage needs. The VMAX 200K delivers exceptional value for customers requiring very
high performance, complete VMAX3 functionality, and more usable capacity. The VMAX
200K can offer the same or better capacity and performance as most of the previous
generation VMAX 10K, 20K, and 40K configurations being sold, with a much smaller
footprint.

The VMAX 400K achieves new levels of performance and capacity not available in the
previous VMAX product line, and is not found in any competitive array. The VMAX 400K is
the new industry leader delivering unmatched performance and cloud-scale. Customers can
start small with 1 engine and grow over time to 8 engines (16 linear feet). The VMAX 400K
is the new gold standard for high-end, mission-critical storage for hyper consolidation.

All three platforms are built on the industry-leading Dynamic Virtual Matrix Architecture and
run the same Hypermax OS code – which replaces the previous generation Enginuity code.

© Copyright 2017 Dell Inc. Module: SAN Management Overview 22


Virtual Provisioning is a feature available on VMAX systems. Virtual Provisioning presents
an application with more capacity than is physically allocated. In some situations it may
provide a more efficient way of allocating capacity for applications that are somewhat
predictable in capacity growth patterns.

TDEVs (thin devices) or thin volumes can improve capacity utilization because an entire
physical volume is not used. A virtual volume is created from a common pool (thin pool).
The pool is shared by many TDEVs. Only the amount of physical space needed to store the
data is allocated to the TDEV.

In the example illustrated, the host has a 100 GB TDEV. When the TDEV is first created,
there is no data, so only 20 GB of physical space is allocated. As data is stored on the
TDEV, more space is allocated as needed. Physical space allocation is done automatically in
the background by EMC software on the array.
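A minimal sketch of the thin provisioning behavior described above: physical extents are drawn from a shared pool only when a region of the thin device is first written. The extent size and pool capacity are illustrative assumptions, not actual VMAX parameters.

```python
# Minimal sketch of thin provisioning: a thin device (TDEV) reports a
# large logical size, but physical extents are allocated from a shared
# pool only on first write. Extent size and pool capacity below are
# illustrative assumptions, not actual VMAX values.
EXTENT_MB = 768  # illustrative extent granularity

class ThinPool:
    def __init__(self, physical_extents: int):
        self.free_extents = physical_extents

    def allocate(self) -> None:
        if self.free_extents == 0:
            raise RuntimeError("thin pool exhausted")
        self.free_extents -= 1

class ThinDevice:
    def __init__(self, pool: ThinPool, logical_gb: int):
        self.pool = pool
        self.logical_gb = logical_gb
        self.allocated = set()   # extent indexes already backed by physical space

    def write(self, offset_mb: int, data: bytes) -> None:
        extent = offset_mb // EXTENT_MB
        if extent not in self.allocated:   # allocate physical space on first write only
            self.pool.allocate()
            self.allocated.add(extent)
        # ... the data would be written to the backing extent here ...

# Usage: a 100 GB TDEV that has consumed only a few extents of real space.
pool = ThinPool(physical_extents=1000)
tdev = ThinDevice(pool, logical_gb=100)
tdev.write(0, b"boot record")
tdev.write(10_000, b"database page")
print(f"logical size: {tdev.logical_gb} GB, extents allocated: {len(tdev.allocated)}")
```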

© Copyright 2017 Dell Inc. Module: SAN Management Overview 23


Beginning with Enginuity 5876, EMC Unisphere for VMAX is the management console for
the VMAX family of arrays. With the release of VMAX 3 (5977) a new version of Unisphere
for VMAX was released. Also named UNIVMAX, it offers big-button navigation and
streamlined operations to simplify and reduce the time required to manage EMC arrays.
Unisphere for VMAX uses the same framework as the VNX family, and provides users with
the same EMC standard look and feel.

© Copyright 2017 Dell Inc. Module: SAN Management Overview 24


This lesson covered an introduction of VNX and VMAX storage arrays and their management
options.

© Copyright 2017 Dell Inc. Module: SAN Management Overview 25


This lesson covers the different SAN connectivity options.

© Copyright 2017 Dell Inc. Module: SAN Management Overview 26


Physically, a Fibre Channel SAN can be implemented using a single Fibre Channel
switch/director, or a network of interconnected Fibre Channel switches and directors. The
HBAs on each host, and the FC ports on each storage array, must be cabled to ports on the
FC switches or directors. Fibre Channel can use either copper or optics as the physical
medium for the interconnect. All modern SAN implementations use fibre optic cables.

An IP SAN solution uses conventional networking gear, such as Gigabit Ethernet (GigE)
switches, host NICs, and network cables. This eliminates the need for special purpose FC
switches, Fibre Channel HBAs, and fibre optic cables. Such a solution becomes possible with
storage arrays that can natively support iSCSI, via GigE ports on their front-end directors
(VMAX) or on their SPs (VNX). For performance reasons, it is typically recommended that a
dedicated LAN be used to isolate storage network traffic from regular, corporate LAN traffic.

Converged Networking is made possible by CEE (Converged Enhanced Ethernet, or Data
Center Bridging - DCB) and Fibre Channel over Ethernet (FCoE). FCoE is a protocol defined
by the T11 standards committee that expands FC into the Ethernet environment. Basically,
FCoE allows Fibre Channel frames to be encapsulated within Ethernet frames. This provides
a transport that is more efficient than TCP/IP while sharing a single, integrated
infrastructure, thereby reducing network complexities in the data center. FCoE consolidates
both SANs and Ethernet traffic onto one Converged Network
Adapter (CNA), eliminating the need to use separate Host Bus Adapters (HBAs) and
Network Interface Cards (NICs). From the connectivity layer perspective, the use of Fibre
Channel Forwarders (FCF) is necessary to service login requests and provide the FC services
typically associated with a FC switch. FCFs may also optionally de-encapsulate FC frames
that are coming from the CNA and going to the SAN and encapsulate FC frames that are
coming from the SAN to the CNA.
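To illustrate the encapsulation idea only, the sketch below wraps an opaque Fibre Channel frame in an Ethernet frame carrying the FCoE EtherType (0x8906). The MAC addresses and FC payload are placeholders, and the FCoE header, SOF/EOF delimiters, and padding of a real frame are omitted.

```python
# Minimal sketch of the FCoE idea: a Fibre Channel frame is carried as
# the payload of an Ethernet frame with the FCoE EtherType (0x8906).
# MAC addresses and the FC frame bytes are placeholders; real FCoE also
# adds an FCoE header, SOF/EOF delimiters, and padding, omitted here.
import struct

FCOE_ETHERTYPE = 0x8906

def encapsulate(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    ethernet_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    return ethernet_header + fc_frame

# Placeholder addresses (an FCF MAC and a CNA MAC) and an opaque FC frame.
fcf_mac = bytes.fromhex("0efc00010203")
cna_mac = bytes.fromhex("0efc00040506")
frame = encapsulate(fcf_mac, cna_mac, fc_frame=b"\x00" * 64)
print(f"FCoE frame length: {len(frame)} bytes")
```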

© Copyright 2017 Dell Inc. Module: SAN Management Overview 27


EMC offers a complete range of SAN connectivity products, under the Connectrix brand, to
meet data center needs and build the infrastructure needed for today’s cloud environments.
Connectrix products are supplied by two different vendors: B-Series products from Brocade
and MDS-Series products from Cisco. Both Connectrix families include Departmental
switches, and Enterprise Directors.

Departmental switches are used in smaller environments. These switches don’t have
some of the hardware redundancy that is built into the larger Director-class switches. But
SANs using departmental switches can be designed to tolerate the failure of any one switch,
by including multiple paths from hosts to storage. Departmental switches are ideal for
workgroup or mid-tier environments. The disadvantages of departmental switches are a lower
number of ports and limited scalability. If, for example, we tried to build a large SAN
consisting of a thousand or more ports using only departmental switches, many ports
would be used up just to connect the switches to one another. This leaves fewer ports for
hosts and storage connections. Also, the complexity and management burden of
interconnecting so many switches prohibits large-scale implementations.

Enterprise Directors are deployed in High Availability and/or large scale environments.
Connectrix Directors can have hundreds of ports in a single chassis. This solution easily
scales to SANs of a thousand or more ports using ISLs to connect multiple directors
together. Directors provide added flexibility because they are built with blades (or modules)
that can be added and mixed in various configurations to meet the needs of the data
center. Directors have a higher cost and larger footprint than departmental switches.

Multi-purpose switches, blades and modules are available to support other protocols
besides Fibre Channel such as FCoE for converged networks and FCIP for distance extension
solutions across a LAN/WAN.

© Copyright 2017 Dell Inc. Module: SAN Management Overview 28


Connectrix B-Series 16Gbps models are shown here. Switches include the DS-6520B with
up to 96 16Gbps ports, the DS-6510B with up to 48 ports, the DS-6500B with up to 24
ports and the MP-7840B with 24 16Gbps Fibre Channel ports and 16 10GbE ports, plus 2
40GbE ports for FCIP.

Directors include the ED-DCX8510-4B with four slots available for port blades, providing
up to 192 16Gbps Fibre Channel Ports; and the ED-DCX8510-8B with 8 port blade slots,
providing up to 384 ports. Both director models come with high availability features such
as redundant, hot-swappable FRUs, including blades, power supplies, blowers, and WWN
cards.

© Copyright 2017 Dell Inc. Module: SAN Management Overview 29


If we go to support.emc.com and do a search for B-Series products, we will see all of the
models shown here. These are the currently supported B-Series models. Many of these
models have reached end-of-life (EOL), meaning they are no longer offered for sale by
EMC. However the EOL models are found in data centers throughout the world, and EMC
continues to support them until they reach end-of-service-life (EOSL), which is normally
five years after EOL.

The right-hand column shows the target code levels for each model. The target code is the
minimum revision of FOS (Fabric OS) that EMC recommends should be running on a switch.
There are currently two families of FOS that are supported on B-series switches, 6.4.x and
7.x. Most newer switch models are only supported using FOS 7.x or higher.

This course will focus on the newer switches that have not reached EOL. Details for older
switches can be found in various Connectrix B-Series documents such as the Hardware
Reference Manual for each model.

© Copyright 2017 Dell Inc. Module: SAN Management Overview 30


Here we show the Connectrix MDS-Series 16Gbps models. Switches include the MDS-
9250i with 40 16Gbps Fibre Channel ports and 10 10GbE ports (2 ports are reserved for
FCIP; the other 8 ports can be used for FCIP, FCoE, or iSCSI). The MDS-9148S is a switch
with up to 48 16Gbps Fibre Channel ports.

Directors include the MDS-9706 with 6 modules and up to 192 ports (4 slots are for port
modules), and the MDS-9710 with 10 modules and up to 384 ports (8 slots are for port
modules). These directors are highly available with dual supervisor modules and redundant
fans, power supplies and fabric (switching) modules.

© Copyright 2017 Dell Inc. Module: SAN Management Overview 31


If we go to support.emc.com and do a search for MDS-Series products, we will see all of
the models shown here. These are the currently supported MDS-Series models. Many of
these models have reached end-of-life (EOL), meaning they are no longer offered for sale
by EMC. However the EOL models are found in data centers throughout the world, and EMC
continues to support them until they reach end-of-service-life (EOSL), which is normally five
years after EOL.

The right-hand column shows the target code levels for each model. The target code is the
minimum revision of NX-OS that EMC recommends should be running on a switch. There
are currently two families of NX-OS that are supported on MDS-series switches, 5.2(x) and
6.x. Newer switch models are only supported using NX-OS 6.2.x or higher.

© Copyright 2017 Dell Inc. Module: SAN Management Overview 32


This lesson covered the different SAN connectivity options.

© Copyright 2017 Dell Inc. Module: SAN Management Overview 33


This lesson covers the different tools that may be used to manage Storage Area Networks.

© Copyright 2017 Dell Inc. Module: SAN Management Overview 34


The SAN Implementation engineer sets up the initial configuration of a SAN and establishes
baseline metrics for acceptable performance. A SAN administrator has the job of updating
the configuration and monitoring the storage area network. When new applications, hosts
or storage are brought online, new zones must be created to permit connectivity of newly
provisioned resources. Changing conditions, such as when VMs are migrated from one host
to another, or when applications create a heavier load as they grow and expand in the
cloud, require the administrator to monitor the SAN for performance bottlenecks and other
issues that might arise. SAN admins must have tools that allow them to be proactive in
identifying potential problems, as well as tools that provision SAN resources.

© Copyright 2017 Dell Inc. Module: SAN Management Overview 35


Fibre Channel switch vendors have each created vendor specific tools to manage and
monitor storage area networks. These include tools for performing switch management
functions such as enabling and disabling ports, configuring virtual fabrics, zoning, security,
monitoring for errors and fabric alerts, and detecting performance issues. These
management tools come as both command line interfaces (CLI) that can be scripted to
automatically perform certain tasks, as well as GUI (Graphical User Interface) tools that
give an interactive windows-type user experience.

© Copyright 2017 Dell Inc. Module: SAN Management Overview 36


Connectrix B-Series switches have vendor-specific management software developed by
Brocade. These tools include a Fabric OS (FOS) command line interface, as well as two
GUIs: Web Tools and CMCNE (Connectrix Manager Converged Network Edition).

These tools are used to install, maintain, configure, monitor, and manage Connectrix B-
Series switches. Not all functions available through the CLI are available through the GUI.

These tools will be covered in the Module: B-Series Switch Tools.

© Copyright 2017 Dell Inc. Module: SAN Management Overview 37


Connectrix MDS-Series switches have vendor-specific management software developed by
Cisco. These tools include an NX-OS command line interface, as well as two GUIs: Device
Manager and Cisco Prime Data Center Network Manager for SAN (DCNM-SAN).

These tools are used to install, maintain, configure, monitor, and manage Connectrix MDS-
Series switches. Not all functions available through the CLI are available through the GUI.

These tools will be covered in the Module: MDS-Series Switch Tools.

© Copyright 2017 Dell Inc. Module: SAN Management Overview 38


Virtualization enables businesses of all sizes to simplify management, control costs, and
guarantee uptime. However, virtualized environments also add layers of complexity to the
IT infrastructure that reduce visibility and can complicate the management of storage
resources. One problem in the modern data center is the exponential growth of data, and
the storage and SANs that provide the underlying infrastructure for that data.

To meet continually evolving demands, data centers are moving from platform 2 designs
that are based on client/server topologies, to platform 3 designs that provide for social
networking, cloud, mobility, and big data. Administrators are being asked to manage more
and more data, with fewer resources than ever before.

This means that administrators need tools that can predict where resources such as VMs
and storage will be needed, and then automatically move and provision those resources and
also automatically provide connectivity through the network and the SAN, so that
administrators may be relieved of performing these routine tasks.

EMC ViPR SRM provides comprehensive monitoring, reporting, and analysis for
heterogeneous block, file, and virtualized storage environments. It enables you to visualize
application-to-storage dependencies, monitor and analyze configurations and capacity
growth, as well as optimize your environment to improve return on investment.

ViPR SRM addresses these layers by providing visibility into the physical and virtual
relationships to ensure consistent service levels. As you build out your cloud infrastructure,
ViPR SRM helps you ensure storage service levels while optimizing IT resources — both key
attributes of successful cloud deployments.

EMC ViPR SRM provides a topology view for validation and compliance, and also provides
monitoring and reporting for hosts, VMs, SAN ports, traffic utilization, and storage. ViPR is
switch vendor agnostic.

© Copyright 2017 Dell Inc. Module: SAN Management Overview 39


ViPR incorporates the existing SAN zoning provisioning functions offered by Connectrix into
broader data management automation flows, including block storage provisioning (from the
array, through the SAN, to the end hosts), protection, and migration.

ViPR integrates with Connectrix B-Series fabrics by invoking the Storage Management
Initiative Specification (SMI-S) API of Connectrix Manager Converged Network Edition
(CMCNE). ViPR integrates with MDS-Series switches through SSH (Secure Shell) by
directly using the API of the switches.

ViPR does automated SAN zoning as part of the orchestrated storage provisioning process;
but it does not entirely manage Connectrix switches.

Connectrix management tools provide extensive management capability and detailed
reporting information about the SAN, and they can be used to enable alerts about possible
error conditions.

So even though ViPR provides automation of basic SAN provisioning functions, SAN
administrators still need the native Connectrix management tools because they provide a
level of detail and control that is not possible with non-vendor-specific tools.

© Copyright 2017 Dell Inc. Module: SAN Management Overview 40


This lesson covered different tools that may be used to manage storage area networks.

© Copyright 2017 Dell Inc. Module: SAN Management Overview 41


This module covered the tools used for managing Connectrix switches as well as hosts,
initiators, and their disk partitions. It also gave an overview of different storage arrays and
Connectrix products.

© Copyright 2017 Dell Inc. Module: SAN Management Overview 42
