Connectrix Fundamentals 2017_SRG
We might follow a cloud computing model and virtualize the entire IT infrastructure to meet our business
needs. However, there is a physical infrastructure that lies underneath the cloud.
A large enterprise may require thousands of VMs running on hundreds of physical servers. These servers
require connectivity to many storage arrays with thousands of terabytes of storage. Connections from
servers to storage must be high performing, reliable, easily managed, flexible, scalable, and secure. How
do we connect this large number of physical servers to this massive amount of physical storage and meet
the business requirements we have outlined?
The answer is: Dell EMC Connectrix products provide the solutions that meet these demanding business
needs.
Business Challenges
Connectrix products meet the demands of the modern data center. Here we summarize typical business challenges and how Dell EMC Connectrix products provide a solution for each challenge.
Best Practice: Dual SAN data paths.
Challenge: Modern data centers are required to handle more data with fewer resources.
Connectrix Solution: High port density, with more devices per unit, driving efficiencies in space and power.
Connectrix Solution: Fibre Channel provides higher performance than Ethernet and TCP/IP, with speeds of 4, 8, 16, and 32 Gigabits per second and extra buffers available.
Connectrix Solution: Fabric virtualization and zoning maintain separation among IT equipment and departments while sharing equipment; authentication tools are also available to ensure that rogue devices and switches cannot connect.
A SAN is separate from the Local Area Network (LAN) that also connects to the hosts—not shown.
Storage devices connected to the Connectrix switches appear locally attached to heterogeneous operating
system hosts.
Here is a simplified example of two hosts each sending a frame of data to a storage array through a switch.
The frame destination and source addresses enable a complete circuit from inbound port to outbound port.
After the frame is sent, the switch removes the circuit, freeing up the switch for other connections.
Connectrix switches perform millions of switching operations per second to provide connectivity between
devices attached to various switch ports.
A SAN is made of one or more interconnected switches that route data traffic from source
to destination. A fabric is a logically defined space in which the Fibre Channel nodes can
communicate with each other. The primary function of the fabric is to receive Fibre Channel
(FC) data frames from source ports and route them to destination ports. How the frames are
routed is based on the address identifier specified in each frame.
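To make the address identifier concrete (an explanatory note, not part of the original slide): the 24-bit Fibre Channel address is commonly read as three bytes, Domain ID, Area ID, and Port ID. For example, a frame with the hypothetical destination ID 0x04012A is routed across the fabric to the switch that owns Domain 0x04, which then delivers it to the port identified by Area 0x01 and Port 0x2A.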
Switches are ideal for SMB (small to medium business) environments. Switches are also used in workgroups and at the edge of enterprise SANs. Switches are smaller than directors and have a fixed number of ports. They provide scalability through Inter-Switch Link (ISL) connections to other switches and directors.
Directors are deployed in high availability and/or large-scale enterprises. Connectrix directors are
designed for the most demanding mission-critical environments. They can have more than a
hundred ports per device. Directors provide the highest availability using redundancy and built-in
failover for key components. Directors are highly scalable using modular port blades.
Directors are found at the core of large enterprise SANs, while switches are used at the SAN edge.
The edge tier is composed of small Connectrix Switches, and provides SAN connectivity to the host
servers.
The core of the fabric is where a Connectrix director, or directors, is located. The director has many
redundant components and is highly available. This high availability is why the director is placed at the
core of the fabric. In this particular topology, the storage arrays are connected directly to the fabric core.
SANs are normally built using switches from one vendor or the other, but not both. Because of incompatibilities between vendors, their products are usually not mixed in the same SAN.
Other storage vendors also sell the Brocade and Cisco switches, so the question may be
asked: “Why should a data center buy these switches from Dell EMC under the Connectrix
brand?”
The answer is Dell EMC provides a total solution with guaranteed interoperability. Dell
EMC has invested more than $3 billion to build its premier interoperability lab known as
Dell EMC E-LAB. E-LAB tests thousands of products and millions of configurations.
Dell EMC supports every configuration qualified with no disclaimers and no excuses.
Dell EMC shares its proven interoperability knowledge through the E-Lab Navigator web
site, which can be found at the link shown.
This web site is the home of the Dell EMC Support Matrix—or ESM. E-LAB Navigator
allows for easy access to this extensive database with more than 9000 qualified products
and more than 10 million configurations supported. There are also Dell EMC simple support
matrices for individual storage, cloud, and data protection products.
E-Lab also publishes documentation such as TechBooks and best practices. In addition, the
E-LAB tests and supports unique or specialty configurations via Dell EMC’s RPQ exception
process.
Note: TechBooks compile use cases and deployment information on SAN extension, FCoE,
Fibre Channel topology, and WAN optimization to name a few.
The next few slides give more information about each of these common Connectrix use cases.
Connectrix is also required for connectivity in virtual storage environments. Here we show a VPLEX
system providing virtualization for volumes on the storage arrays.
Notice that the Connectrix switches provide all the connectivity between the hosts and the front end of the VPLEX. Connectrix also provides connectivity on the back end of the VPLEX, to the storage arrays. Hosts use multipathing software such as Dell EMC PowerPath for load balancing and availability.
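As an illustrative aside (not shown on the original slide), the state of those redundant paths can be checked from the host with the PowerPath CLI; output varies by PowerPath version and platform:

    powermt display dev=all    lists each PowerPath pseudo device with its native paths and their alive or dead state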
The same Connectrix fabric used to store data locally can be used to send it to a remote site for business
continuance. Connectrix products can be configured with optional long-distance optics to handle Fibre
Channel distances of up to 40 km.
Data replication applications that take advantage of the Connectrix connectivity include array-based
replication (for example: VMAX SRDF), and appliance-based replication (for example: RecoverPoint).
Another Connectrix option is Fibre Channel routing. Sometimes we do not want remote site fabric
problems interfering with the local site. So instead of having one large fabric covering both sites, we have
a separate fabric at each site. We use Fibre Channel routing to enable communications from only selected
devices at each site. Connectrix also supports DWDM technology that can extend the Fibre Channel
distance between sites to 200 km.
A third Connectrix option is to use Fibre Channel over IP, or FCIP technology, to connect the sites. This option is used if the sites are farther apart than Fibre Channel distances allow, or if Fibre Channel connectivity is not available between sites. The FCIP protocol allows Fibre Channel traffic to tunnel through the Wide Area Network (WAN).
Zoning is a feature that allows communications between certain initiators and targets in the fabric, and
disallows all other communications.
Another method of providing security is to virtualize the physical switch hardware into separate virtual
switches. This method allows multiple groups to share common hardware, but remain logically separated.
Virtualization takes selected ports from the physical switch and puts them together to form separate logical
switches. This increases security because it allows us to take computing resources from different groups,
for example, and create a logical fabric for each group. Those logical fabrics act independently and cannot
interfere with one another.
Reliability: Dell EMC Connectrix offers unsurpassed reliability and performance. All Dell EMC Connectrix
products are Dell EMC E-LAB Proven.
Availability: Connectrix directors provide at least six-nines (99.9999 percent) availability with redundant components such as dual supervisors on the MDS-series and dual controllers on the B-series directors. Redundant fans and power supplies are available as well.
Serviceability: Most components are hot swappable including SFPs, fans, power supplies,
supervisors/controllers, and so forth. All Connectrix products support nondisruptive firmware upgrades
(NDU).
Flexibility: You can easily introduce 16 Gbps or 32 Gbps Fibre Channel into your data center because of
backwards compatibility with slower speed equipment. Most SFPs support three data speeds. For
example, 16 Gbps SFPs also support 8 Gbps and 4 Gbps. 32 Gbps SFPs also support 16 Gbps and 8
Gbps speeds.
There is only one CPU and one memory system in a switch, so a failure in either requires switch replacement. Memory includes both volatile RAM for working memory and nonvolatile NVRAM for saving firmware, logs, and switch configuration information.
Many switch designs use a single switching ASIC design. All ports are connected to the switching ASIC.
There are resources on the ASIC to support buffering. The ASIC also has hardware support for Fibre
Channel protocol and programmable access list logic.
If you are connecting to a switch with a multi-ASIC design, you should try to connect host/storage port
pairs on the same ASIC for best performance.
All but the low-end switches have N+1 power supplies and fans for redundancy.
Switches have a single serial management port for initial configuration and access by service personnel,
and an Ethernet management port for ongoing monitoring and switch management.
Some switches also have an optional USB port for downloading firmware and collecting logs.
Modular port blades are used to expand the capacity of the director. Small Connectrix directors have four
slots for I/O port modules and blades. The largest Connectrix director model has slots for up to 16 port
modules.
Many port modules have multiple switching ASICs. For best performance, we should always try to connect host/storage port pairs on the same ASIC, or on the same port module or blade. Most blades can do internal switching and do not send local I/O to the backplane.
Dual switching modules provide redundant switching paths. These modules are made up of switching
ASICs or crossbars (XBAR).
Dual CPU modules or supervisor modules are included for fabric frame processing, director monitoring,
and management. These modules are redundant with one active and one in standby mode, ready to take
over if the active module fails.
Each has its own RAM for working memory and NVRAM for storing firmware, logs, and configuration
information. Each CPU module has its own management port interfaces and may also have a USB port.
The Connectrix family has several models of entry-level switches that may be used for small single-switch
SANs. Building a SAN from one of these entry level switches is simple and requires little Fibre Channel
knowledge. This solution is ideal where a few hosts are running applications that share block storage on a
small storage array. If the array does not have enough connections to accommodate all the hosts, a switch
solves the problem.
Dell EMC offers single switch “out-of-the-box” solutions that require minimal configuration. Both
Connectrix B-series and MDS-series products have entry-level models. An example of this type of switch
is the Connectrix DS-300B. It comes with an EZSwitchSetup wizard that automatically sets up the switch
ports and configures port-based zoning. The user simply plugs the hosts into the ports designated as host
ports, and the storage is plugged into the ports designated for storage. The EZSwitchSetup wizard does
the rest. These entry-level switches can also work as edge switches in bigger multi-switch SAN
topologies.
The edge tier, composed of small low-cost departmental switches, offers an inexpensive approach to add
more hosts into the fabric. The core or backbone tier usually consists of enterprise directors, which are of
higher cost and have higher availability. For the highest availability, we should have redundant edge
switches and core directors.
Dell EMC recommends that mirrored fabrics be identical for easier management and monitoring. Also,
multipathing software, such as PowerPath, is recommended on each host to manage load balancing and
failover.
An advantage of mirroring is that one side of the mirrored fabric can be brought down for maintenance or
upgrades, while the other side continues production operations.
Note: For simplicity, we do not show mirrored SAN fabrics in most of the graphics used in this course. But,
each SAN shown can and should be mirrored to achieve the highest availability.
Proper ISL design is critical for high performance and availability. In a poorly designed fabric, a single ISL failure can cause the entire fabric to fail. An overloaded link can cause an I/O bottleneck; it is imperative to have enough ISLs to ensure adequate availability and accessibility.
If possible, avoid hops across ISLs for host-to-storage connectivity whenever performance requirements
are stringent. For redundancy, each connection between switches should have at least two ISLs.
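As a practical note added for clarity (not from the original slide), the presence and health of ISLs can be confirmed from the switch CLI; output formats differ by firmware release:

    islshow          Connectrix B-series: lists each ISL with its speed and the neighboring switch
    show topology    Connectrix MDS-series: lists the connected switches and the interfaces forming the ISLs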
The three media options available when implementing an ISL are multimode ISL, single-mode ISL, and Dense Wavelength Division Multiplexing (DWDM) ISL. Some variables that affect supportable distances are propagation and dispersion losses, buffer-to-buffer credit, and optical power.
The table shows distances for different combinations of cables and transceivers. Shortwave SFPs with
multimode fiber have a maximum distance of 500 meters. Notice how the maximum distance decreases
as the speed increases.
For longer distances, longwave laser (LW, LWL) or extended longwave laser (ER, ELWL) over single-
mode 9-micron fiber optic cable is required. This solution is least susceptible to modal dispersion, enabling
distances between 10 km and 40 km, depending on the vendor and SFP type used. Remember that
adequate BB-Credit must be allocated to ports participating in long-distance connections.
For greater distances than shown here, Coarse or Dense Wavelength Division Multiplexing (CWDM or DWDM) may be used between switches. DWDM supports distances up to 3000 km.
Note: Distances over 200 km may require a Request for Price Quote (RPQ). An RPQ is a way to file a
request to qualify a certain configuration for support. The RPQ process lets Dell EMC technical experts
review and test a proposed solution that is outside the boundaries of previously qualified solutions.
The right-hand column shows the target code levels for each model. The target code is the minimum
revision of Fabric OS (FOS) that Dell EMC recommends.
There are currently three families of FOS that are supported on the B-series switches: 6.4.x, 7.x, and 8.x. The newest generation 6 models are only supported using FOS 8.x or higher.
This course focuses on the newer switches that have not reached EOL. Details for older switches can be
found in various Connectrix B-Series documents such as the Hardware Reference Manual for each model.
Note: This table was generated using the two documents: EMC Hardware Release and Service Dates and
Target-Revisions and Adoption Rates, found at support.emc.com
The right-hand column shows the target code levels for each model. The target code is the minimum
revision of NX-OS that Dell EMC recommends should be running on a switch or director.
This course focuses on the newer switches that have not reached EOL. Details for older switches can be
found in various Connectrix MDS-Series documents such as the Hardware Reference Manual for each
model.
Note: This table was generated using the two documents: EMC Hardware Release and Service Dates and
Target-Revisions and Adoption Rates, found at support.emc.com
Zoning is closely associated with the fabric name server. Each device that connects to a Connectrix switch
is required to log into and register with the name server. Each device can then query the name server to
discover other devices. The zoning function controls which end devices may be reported when responding
to a query.
For example: The administrator only wants to allow an initiator, such as an HBA on a host, to
communicate with certain storage targets. This control is accomplished by placing the initiator and
authorized targets into an administrator-defined zone for the fabric. All switches in the fabric have a copy
of the zoning information, and the connections are allowed/disallowed accordingly. A collection of zones
that can be activated throughout the fabric is called a Zone Configuration on Connectrix B-series and a
Zone Set on Connectrix MDS-series products.
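As a minimal CLI sketch of this control (the zone names, WWPNs, and VSAN number below are hypothetical, and exact syntax varies by firmware release), a single-initiator, single-target zone could be defined and activated as follows. On a Connectrix B-series switch:

    zonecreate "z_host1_arrayA", "10:00:00:05:1e:aa:bb:cc; 50:06:01:60:3b:11:22:33"
    cfgcreate "PROD_CFG", "z_host1_arrayA"
    cfgsave
    cfgenable "PROD_CFG"

The equivalent on a Connectrix MDS-series switch, in configuration mode:

    zone name z_host1_arrayA vsan 10
      member pwwn 10:00:00:05:1e:aa:bb:cc
      member pwwn 50:06:01:60:3b:11:22:33
    zoneset name PROD_ZS vsan 10
      member z_host1_arrayA
    zoneset activate name PROD_ZS vsan 10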
The answer is yes, switch settings must be compatible. When a link between two switches comes online, the switches share information about
themselves and about other switches in the fabric. If the configuration settings of the two switches are not
compatible, the links between them will segment. A switch with segmented links exists in a fabric all by
itself and cannot logically connect with other switches. Therefore, it is important that all configuration
parameters between two switches are compatible before trying to connect those switches.
Especially important are switch domain ID and priority assignments. It is possible to add a new switch to a
fabric and cause major problems. If the new switch has a higher priority, it can affect which switch
becomes principal switch. If the new switch has a duplicate domain ID, it could cause switches to change
addressing for end devices.
For these reasons, careful planning is required before connecting a switch to a production fabric.
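As an illustrative precaution (an addition, not from the original slide), domain IDs and the principal switch can be reviewed from the CLI before a new switch is connected; exact commands and output vary by platform and firmware:

    fabricshow                           Connectrix B-series: lists every switch in the fabric with its domain ID and flags the principal switch
    show fcdomain domain-list vsan 10    Connectrix MDS-series: lists the domain IDs in the given VSAN (VSAN 10 is just an example)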
For example, virtual fabrics allow us to keep the Finance infrastructure separate from the Sales and Engineering infrastructure of a company. Even though they share the physical fabric, the logical fabric for each group cannot interfere with the other groups.
For the Connectrix B-series switches, this feature is called Virtual Fabrics (VF). For Connectrix MDS-
series switches, this feature is called Virtual SAN (VSAN).
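As a minimal MDS-series sketch (the VSAN number, name, and interface below are hypothetical), a VSAN is created and a port is assigned to it from configuration mode:

    vsan database
      vsan 20 name Finance
      vsan 20 interface fc1/5

On the B-series, the Virtual Fabrics feature achieves the same logical separation by creating logical switches, each with its own fabric ID, and assigning ports to them.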
FCIP uses TCP/IP to transport the Fibre Channel frames across a LAN/WAN network. This solution
requires FCIP gateway switches that translate between Fibre Channel and the TCP/IP protocols.
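As a minimal configuration sketch for an MDS-series FCIP gateway (the profile number, interface, and IP addresses below are hypothetical, and details vary by model and NX-OS release), a tunnel is built from an FCIP profile bound to a local Gigabit Ethernet IP address and an FCIP interface that points at the remote peer:

    feature fcip
    fcip profile 10
      ip address 10.1.1.1
    interface fcip 10
      use-profile 10
      peer-info ipaddr 10.2.2.2
      no shutdown

The FCIP interface then carries Fibre Channel traffic as a virtual ISL (a VE_Port) between the two fabrics across the WAN.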
One of the ways to achieve this goal is to interconnect geographically dispersed SANs through reliable,
high-speed links. This approach involves transporting Fibre Channel block data over existing IP
infrastructures, which may currently be used throughout the enterprise.
The FCIP protocol standard has rapidly gained acceptance as a manageable, cost-effective way to blend
the best of both worlds: Fibre Channel block data storage and widely deployed IP infrastructure. As a
result, organizations now have an excellent way to protect, store, and move their data while leveraging
existing technology investments.
This solution allows new high-performance hosts to attach to existing Fibre Channel storage. It reduces
the number of cables, I/O cards, and switches required. Current Dell EMC recommendations specify that
each host should have two CNAs for redundancy to process the FCoE frames.
FCoE may be implemented without Fibre Channel switches. If both host and storage support FCoE, the
entire data path may be FCoE. FCoE is available on selected VNX and VMAX storage arrays, and
therefore does not necessarily require Fibre Channel switches.
FICON is a Fibre Channel level 4 protocol, similar to SCSI, that is proprietary to IBM. FICON is embedded in the Fibre Channel frames for transport.
IBM qualifies Fibre Channel Switches to operate in the FICON environment. There are several differences
in how Fibre Channel works in a FICON environment; for example, zoning is not used and device
discovery is different.
Verify that both the switch model and the installed firmware have been qualified by IBM before implementing a FICON solution.
Note: Not all functions available through the CLI are available through the GUI.
A SAN administrator uses a browser to access the Web Tools application on an individual Connectrix B-Series switch.
CMCNE is used to securely discover devices, map and highlight connections, and manage zoning.
CMCNE is available in three editions: CMCNE Professional Edition, CMCNE Professional Plus
Edition, and CMCNE Enterprise Edition.
Note: Not all functions available through the CLI are available through the GUI.
Example: show module and show environment commands are shown here.
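To expand on that example (a clarifying note, since the command output appears only on the slide): show module lists the installed line cards and supervisor modules with their status, and show environment reports power supply, fan, and temperature status. Related NX-OS commands such as show interface brief and show zoneset active are also commonly used for day-to-day checks.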
Device Manager is used to manage individual switches in the fabric. It simplifies the management of MDS-series switches and fabrics with a user-friendly point-and-click interface. Device Manager does not require a license for single-switch management.
The Java interface for DCNM-SAN is the traditional way to manage your MDS-Series fabric, but beginning
with DCNM 10.0 it is being phased out.
The Web User Interface for DCNM is a tool that is suited to data center virtualization. It has been revised in DCNM version 10 with a fresh new look and simplified menu navigation. Going forward, this Web User Interface should become the major tool used for configuration, management, and reporting. It includes dashboards, topology views, health and performance monitoring, configuration of zoning, and administration.
Information technology professionals must meet continually evolving demands by redesigning their data centers. Platform 2 designs are based on client/server topologies. New Platform 3 designs provide for social networking, cloud, mobility, and big data. Administrators are being asked to manage more data, with
fewer resources than ever before. Administrators need tools that can predict where resources such as
VMs and storage are needed. These tools must automatically move and provision resources, freeing up
administrators for other work.
Dell EMC ViPR SRM provides comprehensive monitoring, reporting, and analysis for heterogeneous
block, file, and virtualized storage environments. It enables you to visualize application-to-storage
dependencies, monitor, and analyze configurations and capacity growth, as well as optimize your
environment to improve return on investment.
ViPR SRM provides visibility into the physical and virtual relationships to ensure consistent service levels.
As you build your cloud infrastructure, ViPR SRM helps you ensure storage service levels while optimizing
IT resources—both being key attributes of successful cloud deployments.
Dell EMC ViPR SRM provides a topology view for validation and compliance. It also provides monitoring
and reporting for hosts, VMs, SAN ports, traffic utilization, and storage. ViPR SRM is switch-vendor agnostic and works with both Connectrix B-Series and Connectrix MDS-Series environments.
This screenshot is a sample of the files and folders created by running the Dell EMC Reports utility on a
Windows host. EMC Grab is a utility that is similar to EMC Reports, except that it is run on Unix hosts.
Notice that there is a different version for each host operating system.
This example is from support.emc.com where the correct version of the utility for a specific host operating
system can be downloaded.
This concludes the training. Proceed to the course assessment on the next slide.
This director has redundant CPU blades and Core Route blades. It has a 9 rack-unit (RU) form factor, and
supports up to 256 ports at full 16 Gbps speed. It has an eight-slot horizontal card cage, with four slots for
port blades. Hot swappable FRUs include port blades, optics, power supplies, and cooling fans.
This director has redundant CPU blades and Core Route blades. It has a 14 rack-unit (RU) form factor,
and supports up to 512 ports at full 16 Gbps speed. It has a 12-slot horizontal card cage, with eight slots
for port blades. Hot swappable FRUs include port blades, optics, power supplies, and cooling fans.
There are four slots for port blades in a chassis with an eight rack-unit (RU) form factor. Available port
blades include a 48-port Fibre Channel blade and a distance extension blade supporting both Fibre
Channel and FCIP. Fibre channel ports support 4 Gbps, 8 Gbps, 16 Gbps or 32 Gbps.
There are eight slots for port blades in a chassis with a 14 rack-unit (RU) form factor. Available port blades
include a 48-port Fibre Channel blade and a distance extension blade supporting both Fibre Channel and
FCIP. Fibre channel ports support 4 Gbps, 8 Gbps, 16 Gbps or 32 Gbps.
This switch has a one rack-unit (RU) form factor and dual hot-swappable power supplies. It also supports
ISL Trunking and integrated routing.
The MDS-9148S offers up to 48 autosensing 2/4/8/16 Gbps Fibre Channel ports. The switch can be
licensed in 12-port increments up to the full 48 ports. The MDS-9148S supports VSANs, Port Channels,
IVR, QoS, NPIV, and NPV.
The MDS-9396S comes with 48 or 96 ports activated in the base chassis. The 48-port model can be
upgraded to 96 active ports via port upgrade licenses, with 12-port increments.
MDS-9396S supports up to 500 buffer credits with the Base license and up to 4095 buffer credits per port
with the Enterprise license. Supported approximate distance for standard 2K FC frame size at 16 Gbps is
62 km with the Base license and 512 km with the Enterprise license.
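These figures match a commonly quoted rule of thumb (an explanatory note, not on the original slide): for full-size 2 KB frames, the buffer-to-buffer credits needed to keep a link busy are roughly the distance in km multiplied by the speed in Gbps, divided by 2. For example, 512 km x 16 Gbps / 2 = 4096, close to the 4095 credits of the Enterprise license, and 62 km x 16 Gbps / 2 = 496, close to the 500 credits of the Base license.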
In addition to SAN, MDS-9250i also supports dedicated IP storage networks. The SAN extension over IP
application package is enabled as standard on the two fixed Gigabit Ethernet IP storage service ports,
enabling features such as FCIP and compression on the switch without the need for extra licenses.
The MDS-9706 uses a nine Rack Unit (RU) form factor, and has four available slots for line cards with
either Fibre Channel ports, FCoE ports, or FCIP ports. Two slots are used for the redundant supervisor
modules.
The MDS-9706 provides redundancy on all major hardware components including the
supervisor and fabric modules and the power supplies.
Supported switching modules, or line cards, include both 16 Gbps and 32 Gbps 48-port Fibre Channel modules, the 24-port x 40 Gbps FCoE module, and the 48-port x 10 Gbps FCoE module. There is also a 24/10 SAN Extension module for FCIP.
The MDS-9710 uses a 14 Rack Unit (RU) form factor, and has eight available slots for line cards with
either Fibre Channel ports, FCoE ports or FCIP ports. Two slots are used for the redundant supervisor
modules.
The MDS-9710 provides redundancy on all major hardware components including the supervisor and
fabric modules and the power supplies.
Supported switching modules, or line cards, include both 16 Gbps and 32 Gbps 48-port Fibre Channel modules, the 24-port x 40 Gbps FCoE module, and the 48-port x 10 Gbps FCoE module. There is also a 24/10 SAN Extension module for FCIP.
The base model includes six fabric modules, two supervisor modules, three fan trays and 12 power
supplies. An extra four power supplies can be added to ensure the highest level of power availability.
Supported switching modules, or line cards, include both 16 Gbps and 32 Gbps 48-port Fibre Channel modules, the 24-port x 40 Gbps FCoE module, and the 48-port x 10 Gbps FCoE module. There is also a 24/10 SAN Extension module for FCIP.