
Welcome to Fibre Channel for Connectrix.

Click the Notes tab to view text that corresponds to the audio recording.

Click the Resources tab to download a PDF version of this eLearning.

Copyright © 2017 Dell Inc. or its subsidiaries. All Rights Reserved. Dell, EMC, and other trademarks are trademarks
of Dell Inc. or its subsidiaries. Other trademarks may be the property of their respective owners. Published in the
USA.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” DELL EMC MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO
THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR
PURPOSE.

Use, copying, and distribution of any DELL EMC software described in this publication requires an applicable software license. The trademarks, logos, and service marks
(collectively "Trademarks") appearing in this publication are the property of DELL EMC Corporation and other parties. Nothing contained in this publication should be construed
as granting any license or right to use any Trademark without the prior written permission of the party that owns the Trademark.

AccessAnywhere Access Logix, AdvantEdge, AlphaStor, AppSync ApplicationXtender, ArchiveXtender, Atmos, Authentica, Authentic Problems, Automated Resource Manager,
AutoStart, AutoSwap, AVALONidm, Avamar, Aveksa, Bus-Tech, Captiva, Catalog Solution, C-Clip, Celerra, Celerra Replicator, Centera, CenterStage, CentraStar, EMC
CertTracker. CIO Connect, ClaimPack, ClaimsEditor, Claralert ,CLARiiON, ClientPak, CloudArray, Codebook Correlation Technology, Common Information Model, Compuset,
Compute Anywhere, Configuration Intelligence, Configuresoft, Connectrix, Constellation Computing, CoprHD, EMC ControlCenter, CopyCross, CopyPoint, CX, DataBridge ,
Data Protection Suite. Data Protection Advisor, DBClassify, DD Boost, Dantz, DatabaseXtender, Data Domain, Direct Matrix Architecture, DiskXtender, DiskXtender 2000, DLS
ECO, Document Sciences, Documentum, DR Anywhere, DSSD, ECS, elnput, E-Lab, Elastic Cloud Storage, EmailXaminer, EmailXtender , EMC Centera, EMC ControlCenter,
EMC LifeLine, EMCTV, Enginuity, EPFM. eRoom, Event Explorer, FAST, FarPoint, FirstPass, FLARE, FormWare, Geosynchrony, Global File Virtualization, Graphic
Visualization, Greenplum, HighRoad, HomeBase, Illuminator , InfoArchive, InfoMover, Infoscape, Infra, InputAccel, InputAccel Express, Invista, Ionix, Isilon, ISIS,Kazeon, EMC
LifeLine, Mainframe Appliance for Storage, Mainframe Data Library, Max Retriever, MCx, MediaStor , Metro, MetroPoint, MirrorView, Mozy, Multi-Band
Deduplication,Navisphere, Netstorage, NetWitness, NetWorker, EMC OnCourse, OnRack, OpenScale, Petrocloud, PixTools, Powerlink, PowerPath, PowerSnap, ProSphere,
ProtectEverywhere, ProtectPoint, EMC Proven, EMC Proven Professional, QuickScan, RAPIDPath, EMC RecoverPoint, Rainfinity, RepliCare, RepliStor, ResourcePak,
Retrospect, RSA, the RSA logo, SafeLine, SAN Advisor, SAN Copy, SAN Manager, ScaleIO Smarts, Silver Trail, EMC Snap, SnapImage, SnapSure, SnapView, SourceOne,
SRDF, EMC Storage Administrator, StorageScope, SupportMate, SymmAPI, SymmEnabler, Symmetrix, Symmetrix DMX, Symmetrix VMAX, TimeFinder, TwinStrata, UltraFlex,
UltraPoint, UltraScale, Unisphere, Universal Data Consistency, Vblock, VCE. Velocity, Viewlets, ViPR, Virtual Matrix, Virtual Matrix Architecture, Virtual Provisioning, Virtualize
Everything, Compromise Nothing, Virtuent, VMAX, VMAXe, VNX, VNXe, Voyence, VPLEX, VSAM-Assist, VSAM I/O PLUS, VSET, VSPEX, Watch4net, WebXtender, xPression,
xPresso, Xtrem, XtremCache, XtremSF, XtremSW, XtremIO, YottaYotta, Zero-Friction Enterprise Storage.

Revision Date: June 2017

Revision Number: MR-1WP-FCFORCTX.1.0.1.0

Copyright © 2017 Dell Inc. Fibre Channel for Connectrix 1


This course covers basic Fibre Channel concepts required to design, implement, manage, and
troubleshoot a Dell EMC Connectrix SAN solution.

Copyright © 2017 Dell Inc. Fibre Channel for Connectrix 2


This module focuses on layered Fibre Channel concepts and addressing.

Copyright © 2017 Dell Inc. Fibre Channel for Connectrix 3


This lesson covers the Fibre Channel layered model and describes standards defined in Levels FC-0
through FC-4. The three Fibre Channel topologies are also discussed.

Copyright © 2017 Dell Inc. Fibre Channel for Connectrix 4


Fibre Channel is the major networking protocol used in enterprise Storage Area Networks or SANs. It is
also the main protocol implemented by Dell EMC Connectrix switches.

Fibre Channel was influenced by the work done on the OSI 7-layer model, and it also has a layered
structure. However, the layers are called levels in the Fibre Channel standard, and consist of levels FC-0
through FC-4 plus undefined Upper Layer Protocols.

Fibre Channel is a standard managed by the International Committee for Information Technology
Standards (INCITS) T11 technical committee.

No FC-3 common services are deployed in production SANs, so that layer is not discussed.

Upper layer protocols include storage volume managers, file systems, and the SCSI-3 protocol.

The FC-4 level maps between SCSI-3 operations and Fibre Channel Protocol, or FCP. If the ULP is
FICON, then the FC-4 level maps between the FICON operations and the Fibre Channel protocol.

The FC-2 level is concerned with encapsulating and de-encapsulating Fibre Channel frames.

The FC-1 level encodes and decodes bytes of information and inserts and removes extra characters that
are used for timing, flow control, demarcation, and so forth.

The FC-0 level defines physical interface characteristics.

Copyright © 2017 Dell Inc. Fibre Channel for Connectrix 5


Layered architecture allows us to replace lower layers in the protocol stack while keeping the upper layers
and the applications that rely on them unchanged. For example, what if we want to implement Fibre
Channel over IP protocol (FCIP)?

We keep the ULP and FC-4 layers, and only have to change the lower transport layers of the protocol
stack. In this example, we change to TCP/IP and Ethernet. In a Connectrix switch environment, this
means that some number of physical Fibre Channel ports are changed from Fibre Channel to Ethernet.

Fibre Channel Frames come into a Fibre Channel port, and the Fibre Channel data is encapsulated into
Ethernet frames. Then it is switched to an Ethernet outbound port on the switch. Specialized multiprotocol
Connectrix models support this functionality.

Another example is the FCoE, or Fibre Channel over Ethernet protocol stack, which some Connectrix
switches support.

Copyright © 2017 Dell Inc. Fibre Channel for Connectrix 6


Now let us talk about the different layers in more detail.

The FC-0 level describes the Fibre Channel physical link and connectors. The FC-0 level includes both
copper and fiber optic media and the associated transceivers capable of operating at 1, 2, 4, 8, 10, 16, or
32 Gbps. Standards are continually evolving to support higher and higher speeds.

Fibre Channel SANs are normally deployed in switched fabric topologies. Communications are full-duplex,
meaning you can simultaneously transmit and receive between devices. There are two conductors in each
cable: one for transmission in one direction, and one for transmission in the other direction.

Although Fibre Channel supports copper connectivity for short distances, within a rack for example, most
SAN applications use optical transceivers and cables.

Today, the LC style is by far the most popular Fibre Channel connector. Older Small Form-factor
Pluggable (SFP) transceivers support speeds up to 4 Gbps. SFP+ transceivers typically can negotiate at
three different bit-rates as shown. The latest SFP+ transceivers support speeds up to 32 Gbps. SFP+
transceivers come in versions to support standard and extended distances.

The mini-SFP (mSFP) transceivers and mini-LC (mLC) cables are a specialty solution offered by
Brocade to squeeze 64 connectors onto a single switch blade. Other than that specific application, you
probably will not see the mini LC connectors used.

Copyright © 2017 Dell Inc. Fibre Channel for Connectrix 7


Distance is a consideration when implementing Inter-Switch Links (ISLs) between Fibre Channel switches.
This is especially important when a fabric spans campus distances. For example, two datacenters a few
miles apart would use longwave laser (LWL) instead of shortwave (SW) SFP+ Transceivers.

SFPs must be paired with the correct type cable. SW transceivers use multimode cable while LWL
transceivers use single-mode cable.

Some variables that affect supportable distance are propagation and dispersion losses, buffer-to-buffer
credit, and optical power.

Using this chart, we can see that for shortwave SFPs and multimode fiber, distances are limited to 500
meters or less.

For longer distances, longwave laser (LW, LWL) or extended longwave laser (ER, ELWL) over single-
mode 9-micron fiber optic cable is required. This solution is least susceptible to modal dispersion, thereby
enabling distances between 10 km and 40 km, depending on the vendor and SFP type used.

For greater distances than shown here, Coarse Wavelength Division Multiplex / Dense Wavelength
Division Multiplex (CWDM/DWDM) may be used between switches. DWDM supports distances up to 3000
km, depending on the application.

Copyright © 2017 Dell Inc. Fibre Channel for Connectrix 8


Each device connected through Fibre Channel will use a port and cable to make the connection. Ports will
come online in a specific mode of operation. The port’s mode of operation defines its port type. Port types
defined in the Fibre Channel standard are shown here. The first three are the most common port types
found in storage area networks.

Node Ports (N_Ports) are used by host and storage devices to connect to SAN switches.

Fabric Ports (F_Ports) are only found on fabric switches and connect to N_Ports.

Expansion Ports (E_Ports) are also only found on fabric switches, and are used to connect to other
E_Ports to form ISLs between switches.

NL_Ports and FL_Ports, which are used in loop environments (thus the L in the port name), are not used
much in SAN environments anymore. Some switches have ports that come up in G_Port or generic mode
when they are not connected to anything.

Vendors may expand on the standard port types with proprietary modes of operation. For example, Cisco
introduced the proprietary TE_Port to extend the E_Port for VSAN capabilities. Brocade has proprietary
EX_Ports that it uses for Fibre Channel routing.

Copyright © 2017 Dell Inc. Fibre Channel for Connectrix 9


Physically, hosts use Host Bus Adapters, or HBAs, to connect to a Connectrix switch in a Fibre Channel
fabric. An HBA usually plugs into a PCI bus slot on the server and is installed with drivers to provide Fibre
Channel connectivity. It is similar to the NIC card that provides connectivity to the Local Area Network.

The HBA will operate in N_Port mode to attach to a Connectrix switch in the SAN fabric. HBAs come in
single port and multi-port versions. HBAs support various Fibre channel speeds up to 32 Gbps.

Ports on the storage array will also operate in N_Port mode when attached to Connectrix switches.

Dell EMC qualifies HBAs and associated software from various vendors to work with different hosts,
operating systems, switches, and storage arrays. Do not use an HBA that has not been qualified by Dell EMC.

See the Dell EMC support matrix for qualified HBAs and host systems.

Copyright © 2017 Dell Inc. Fibre Channel for Connectrix 10


FC-1 defines the transmission protocol including serial encoding and decoding rules, special characters,
and error control.

At less than 10 Gbps, each information byte (8 bits) is encoded into a 10-bit Transmission Character. This
is fairly inefficient, as 20% of the transmitted bits are effectively overhead.

For interfaces operating at 10 Gbps and above, every 8 bytes (64 bits) is encoded into a 66-bit
transmission character. This results in only 3% overhead, so the actual clock speeds do not have to be as
fast to achieve the desired effective data rates.
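
As a rough sketch (not part of the course material), the overhead of the two encoding schemes can be compared directly. The line rates used below are the nominal Fibre Channel baud rates for 8 Gb/s and 16 Gb/s links and are shown only for illustration.

def payload_rate(line_rate_gbps: float, data_bits: int, coded_bits: int) -> float:
    """Usable payload throughput in Gbps after encoding overhead."""
    return line_rate_gbps * data_bits / coded_bits

# 8b/10b: 2 of every 10 transmitted bits are overhead.
print(f"8b/10b overhead:  {2/10:.0%}")     # 20%
# 64b/66b: 2 of every 66 transmitted bits are overhead.
print(f"64b/66b overhead: {2/66:.0%}")     # 3%

print(payload_rate(8.5, 8, 10))            # 8GFC:  ~6.8 Gbps of payload
print(payload_rate(14.025, 64, 66))        # 16GFC: ~13.6 Gbps of payload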

The primary rationale for using a transmission code is to improve the transmission characteristics of the
serial stream of bits on the cable. The bit stream must be DC balanced to support the electrical
requirements of the receiving units. The Transmission Characters ensure that enough transitions are
present in the serial bit stream to make clock recovery possible. This is also the first level of error
checking, as most single bit errors will be detected as an invalid character.

The encoding process creates two types of Transmission Characters: Data characters and Special
characters. Certain combinations of Transmission Characters, referred to as Ordered Sets, have special
meaning.

The Ordered Sets are used to identify frame boundaries (for example, Start of Frame (SOF) and End of Frame (EOF) characters), and also to transmit primitive function requests (like R_RDY, which is used
to replenish buffer-to-buffer credit). Ordered sets are also used to maintain proper link transmission
characteristics by sending idle characters during periods of inactivity.

Copyright © 2017 Dell Inc. Fibre Channel for Connectrix 11


The FC-2 level serves as the transport mechanism of Fibre Channel. It includes data framing, frame
sequencing, flow control, and class of service. There are several defined classes of service, but SANs
usually use class 3, which is a connectionless class of service. Class 3 frames are sent without verification
that the frame is received.

Frames contain the information to be transmitted, the address of the source and destination ports, and link
control information. Frames are broadly categorized as data frames and link control frames.

It is the FC-2 layer's responsibility to break the data to be transmitted into frame size chunks and
reassemble the received frames into sequences and exchanges for the upper level protocols.

Copyright © 2017 Dell Inc. Fibre Channel for Connectrix 12


Frames are the basic building blocks of a Fibre Channel connection. The frames contain header
information, data, and Cyclic Redundancy Check (CRC) for error checking. The frame is delineated by
Start of Frame (SOF) and End of Frame (EOF) special characters. All information in Fibre Channel is
passed in frames. The maximum amount of data carried in a frame is 2112 bytes with the total maximum
frame size of 2148 bytes.

The header contains the destination and source addresses, which allow the frame to be routed to the
correct port in the Fibre Channel network.

The Type field interpretation is dependent on whether the frame is a link control or a Fibre Channel data
frame. For example, if the frame is a data frame, a value of 08 in the Type field indicates that the Data field carries SCSI FCP (Fibre Channel Protocol) information.

Notice that the header also keeps track of the sequence number, originator exchange ID (OX_ID) and
receiver exchange ID (RX_ID) so each frame of data can be placed in its proper context.
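
As a small sanity check, the frame size budget described above can be written out. The 2112-byte maximum payload and 2148-byte maximum frame come from this page; the delimiter, header, and CRC sizes are the standard Fibre Channel values.

SOF_BYTES = 4             # Start of Frame ordered set
HEADER_BYTES = 24         # header: addresses, Type, SEQ_ID, OX_ID, RX_ID, ...
MAX_PAYLOAD_BYTES = 2112  # maximum data field
CRC_BYTES = 4             # Cyclic Redundancy Check
EOF_BYTES = 4             # End of Frame ordered set

MAX_FRAME_BYTES = SOF_BYTES + HEADER_BYTES + MAX_PAYLOAD_BYTES + CRC_BYTES + EOF_BYTES
assert MAX_FRAME_BYTES == 2148   # matches the total maximum frame size above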

Copyright © 2017 Dell Inc. Fibre Channel for Connectrix 13


FC-2 also provides flow control for buffer management and to prevent data overruns. Buffer-to-buffer
credit (BB_Credit) is used to control the flow of data across a link. This is especially critical when data is
transmitted over long-distance cables and DWDM. In those cases, extra buffering is usually required.
Credit is negotiated when each link first comes up and reflects the number of receive buffers that are
available to receive Fibre Channel frames. A device may only send a frame when it has credit to do so.
This avoids overrunning the receive buffers and consequently losing data frames.

Every time a frame is sent, credit is decremented by one. Every time a receiver makes a frame buffer
available, it sends an R_RDY primitive to the sender replenishing a credit.

Most data center Fibre Channel links are well under 500 meters. So we normally do not have to worry
about BB_Credit because the default allocation is sufficient. However, on long-distance links, there may
be several frames in transit on the link, and BB_Credit is used up before R_RDYs can be sent back.
Performance is impacted while the transmitter waits to receive credit. For this reason, we usually need to
increase the number of buffer-to-buffer credits available on long-distance links.
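
The credit accounting and the long-distance sizing described above can be sketched as follows. This is a simplified model (it ignores encoding overhead and assumes roughly 5 microseconds of propagation per kilometer of fiber), not a vendor sizing formula.

class TransmitPort:
    def __init__(self, negotiated_credit: int):
        self.credit = negotiated_credit   # receive buffers advertised by the peer

    def can_send(self) -> bool:
        return self.credit > 0            # a frame may only be sent when credit is available

    def on_frame_sent(self):
        self.credit -= 1                  # each frame sent consumes one credit

    def on_r_rdy(self):
        self.credit += 1                  # each R_RDY replenishes one credit

def credits_for_distance(distance_km: float, line_rate_gbps: float,
                         frame_bytes: int = 2148) -> int:
    """Rough credits needed to keep a long link full for a whole round trip."""
    round_trip_s = 2 * distance_km * 5e-6             # ~5 microseconds per km, each way
    frame_time_s = frame_bytes * 8 / (line_rate_gbps * 1e9)
    return int(round_trip_s / frame_time_s) + 1

print(credits_for_distance(50, 8))   # a 50 km link at 8 Gbps needs on the order of 230 credits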

Copyright © 2017 Dell Inc. Fibre Channel for Connectrix 14


FC-4 is the highest level in the Fibre Channel structure. It defines how the upper layer and application
protocols can be transported over Fibre Channel.

It specifies how upper layer protocols map to the lower Fibre Channel levels.

The purpose of an FC-4 protocol mapping is to make a logical connection between the ULP and Fibre Channel's transport facilities, linking the two architectures. For SANs
carrying block storage traffic, SCSI is the ULP that is mapped to SCSI-3 Fibre Channel Protocol.

Copyright © 2017 Dell Inc. Fibre Channel for Connectrix 15


Upper layer operations are divided into Information Units that map into exchanges, sequences, and frames. The Information Units are the data passed between the FC-4 level and the ULP.

As an example of how all of this comes together, let us look at a single SCSI I/O operation. A SCSI Read
command is executed in a three-step operation. This three-step I/O operation is known as a Fibre Channel
exchange. The exchange is divided into three sequences, one for each information unit.

In the first sequence, the SCSI initiator sends the SCSI Read command, which includes the block address
on the LUN and how many blocks to read. Second, the SCSI target sends the requested read data.
Finally, after the data has been sent, the target reports the status of the operation in a SCSI response.

These three sequences are divided into frames. Each frame has the Exchange ID (OX_ID), Sequence ID
(SEQ_ID) and frame number in the frame header, just in case the frames arrive out of order.

A Fibre Channel frame holds approximately 2 Kilobytes, so if the read sequence is reading 100 blocks of
data (around 50 KB), it will take many frames to complete the sequence.
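
A quick worked example of that frame count, assuming 512-byte SCSI blocks and the 2112-byte maximum frame payload:

import math

blocks = 100
block_bytes = 512
payload_bytes = 2112

transfer_bytes = blocks * block_bytes              # 51,200 bytes (around 50 KB)
frames = math.ceil(transfer_bytes / payload_bytes)
print(transfer_bytes, frames)                      # 51200 bytes -> 25 frames in the data sequence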

Copyright © 2017 Dell Inc. Fibre Channel for Connectrix 16


There are three basic topologies in Fibre Channel:

The point-to-point topology provides a dedicated full-duplex link between two nodes.

Fibre Channel Arbitrated Loop, or FC-AL, connects between 2 and 126 nodes that share a common loop. Practical SAN implementations seldom have more than 8 nodes before
the loop becomes too congested with traffic to add more devices. FC-AL is usually implemented through
an FC-AL hub. In the early days of Fibre Channel these were quite popular in SANs, but now hubs have
been almost entirely replaced by switches, especially for SAN applications.

Fibre Channel Switched Fabric or FC-SW provides a dynamic switched fabric with a theoretical address
space of more than 15 million nodes. Nearly all data centers that require Fibre Channel communication
between more than two devices use the switched fabric topology to build their SAN. It is very scalable,
with some data centers running fabrics with more than 4000 ports. All Connectrix Fibre Channel switches
operate in switched fabric mode.
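
For reference, the "more than 15 million" figure comes from the 24-bit address structure covered in the next lesson: 239 usable switch Domain_IDs, each with a one-byte Area_ID and a one-byte Port_ID.

print(239 * 256 * 256)   # 15,663,104 possible N_Port addresses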

Copyright © 2017 Dell Inc. Fibre Channel for Connectrix 17


This lesson covers Fibre Channel node addressing and World Wide Names.

Copyright © 2017 Dell Inc. Fibre Channel for Connectrix 18


A Fabric is a physical or virtual space in which all authorized storage nodes may communicate with each
other. It can be created with a single switch or a group of switches connected together.

The primary function of the fabric is to receive data frames from source N_Ports and forward them to the
destination N_Ports. Each frame has a destination address so the fabric knows where to send it. Each
frame also has a source, or return address, so the receiver knows where to send a response.

How do end nodes get their Fibre Channel addresses? We will cover that next.

Copyright © 2017 Dell Inc. Fibre Channel for Connectrix 19


Each N_Port must be assigned a 24-bit address, called a Fibre Channel ID (FCID). This address is also
sometimes referred to as a Port ID (PID). The FCID must be assigned before an N_Port is allowed to
communicate on a Fibre Channel network. With a switched fabric, the switch itself automatically assigns
an address to the N_Port. The switch will assign an FCID that is unique in the fabric. This address is
assigned when the N_Port logs in to the fabric.

The address is specified as six hexadecimal digits and is divided into three fields: Domain_ID, Area_ID,
and Port_ID.

Each switch is assigned a unique Domain_ID between 1 and 239. The switch Domain_ID becomes the
first byte of the address for every N_Port attached to that switch.

The Area_ID and Port_ID bytes are assigned in different ways by different switch vendors. One of the
reasons there are incompatibilities between different switch vendors is because they handle address
assignment differently. This is why you will normally find a datacenter has Fibre Channel switches from
only one vendor.
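
A minimal sketch of splitting an FCID into those three fields is shown below; the example FCID is made up for illustration.

def parse_fcid(fcid: int):
    domain_id = (fcid >> 16) & 0xFF   # first byte: switch Domain_ID (1-239)
    area_id = (fcid >> 8) & 0xFF      # second byte: Area_ID
    port_id = fcid & 0xFF             # third byte: Port_ID
    return domain_id, area_id, port_id

print(parse_fcid(0x030400))   # (3, 4, 0): a port on the switch with Domain_ID 3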

Copyright © 2017 Dell Inc. Fibre Channel for Connectrix 20


Each switch runs fabric services at the FC-2 level to manage the fabric environment. Each fabric service is
assigned a standard well known address. Three of them are shown here.

The F_Port server login service is assigned well known address FFFFFE. This is the address that an
end node connects to when performing a Fabric Login (FLOGI) operation. Each end node must have an
FCID to operate. Each end node sends a FLOGI frame to address FFFFFE and receives a response that
includes the FCID the node will use for all other Fibre Channel communications.

The node must then log in and register with the fabric Name Service. A Port Login (PLOGI) operation is
used to do this. When a node sends a PLOGI frame to the fabric name server, it uses destination ID
FFFFFC.

The Name Service is used to store information about all devices attached to the fabric. After performing a
PLOGI to the name service, each node registers its identifying information and capabilities. The name
service stores all these entries in a local database on each switch and distributes the information to other
switches in the fabric. Devices can query the name service to find other devices logged in to the fabric.

The Fabric Controller provides state change notifications to all registered nodes in the fabric. A state
change is when a link in the fabric transitions from up to down, or down to up. Hosts require notification
when storage targets have link state changes. A node registers for state change notifications by sending
an SCR (State Change Registration) frame to address FFFFFD.

Copyright © 2017 Dell Inc. Fibre Channel for Connectrix 21


Each Fibre Channel port has a unique World Wide Name (WWN), a 64-bit identifier used in Fibre Channel networks to uniquely identify each element in the network. A WWN is assigned to each
host bus adapter port, switch port, and storage array port. These WWN assignments are made by the
vendor at the time of manufacture, or through firmware based on a chassis serial number. It is similar to
the MAC address in an Ethernet network.

There are two designations of WWN: the World Wide Port Name (WWPN) and the World Wide Node Name
(WWNN). Both are globally unique 64-bit identifiers. The difference lies in where each value is physically
assigned. For example, a server may have dual port HBAs installed. Each port would receive a unique
WWPN, and may share a WWNN.
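
As an illustration (not from the course), a WWN is simply a 64-bit value that is conventionally written as eight colon-separated hexadecimal bytes; the example value below is fabricated.

def format_wwn(wwn: int) -> str:
    return ":".join(f"{(wwn >> shift) & 0xFF:02x}" for shift in range(56, -8, -8))

print(format_wwn(0x10000000C9A1B2C3))   # 10:00:00:00:c9:a1:b2:c3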

Copyright © 2017 Dell Inc. Fibre Channel for Connectrix 22


Shown are three examples of World Wide Port Names (WWPN or PWWN) for different products that may
connect to a Fibre Channel SAN. Notice that each vendor has its own unique way of filling out the bits in
the WWN.

Copyright © 2017 Dell Inc. Fibre Channel for Connectrix 23


This module covered Fibre Channel architectural layers, switched fabric topology, and addressing.

Copyright © 2017 Dell Inc. Fibre Channel for Connectrix 24


This module focuses on zoning and how it is related to the fabric name services. Common fabric
topologies are also discussed.

Copyright © 2017 Dell Inc. Fibre Channel for Connectrix 25


This lesson covers Fibre Channel zoning, the various logins, and name services.

Copyright © 2017 Dell Inc. Fibre Channel for Connectrix 26


Before we talk about how an end device logs in to a Fibre Channel fabric, we must understand fabric
zoning. Zoning is a Fibre Channel fabric function that allows devices attached to the fabric to be logically
separated into groups. Each group, or zone, facilitates communication between devices assigned to the
same zone. Devices are not allowed to communicate with devices in other zones. A zoning database
contains one or more individual zones. The database is distributed to all switches in the fabric. Each zone
will have one initiator and one or more target devices.

When a frame arrives at a switch port the destination address is read and compared to the zone Access
Control List (ACL) for the port. If the destination is allowed by the ACL, the frame is forwarded. Otherwise
the frame is dropped.

A collection of zones that can be activated throughout the fabric is called a Zone Configuration or a Zone
Set.
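
A hedged sketch of that check is shown below. A real switch compares the frame's source and destination FCIDs against an ACL programmed from the active zone set; this version uses WWPN zone members for readability, and all zone names and WWPNs are placeholders.

active_zone_set = {
    "zone_hostA_arrayA": {"10:00:00:00:c9:aa:aa:aa", "50:06:01:60:bb:bb:bb:bb"},
    "zone_hostB_arrayB": {"10:00:00:00:c9:cc:cc:cc", "50:06:01:61:dd:dd:dd:dd"},
}

def may_communicate(source: str, destination: str) -> bool:
    """Forward the frame only if source and destination share at least one zone."""
    return any(source in members and destination in members
               for members in active_zone_set.values())

print(may_communicate("10:00:00:00:c9:aa:aa:aa", "50:06:01:60:bb:bb:bb:bb"))  # True: same zone
print(may_communicate("10:00:00:00:c9:aa:aa:aa", "50:06:01:61:dd:dd:dd:dd"))  # False: frame dropped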

Copyright © 2017 Dell Inc. Fibre Channel for Connectrix 27


Each device that connects to a Fibre Channel switch goes through a process to log in to the fabric and
register with the fabric name server. Once logged in and registered, each host device queries the name
server to discover storage target devices.

As an example: Initiator A logs into the fabric and queries the name server about logged in devices that it
may communicate with.

The name server checks the logged in devices, and determines which are zoned to talk to Initiator A. The
zoning function controls this process by permitting only ports in the same zone to be discovered. In our
example, Target A is returned to the initiator as an available logged in device.

Zoning prevents unauthorized devices from communicating with other devices. If Initiator A tries to send a
frame to Target B, the frame is dropped at the ingress port of the switch and is not forwarded because
Target B is not in the same zone as Initiator A.

Copyright © 2017 Dell Inc. Fibre Channel for Connectrix 28


There are two ways to define the members of a zone.

One way is by using World Wide Port Names. This type of zoning is sometimes called soft zoning. In
this type of zoning, the administrator controls connectivity by putting the WWNs of devices that are
allowed to communicate into a common Zone. WWNs defined as part of a zone 'see' each other
regardless of the switch port they are plugged into. With this type of zoning, if a host HBA is replaced, the
zone must be modified with the WWN of the replacement HBA. This type of zoning is recommended by
Dell EMC.

The other way to define zone members is by using the Domain_ID and port number. This type of zoning
is often called port-based zoning, or hard zoning. Each port is only allowed to see ports that are in the
same zone. If a cable is moved to a different port, the zone has to be modified before communication
takes place. Some administrators like this type of zoning because if they are changing hosts and HBAs,
they do not have to change the zoning because of WWN changes. All they need to do is plug the correct
devices into the correct ports.

With both types of zones, zone members may exist in multiple zones. Port zoning is not recommended
because any device regardless of its WWN can be granted access to unauthorized devices simply by
plugging it into a zoned port on the switch.

A variation of these two types of zones is the hybrid zone which defines some members by WWN and
others by their domain and port.

Since WWNs and FCIDs are not very user-friendly, human readable names called aliases may be defined
to represent the zone members.

Copyright © 2017 Dell Inc. Fibre Channel for Connectrix 29


Now that we know something about Fibre Channel addresses, WWNs, zoning, and fabric services, let us
put everything together. We will see how an HBA port on a host logs in to the fabric and discovers target
LUNs on fabric attached storage arrays.

The steps that a Fibre Channel N_Port must go through are listed here. The first step is to bring the link
between the N_Port and the switch F_Port online. This is done by exchanging several primitive sequences
until both ports are synchronized and sending Idle sequences to one another.

Copyright © 2017 Dell Inc. Fibre Channel for Connectrix 30


After the link comes online, the N_Port needs an address. So it sends a Fabric Login (FLOGI) frame to the
well known address FFFFFE of the F_Port Server. The F_Port Server responds with a Link Services
Accept frame which includes the new FCID that is being assigned to the N_Port. In this example, the FCID
will begin with 03, because the HBA is connected to a switch with Domain_ID 3.

Copyright © 2017 Dell Inc. Fibre Channel for Connectrix 31


Once our HBA N_Port has a Fibre Channel address, it can log in to the Fabric Name Server. Each switch
in the fabric has a copy of the name server database, and when new N_Ports are registered, that
information is passed to all of the other switches in the fabric.

Copyright © 2017 Dell Inc. Fibre Channel for Connectrix 32


When logged in to the name server, our HBA port will register important information about itself by
transmitting several Fibre Channel services frames. These frames will specify which Fibre Channel
classes of service are supported by the HBA, the FC-4 protocols supported, and its mode of operation
(N_Port). HBAs from different vendors, and storage device ports, may register additional information such
as WWNs and symbolic node names that include text that describes the manufacturer, and possibly
information like firmware level or serial number.

Copyright © 2017 Dell Inc. Fibre Channel for Connectrix 33


Once the HBA has sent all the frames to register with the name server, it then begins a series of queries.
This step allows the HBA to find out what storage devices are also online and registered with the name
server. Each query is sent with a Fibre Channel services frame and responded to with an Accept frame.

The first query asks the name server for a list of FCIDs (PIDs) that have registered as SCSI FCP devices.
Once it gets this list of FCIDs, the HBA will ask the name server for relevant information about each
registered FCID. For example, it will get port WWNs associated with each FCID.

Now, this is where zoning is important. The name server will only return information about registered
devices that are in the same zone as the HBA port. So if there are ten storage array ports registered in the
fabric, but only one of them is in the same zone as the HBA port, then the HBA will only learn about that
one storage device.

When the HBA is done asking the name server for information, it will log out of the name server.

Copyright © 2017 Dell Inc. Fibre Channel for Connectrix 34


In step 6, the HBA will register for state change notifications. Some HBA vendors may actually perform
this registration earlier in the initialization process.

A state change registration (SCR) frame will be sent to well known address FFFFFD of the Fabric
Controller, and an Accept frame will be returned in response.

The reason an HBA registers for state change notifications is that it wants to be notified when a device
that is in the same zone has a link come online, or a link goes offline. The HBA is responsible for querying
the name server when it receives a registered state change notification and checking for changes in target device status.

Copyright © 2017 Dell Inc. Fibre Channel for Connectrix 35


Once the HBA has a list of storage device port IDs, it will log in to those ports. So in our example, the HBA
discovered that FCID 040600 was logged in with the name server, so it sends a Port Login (PLOGI) Extended Link
Services frame to that address. The storage array port responds with an Accept frame.

Copyright © 2017 Dell Inc. Fibre Channel for Connectrix 36


Once the HBA port has logged in to the storage array port at a Fibre Channel level, it must log in at a SCSI
process level. Since the HBA wants to exchange SCSI data over Fibre Channel, it must perform a
Process Login by sending a Process Login (PRLI) Extended Link Services frame to the storage array port.
The storage array port responds with an Accept frame.

Information carried in these frames specifies which port is the SCSI Initiator and which port is the SCSI
target, and whether to use XFER_RDY on read operations (usually XFER_RDY is only used for write operations).

Copyright © 2017 Dell Inc. Fibre Channel for Connectrix 37


Finally, after all of the previous steps have been completed, the HBA can start sending frames with SCSI
information. Usually, the first SCSI frame sent will be a SCSI INQUIRY command to get information about the
SCSI LUNs on the target device.
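
The whole initialization flow walked through on the last several pages can be condensed into a short summary. The well known addresses and the example target FCID 040600 come from the text; the formatting code is purely illustrative.

LOGIN_SEQUENCE = [
    ("Link initialization",          "primitive sequences until both ports send Idles"),
    ("FLOGI  -> FFFFFE",             "F_Port server assigns the N_Port its FCID"),
    ("PLOGI  -> FFFFFC",             "log in to the fabric Name Service"),
    ("Register with Name Service",   "class of service, FC-4 protocols, port type"),
    ("Query Name Service",           "returns only devices zoned with this port"),
    ("SCR    -> FFFFFD",             "register for state change notifications"),
    ("PLOGI  -> 040600",             "Fibre Channel login to the discovered storage port"),
    ("PRLI   -> 040600",             "SCSI process login: initiator and target roles"),
    ("SCSI INQUIRY",                 "first SCSI command, discovers LUNs on the target"),
]

for step, purpose in LOGIN_SEQUENCE:
    print(f"{step:28} {purpose}")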

Copyright © 2017 Dell Inc. Fibre Channel for Connectrix 38


This lesson covers Fibre Channel deployment scenarios, including dual fabrics, core-edge topology, SAN
Routing, and SAN Virtualization.

Copyright © 2017 Dell Inc. Fibre Channel for Connectrix 39


The highest SAN availability comes when fabrics are mirrored.

Each topology we discuss in this course can, and should, be mirrored to achieve the highest availability. Dell EMC recommends that mirrored fabrics be identical for easier management and monitoring. However, an
advantage of mirroring is that one side of the mirrored fabric can be brought down for maintenance or
firmware upgrades, or to change switch vendors, while the other side continues production operations.

Copyright © 2017 Dell Inc. Fibre Channel for Connectrix 40


A fabric with a Core-Edge topology is fairly simple to design and implement. There are two variations: two-
tier or three-tier.

In a two-tier Core-Edge topology, all hosts are connected to edge switches and all storage is connected to
the core switch.

With a Core-Edge three-tier topology, there are two edge tiers connected to a central core. All hosts are
connected to one edge, and all storage is connected to the other edge. The core tier would then only be
used for ISLs.

The Edge tier, usually made up of small, low-cost departmental switches, offers an inexpensive way to add more hosts to the fabric.

The Core or backbone tier usually consists of enterprise directors, which are higher cost, but have higher
availability. For the highest availability, we should have redundant edge switches and core directors, and
redundant ISLs connecting the switches.

Copyright © 2017 Dell Inc. Fibre Channel for Connectrix 41


There are two types of mesh topologies used to build a SAN: full mesh and partial mesh. A full mesh
topology has each switch connected to every other switch. In a partial mesh topology, each switch is
connected to several other switches, but not to every switch. A partial mesh topology is more practical to
build when there are a large number of switches. Full mesh topology provides maximum availability.
However, this comes at the cost of usable ports and can become prohibitively expensive as the number of switches grows, because we start to use many of our ports for ISLs instead of for end devices.

A compound core-edge topology is a combination of the full mesh and core-edge three-tier topologies.
In this configuration, all host-to-storage traffic must traverse the core or connectivity tier. Hosts and
storage are connected to the edge switches. This type of topology is found in scenarios where several
smaller SAN islands are consolidated into a single large fabric. It can also be found where SAN-NAS
integration requires everything to be plugged together for ease of management and for backups.

Copyright © 2017 Dell Inc. Fibre Channel for Connectrix 42


Routing between fabrics is done by a Fibre Channel switch with SAN routing enabled. Fibre Channel
switches that perform a routing service do not allow the two fabrics to merge. Special vendor-dependent
zone types will allow certain devices in one fabric to communicate with devices in another fabric. The SAN
administrator will configure which devices are allowed to communicate across fabric boundaries.

Copyright © 2017 Dell Inc. Fibre Channel for Connectrix 43


N_Port ID Virtualization (NPIV) is a T11 standard that provides a means to assign multiple FCIDs to a
single N_Port. This feature allows multiple VMs sharing an N_Port to use different FCIDs and allows
access control, zoning, and port security to be implemented at the application level.

You must globally enable NPIV for the switch to allow the NPIV-enabled applications to use multiple
N_Port identifiers. Only NPIV-capable HBAs support this feature. As each VM powers up, it creates a
Virtual Port (VPORT) on the HBA. Each VPORT has its own WWNN and unique WWPN.

With NPIV, the first time the physical HBA port logs in to the fabric, it does it in the normal way, with a
FLOGI frame, and receives an FCID from the F_Port server. Subsequent initialization logins by VMs using
the same HBA port will use a Fabric Discovery (FDISC) login frame instead of a FLOGI to get an FCID
assignment.
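
The FLOGI-then-FDISC pattern can be sketched as follows; the FCID allocator and values are placeholders, not how a switch actually assigns addresses.

class NpivPort:
    def __init__(self):
        self.assigned_fcids = []

    def login(self, allocate_fcid):
        frame = "FLOGI" if not self.assigned_fcids else "FDISC"   # first login uses FLOGI, the rest FDISC
        fcid = allocate_fcid()
        self.assigned_fcids.append(fcid)
        return frame, fcid

fcid_pool = iter(range(0x030100, 0x030110))   # toy allocator on Domain_ID 3
port = NpivPort()

frame, fcid = port.login(lambda: next(fcid_pool))
print(frame, hex(fcid))   # FLOGI 0x30100  (physical HBA port)
frame, fcid = port.login(lambda: next(fcid_pool))
print(frame, hex(fcid))   # FDISC 0x30101  (first VM's VPORT)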

Copyright © 2017 Dell Inc. Fibre Channel for Connectrix 44


Fibre Channel switches can be virtualized. This means that a switch or chassis may be partitioned into
multiple logical switches, each belonging to a different fabric.

As an example, we have taken sixteen ports from the physical switch on the left, and created three logical
switches in three separate fabrics on the right. Each fabric is logically isolated from the other fabrics. The
logical switches in these fabrics each have their own Domain_ID and can use their ports to attach to Fibre Channel devices or to other physical switches (and their logical switches, if they have them).

An administrator creates a logical switch and assigns it physical ports and a fabric or virtual SAN identifier.
Physical ports may only be part of one logical switch at a time.

This feature allows us to take a single physical fabric infrastructure and virtualize it, so that we can
segregate functions, applications, or groups within an organization.

Copyright © 2017 Dell Inc. Fibre Channel for Connectrix 45


This module covered zoning, fabric logins and the fabric name service. It also covered common fabric
topologies and features.

Copyright © 2017 Dell Inc. Fibre Channel for Connectrix 46


This module focuses on deployment considerations for common Connectrix Fibre Channel features.

Copyright © 2017 Dell Inc. Fibre Channel for Connectrix 47


This lesson covers how to define zone members and then create and activate zoning configurations and
zone sets.

Copyright © 2017 Dell Inc. Fibre Channel for Connectrix 48


As mentioned previously, there are two ways to define zone members:
• World Wide Port Names (WWPN or PWWN)
• a Domain_ID and Port Number

The World Wide Port Name type of zoning is shown on the left. The zone is given a name, and the WWNs of the devices that we want to allow to communicate are added to the zone.

The Domain_ID and Port Number type of zoning is shown on the right. Again, the zone is given a name, and
the FCIDs, with the domain and port number, are added to the zone.
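
The two member-definition styles can be represented like this; the zone names, WWPNs, and domain/port values are placeholders, not taken from the course.

wwpn_zone = {
    "name": "zone_hostA_hba0_arrayA_port0",
    "members": ["10:00:00:00:c9:a1:b2:c3",       # host HBA port WWPN
                "50:06:01:60:3b:20:11:22"],       # storage array port WWPN
}

port_zone = {
    "name": "zone_dom3p4_dom4p6",
    "members": [(3, 4),                           # (Domain_ID, port number) of the host port
                (4, 6)],                          # (Domain_ID, port number) of the storage port
}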

Copyright © 2017 Dell Inc. Fibre Channel for Connectrix 49


The slide shows the steps for creating zoning for a Fibre Channel fabric. The process of zoning consists of
identifying host and storage pairs, adding them to a zone, adding this new zone to a zone set or zone
configuration, and then activating the zone set or configuration in the fabric where the host and storage
pair reside. Once the zone set is activated, the host and storage pair can communicate with each other.

An active zone set or zone configuration is the collection of zones currently being used by the switched
fabric to manage data traffic.

Single HBA zoning, sometimes called single initiator zoning, consists of a zone with a single HBA port
(initiator) and one or more storage ports (targets). A port can reside in multiple zones. This provides the
ability to map a single storage port to multiple host ports. For example, a VMAX FA port or a VNX SP port
can be mapped to multiple single-HBA zones. This allows multiple hosts to share a single storage port.

Copyright © 2017 Dell Inc. Fibre Channel for Connectrix 50


Dell EMC recommends using single HBA zoning. Other best practices include using a consistent and
detailed naming convention for zones and zone sets that clearly identifies the ports to be zoned, using
WWPN-based zoning, and developing and following robust change control procedures for zoning.

Copyright © 2017 Dell Inc. Fibre Channel for Connectrix 51


This lesson covers the considerations for connecting ISLs, ISL aggregation on Connectrix B-series and
Connectrix MDS-series switches, and virtualization options.

Copyright © 2017 Dell Inc. Fibre Channel for Connectrix 52


Before connecting an ISL between two switches, you need to check the following things:
• First, is the firmware between the switches compatible?
• Does each switch have a unique Domain ID (D_ID)? If the switch trying to join the fabric has a
duplicate domain ID, it may not be allowed to join the merged fabric.
• Which switch has priority and will become the principal switch in the fabric? We do not want the new
switch to replace the existing principal switch, so configure it as never principal. Otherwise, the
existing fabric could be compromised, and it might go down or cause hosts to lose connections.
• Are the port speeds set the same? If not, the ISL will segment the fabrics.
• Is zoning the same in each switch? If switches have different zoning information, they do not know
which copy of the zones is correct, so the fabrics will segment.
• Are we aggregating physical ISLs into a single logical ISL? If we are aggregating ISLs, they must be
defined the same in each switch. For example, if one switch defines two ISLs in the logical link, and the
other switch defines three ISLs in the logical link, the logical link will not form and the fabrics will not
merge. There are two ways to aggregate ISLs, depending on which vendor switch we are working with.
They are:
‒ In Connectrix B-series switches, the feature is called ISL Trunking.
‒ In Connectrix MDS-series switches, the feature is called Port Channels.

Note: If the switch and port firmware and configuration is not compatible between switches at each end of
the ISL, fabric segmentation will occur and the fabrics will not merge.
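
A simplified sketch of these checks is shown below; the field names are placeholders and real compatibility rules are vendor-specific. The point is that any single mismatch prevents the fabrics from merging.

def isls_will_merge(switch_a: dict, switch_b: dict) -> bool:
    return (switch_a["firmware_family"] == switch_b["firmware_family"]     # stand-in for "compatible firmware"
            and switch_a["domain_id"] != switch_b["domain_id"]             # Domain IDs must be unique
            and switch_a["port_speed"] == switch_b["port_speed"]           # mismatched speeds segment the fabric
            and switch_a["zoning_db"] == switch_b["zoning_db"]             # conflicting zoning segments the fabric
            and switch_a["isl_group_size"] == switch_b["isl_group_size"])  # aggregation defined identically on both ends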

Copyright © 2017 Dell Inc. Fibre Channel for Connectrix 53


Connectrix B-series ISL Trunking optimizes the use of bandwidth by allowing a group of ISL
links to merge into a single logical link, called an ISL Trunk. The Fibre Channel traffic is
distributed dynamically and in order across the links that make up the ISL trunk. Within the
trunk group, multiple physical ISL ports appear as a single port, thus simplifying
management and increasing reliability.

If a physical link within the trunk group fails, other links in the ISL trunk take on the load.
There is no need for fabric state change notifications, or fabric rebuilds, thus increasing
efficiency.

Up to eight ISL links can be aggregated into a single logical ISL. The only requirement is that all of the
links in a trunk connect to ports that belong to the same trunking group.

The figure shown on the right of the slide has ports 0 – 7 forming a trunk group and only the links that
connect to these ports belong to the ISL trunk. A Trunking license is required for ISL trunking, and
must be installed on each switch that participates.

Copyright © 2017 Dell Inc. Fibre Channel for Connectrix 54


Connectrix MDS-series switches can use port-channels for ISLs. Port-channels are an aggregation of
multiple physical interfaces into one logical interface. They provide higher aggregated bandwidth, load
balancing, and link redundancy. Port Channels can be connected to interfaces across switching modules.
Thus, failure of a switching module cannot bring down the port-channel link.

MDS 9000 series switches support port-channels with up to 16 ISLs per port-channel.

Copyright © 2017 Dell Inc. Fibre Channel for Connectrix 55


Connectrix B-Series and MDS-Series switches handle switch virtualization differently.

The switch virtualization feature for Connectrix B-Series switches is called the Virtual Fabrics (VF) feature.
Each virtual fabric has a Fabric ID (FID). Each virtual switch is called a logical switch. Logical switches
with the same FID can connect to form a larger fabric.

Each individual logical switch within a virtual fabric must have a unique domain ID.

Connectrix MDS-Series switches have a similar feature called Virtual SAN (VSAN). A physical switch is
divided into VSANs. Logical switches with the same VSAN ID may connect to form a single fabric.

Similar to Connectrix B-Series switches, each virtual switch within the VSAN must have a unique Domain
ID.

Copyright © 2017 Dell Inc. Fibre Channel for Connectrix 56


This module covered considerations for zoning, fabric expansion and Fibre Channel switch virtualization.

Copyright © 2017 Dell Inc. Fibre Channel for Connectrix 57


This course covered general Fibre Channel protocol topics and implementation considerations for
Connectrix-based Storage Area Networks.
This concludes the training.

Copyright © 2017 Dell Inc. Fibre Channel for Connectrix 58

