Brush Up On Storage Networking Protocol Fundamentals - Part II

James Long - September 22, 2006

Miss Part I? No problem, it's right here

Mainframe Storage Networking: ESCON and FICON


Bus-and-Tag parallel channel architecture was once the primary storage interface used by IBM
mainframes. As mainframes evolved, the ESCON architecture broadly replaced Bus-and-Tag. ESCON
is now being replaced by the FICON architecture to avail mainframes of the cost/performance
benefits enabled by mainstream adoption of Fibre Channel.

ESCON
ESCON is a proprietary IBM technology introduced in 1990 to overcome the limitations of the Bus-
and-Tag parallel channel architecture. ESCON converters were made available to preserve Bus-and-Tag investments by bridging between the two architectures. Today, very little Bus-and-Tag remains
in the market. Whereas Bus-and-Tag employs copper cabling and parallel transmission, the ESCON
architecture employs optical cabling and serial transmission. The ESCON architecture is roughly
equivalent to Layers 1 and 2 of the Open Systems Interconnection (OSI) reference model published
by the International Organization for Standardization. We discuss the OSI reference model in detail
in Chapter 2, "OSI Reference Model Versus Other Network Models."

In addition to transporting channel protocol frames (the single-byte command code set, or SBCCS), ESCON defines a new frame type at
the link level for controlling and maintaining the transmission facilities. ESCON operates as half-
duplex communication in a point-to-point topology and supports the optional use of switches (called
ESCON directors) to create a mesh of point-to-point connections. ESCON is connection-oriented,
meaning that the host channel adapter and CU must exchange link-level frames to reserve resources
for a logical connection before exchanging channel protocol frames. The ESCON director circuit-
switches all physical connections resulting in a 1:1 ratio of active logical connections to active
physical connections. Consequently, each channel adapter supports only one active logical
connection at a time. An ESCON director is limited to 256 physical ports (254 usable). No more than
two ESCON directors may separate a host channel adapter from a CU. (Daisy-chaining switches in a
linear manner is called cascading.)

ESCON originally provided for multi-mode and single-mode host channel adapters. However, the
majority of installations use multi-mode channel adapters. The maximum supported distance per link
is three kilometers (km) using multi-mode fiber (MMF) or 20 km using single-mode fiber (SMF). By
using two ESCON directors and SMF, the distance from host channel adapter to CU theoretically
could be extended up to 60 km. However, 43 km was the maximum distance ever officially supported
between a host and a CU. Support for SMF interfaces in hosts and CUs was terminated years ago, so
SMF is used only between directors today. SMF between directors is called extended distance
facility (XDF). It yields a maximum distance of 26 km from host to CU. ESCON remote channel
extenders also can be used (in place of directors) to extend the channel distance. In practice, the
lack of SMF cable plants during the heyday of ESCON limited most installations to 9 km or less.

ESCON transmissions are encoded via 8b/10b signaling. The ESCON signaling rate is 200 megabits
per second (Mbps), which provides a maximum of 20 megabytes per second (MBps) link-level
throughput at distances up to 8 km. This equates to 17 MBps of data throughput. Throughput
decreases as distance increases beyond 8 km (a condition known as droop). This droop effect results
from lack of buffer capabilities and the chatty nature of SBCCS. FICON reduces droop by
introducing buffers and reducing SBCCS chatter. ESCON is losing market share to FICON and likely
will be deprecated by IBM in the not-too-distant future.
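
As a sanity check on these figures, the following sketch (illustrative Python, not from the original text) derives link-level throughput from a signaling rate under 8b/10b encoding, in which every 8 data bits are transmitted as a 10-bit symbol.

# Illustrative sketch: derive link-level throughput from a signaling
# rate under 8b/10b encoding (every 8 data bits become a 10-bit symbol,
# so 20 percent of the line rate is encoding overhead).

def link_level_throughput_mbps(signaling_rate_mbps: float) -> float:
    """Return payload throughput in megabytes per second (MBps)."""
    usable_bits_per_sec = signaling_rate_mbps * 1_000_000 * 8 / 10
    return usable_bits_per_sec / 8 / 1_000_000  # bits -> bytes -> MB

print(link_level_throughput_mbps(200))     # ESCON: 20.0 MBps
print(link_level_throughput_mbps(1062.5))  # 1 Gbps Fibre Channel: 106.25 MBps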

FICON
FICON was introduced in 1998 to overcome the limitations of the ESCON architecture. FICON is the
term given to the pseudo-proprietary IBM SBCCS operating on a standard Fibre Channel
infrastructure. The version of SBCCS used in FICON is less chatty than the version used in ESCON.
The ANSI FC-SB specification series maps the newer version of SBCCS to Fibre Channel. All aspects
of FICON infrastructure are based on ANSI FC standards. FICON operates in two modes: bridged
(also known as FCV mode) and native (also known as FC mode). Some FICON hardware supports
SCSI (instead of SBCCS) on Fibre Channel. This is sometimes called FICON Fibre Channel Protocol
(FCP) mode, but there is nothing about it that warrants use of the term FICON. FICON FCP mode is
just mainstream open-systems storage networking applied to mainframes.

In bridged mode, hosts use FICON channel adapters to connect to ESCON directors. A FICON
bridge adapter is installed in an ESCON director to facilitate communication. A FICON bridge
adapter time-division multiplexes up to eight ESCON signals onto a single FICON signal.
Investments in late-model ESCON directors and CUs are preserved allowing a phased migration
path to FICON over time. Early ESCON directors do not support the FICON bridge adapter. The
ESCON SBCCS is used for storage I/O operations in bridged mode.

In native mode, FICON uses a modified version of the SBCCS and replaces the ESCON transmission
facilities with the Fibre Channel transmission facilities. FICON native mode resembles ESCON in
that it:

● Employs serial transmission on optical cabling
● Is roughly equivalent to Layers 1 and 2 of the OSI reference model
● Is connection-oriented (See Chapter 5, "OSI Physical and Data Link Layers," for a detailed
discussion of connection-oriented communication)
● Transports channel protocol and link-level frames
● Supports the point-to-point topology
● Supports the optional use of switches called FICON directors to create a mesh of point-to-point
connections (FICON directors are actually just Fibre Channel directors that support transport of
SBCCS and management via CUP.)

Unlike ESCON, FICON native mode operates in full-duplex mode. Unlike ESCON directors, Fibre
Channel switches employ packet switching to create connections between hosts and CUs. FICON
native mode takes advantage of the packet-switching nature of Fibre Channel to allow up to 32
simultaneously active logical connections per physical channel adapter. Like ESCON, FICON is
limited to 256 ports per director, but virtual fabrics can extend this scale limitation. FICON native
mode retains the ESCON limit of two directors cascaded between a host channel adapter and a CU,
though some mainframes support only a single intermediate director.

Fibre Channel transmissions are encoded via 8b/10b signaling. Operating at 1.0625 gigabits per
second (Gbps), the maximum supported distance per Fibre Channel link is 500 meters using 50
micron MMF or 10 km using SMF, and the maximum link level throughput is 106.25 MBps.
Operating at 2.125 Gbps, the maximum supported distance per Fibre Channel link is 300 meters
using 50 micron MMF or 10 km using SMF, and the maximum link-level throughput is 212.5 MBps.
By optionally using IBM's mode-conditioning patch (MCP) cable, the maximum distance using 50 micron MMF is extended to 550 meters operating at 1.0625 Gbps. The MCP cable transparently mates the transmit SMF strand to an MMF strand. Both ends of the link must use an MCP cable. This provides a migration path from multi-mode adapters to single-mode adapters prior to cable plant conversion from MMF to SMF. The MCP cable is not currently supported operating at 2.125 Gbps. By using an SMF cable plant end-to-end and two FICON directors, the maximum distance from host channel adapter to CU can be extended up to 30 km. Link-level buffering in FICON equipment (not found in ESCON equipment) enables maximum throughput over long distances. However, throughput decreases as distance increases unless additional link-level buffering is implemented. Increasing the end-to-end device-level buffering is also required as distance increases. IBM FICON hosts and CUs currently support sufficient buffering to enable operation at distances up to 100 km (assuming that intermediate link-level buffering is also sufficient). The use of wavelength division multiplexing (WDM) equipment or FICON optical repeaters is required to extend the end-to-end channel to 100 km because no more than two Fibre Channel directors may be cascaded.
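
The interplay of distance, link rate, and buffering can be approximated with simple arithmetic. The sketch below is illustrative Python, not from the original text; it assumes full-size Fibre Channel frames of roughly 2112 bytes and about 5 microseconds of one-way propagation delay per kilometer of fiber, and estimates how many link-level buffers (buffer-to-buffer credits) are needed to keep a link full at a given distance.

# Illustrative sketch: estimate the link-level buffers (buffer-to-buffer
# credits) needed to sustain full throughput over a long link.
# Assumptions (not from the book): ~2112-byte full-size FC frames and
# ~5 microseconds of one-way propagation delay per km of fiber.

FRAME_BYTES = 2112          # assumed full-size Fibre Channel frame
PROP_DELAY_US_PER_KM = 5.0  # approximate light propagation in fiber

def credits_needed(distance_km: float, throughput_mbps: float) -> int:
    """Credits required so frames in flight cover the round-trip delay."""
    round_trip_us = 2 * distance_km * PROP_DELAY_US_PER_KM
    frame_serialization_us = FRAME_BYTES / throughput_mbps  # MBps == bytes/us
    return max(1, round(round_trip_us / frame_serialization_us))

# At 2.125 Gbps (212.5 MBps link level), a 100 km link needs on the
# order of 100 credits to avoid throughput droop.
print(credits_needed(100, 212.5))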

The Fibre Channel Protocol for SCSI (FCP) is a mapping for SCSI to be transported on Fibre
Channel. In FICON FCP mode, Linux-based mainframes use SCSI in place of SBCCS for storage I/O
operations. As mentioned previously, the term FICON really has no meaning in the context of FCP
mode. All transmission parameters of FICON FCP mode are comparable to FICON native mode
because both use Fibre Channel infrastructure. FICON FCP mode is not supported on OS/390
mainframes.

File Server Protocol Review: CIFS, NFS, and DAFS


When a client accesses a file on a file server or NAS filer, a file-level protocol is used. The most
popular Windows and UNIX file-level protocols are CIFS and NFS, respectively. Even though the
names of these protocols include file system, these are network protocols used to access file
systems. These protocols define the messaging semantics and syntax that enable file-system interaction, such as open, read, and write operations, across a network. Two other popular file-level protocols are FTP and HTTP. However, these protocols are limited in functionality when
compared to CIFS and NFS. FTP and HTTP do not support file locking, client notification of state
change, or other advanced features supported by CIFS and NFS. FTP and HTTP can be used only to
transfer files (simple read and write operations). As such, FTP and HTTP are not considered
enabling technologies for modern storage networks. A new file-level protocol known as direct access
file system (DAFS) recently appeared in the market. It promises to improve application performance
while lowering host CPU utilization. DAFS adoption has been slowed by the requirement to modify
application code. To date, DAFS adoption has been led by database application vendors.

CIFS
In Microsoft Windows environments, clients historically requested files from servers via the Server
Message Block (SMB) protocol. In 1984, IBM published the basis for the SMB protocol. Based on
IBM's work, Microsoft and Intel subsequently published the OpenNET File Sharing Protocol. As the
protocol evolved, Intel withdrew from the effort, and the protocol was renamed the SMB File
Sharing Protocol; however, SMB provides more than just file-sharing services. SMB relies upon the
services of the Network Basic Input Output System (NetBIOS) rather than Windows Sockets
(Winsock) services. NetBIOS on IP networks was standardized by the IETF via request for comment
(RFC) 1001 and RFC 1002 in 1987. Those RFCs enabled the use of SMB over IP networks and
greatly expanded the market for SMB as IP networks began to rapidly proliferate in the 1990s. SMB
remained proprietary until 1992, when the X/Open committee (now known as the Open Group)
standardized SMB via the common application environment (CAE) specification (document 209)
enabling interoperability with UNIX computers. Even though SMB is supported on various UNIX and
Linux operating systems via the open-source software package known as Samba, it is used
predominantly by Windows clients to gain access to data on UNIX and Linux servers. Even with the
X/Open standardization effort, SMB can be considered proprietary because Microsoft has continued
developing the protocol independent of the Open Group's efforts. SMB eventually evolved enough
for Microsoft to rename it again. SMB's new name is the Common Internet File System (CIFS) file-sharing protocol (commonly called simply CIFS).

Microsoft originally published the CIFS specification in 1996. With the release of Windows 2000,
CIFS replaced SMB in Microsoft operating systems. CIFS typically operates on NetBIOS, but CIFS
also can operate directly on TCP. CIFS is proprietary to the extent that Microsoft retains all rights to
the CIFS specification. CIFS is open to the extent that Microsoft has published the specification, and
permits other for-profit companies to implement CIFS without paying royalties to Microsoft. This
royalty-free licensing agreement allows NAS vendors to implement CIFS economically in their own
products. CIFS integration enables NAS devices to serve Windows clients without requiring new
client software. Unfortunately, Microsoft has not extended its royalty-free CIFS license to the open-
source community. To the contrary, open-source implementations are strictly prohibited. This
somewhat negates the status of CIFS as an open protocol.

Note that open should not be confused with standard. Heterogeneous NAS implementations of CIFS,
combined with Microsoft's claims that CIFS is a standard protocol, have led to confusion about the
true status of the CIFS specification. Microsoft submitted CIFS to the IETF in 1997 and again in
1998, but the CIFS specification was never published as an RFC (not even as informational). The
SNIA formed a CIFS documentation working group in 2001 to ensure interoperability among
heterogeneous vendor implementations. However, the working group's charter does not include
standardization efforts. The working group published a CIFS technical reference, which documents
existing CIFS implementations. The SNIA CIFS technical reference serves an equivalent function to
an IETF informational RFC and is not a standard. Even though CIFS is not a de jure standard, it
clearly is a de facto standard by virtue of its ubiquity in the marketplace.

A CIFS server makes a local file system, or some portion of a local file system, available to clients by
sharing it. A client accesses a remote file system by mapping a drive letter to the share or by
browsing to the share. When browsing with the Windows Explorer application, a uniform naming convention (UNC) address specifies the location of the share. The UNC address includes the name of the server and the name of the share; for example, \\FS01\engineering addresses a hypothetical share named engineering on a server named FS01. CIFS supports file and folder change notification, file and record locking, read-ahead and write-behind caching, and many other functions.

NOTE: The phrase "Universal Naming Convention" is interchangeable with "Uniform Naming
Convention" according to Microsoft's website.

NFS
Sun Microsystems created NFS in 1984 to allow UNIX operating systems to share files. Sun
immediately made NFS available to the computer industry at large via a royalty-free license. In
1986, Sun introduced PC-NFS to extend NFS functionality to PC operating systems. The IETF first
published the NFS v2 specification in 1989 as an informational RFC (1094). NFS v3 was published
via an informational RFC (1813) in 1995. Both NFS v2 and NFS v3 were widely regarded as
standards even though NFS was not published via a standards track RFC until 2000, when NFS v4
was introduced in RFC 3010. RFC 3530 is the latest specification of NFS v4, which appears to be
gaining momentum in the marketplace. NFS v4 improves upon earlier NFS versions in several
different areas including security, caching, locking, and message communication efficiency. Even
though NFS is available for PC operating systems, it always has been and continues to be most
widely used by UNIX and Linux operating systems.

An NFS server makes a local file system, or some portion of a local file system, available to clients by
exporting it. A client accesses a remote file system by mounting it into the local file system at a
client-specified mount point. All versions of NFS employ the remote-procedure call (RPC) protocol
and a data abstraction mechanism known as external data representation (XDR). Both the RPC
interface and the XDR originally were developed by Sun and later published as standards-track
RFCs.
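
To make the XDR abstraction concrete, the sketch below packs a few values into XDR's canonical big-endian wire format using Python's xdrlib module (present in the standard library of older Python releases; removed in Python 3.13). The field values are hypothetical; this is the same marshaling mechanism NFS uses for RPC arguments.

# Illustrative sketch of XDR encoding with Python's xdrlib module
# (standard library in older Python versions; removed in 3.13).
# NFS marshals RPC call arguments into this canonical big-endian format.
import xdrlib

packer = xdrlib.Packer()
packer.pack_string(b"/export/home")  # a hypothetical path argument
packer.pack_uint(3)                  # a hypothetical numeric field
wire_bytes = packer.get_buffer()     # bytes as they would cross the network

unpacker = xdrlib.Unpacker(wire_bytes)
print(unpacker.unpack_string())  # b'/export/home'
print(unpacker.unpack_uint())    # 3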

DAFS
DAFS partially derives from NFS v4. The DAFS protocol was created by a computer industry
consortium known as the DAFS Collaborative and first published in 2001. The DAFS protocol was
submitted to the IETF in 2001 but was never published as an RFC. The DAFS protocol is not meant
to be used in wide-area networks. DAFS is designed to optimize shared file access in low-latency
environments such as computer clusters. To accomplish this goal, the DAFS protocol employs
remote direct memory access (RDMA). RDMA allows an application running on one host to access
memory directly in another host with minimal consumption of operating system resources in either
host. The DAFS protocol can be implemented in user mode or kernel mode, but kernel mode negates
some of the benefits of RDMA.

RDMA is made possible by a class of high-speed, low-latency, high-reliability interconnect technologies. These technologies are referred to as direct access transports (DAT), and the most
popular are the virtual interface (VI) architecture, the Sockets direct protocol (SDP), iSCSI
extensions for RDMA (iSER), the Datamover architecture for iSCSI (DA), and the InfiniBand (IB)
architecture. RDMA requires modification of applications that were written to use traditional
network file system protocols such as CIFS and NFS. For this reason, the DAFS Collaborative
published a new application programming interface (API) in 2001. The DAFS API simplifies
application modifications by hiding the complexities of the DAFS protocol. The SNIA formed the
DAFS Implementers' Forum in 2001 to facilitate the development of interoperable products based on
the DAFS protocol.

Another computer industry consortium known as the Direct Access Transport (DAT) Collaborative
developed two APIs in 2002. One API is used by kernel mode processes, and the other is used by
user mode processes. These APIs provide a consistent interface to DAT services regardless of the
underlying DAT. The DAFS protocol can use either of these APIs to avoid grappling with DAT-
specific APIs.

Backup Protocols: NDMP and EXTENDED COPY


Of the many data backup mechanisms, two are of particular interest: the Network Data Management Protocol (NDMP) and the SCSI-3 EXTENDED COPY command. These are of interest because each is designed specifically for data backup in network environments, and both are standardized.

NDMP
NDMP is a standard protocol for network-based backup of file servers. There is some confusion
about this, as some people believe NDMP is intended strictly for NAS devices. This is true only to the
extent that NAS devices are, in fact, highly specialized file servers. But there is nothing about NDMP
that limits its use to NAS devices. Network Appliance is the leading vendor in the NAS market.
Because Network Appliance was one of two companies responsible for the creation of NDMP, its
proprietary NAS filer operating system has supported NDMP since before it became a standard. This
has fueled the misconception that NDMP is designed specifically for NAS devices.

The purpose of NDMP is to provide a common interface for backup applications. This allows backup
software vendors to concentrate on their core competencies instead of wasting development
resources on the never-ending task of agent software maintenance. File server and NAS filer
operating systems that implement NDMP can be backed up using third-party software. The third-party backup software vendor does not need to explicitly support those operating systems with custom agent software. This makes NDMP an important aspect of heterogeneous data backup.

NDMP separates control traffic from data traffic, which allows centralized control. A central console
initiates and controls backup and restore operations by signaling to servers and filers. The source
host then dumps data to a locally attached tape drive or to another NDMP-enabled host with an
attached tape drive. Control traffic flows between the console and the source/destination hosts. Data
traffic flows within a host from disk drive to tape drive or between the source host and destination
host to which the tape drive is attached. For large-scale environments, centralized backup and
restore operations are easier and more cost-effective to plan, implement, and operate than
distributed backup solutions. Figure 7 shows the NDMP control and data traffic flows.

Figure 7. NDMP Communication Model
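
The control/data separation that Figure 7 depicts can be summarized in pseudocode. The sketch below is purely conceptual Python (all helper names are hypothetical; real NDMP messages are XDR-encoded over TCP and are not modeled here): the console holds only control connections, while bulk data flows directly between the source host and the tape host.

# Conceptual sketch of the NDMP model (hypothetical helper names; actual
# NDMP message formats are XDR-encoded and are not reproduced here).

def backup(console, source_host, tape_host, path):
    # Control traffic: the console talks to both hosts.
    data_service = console.connect_control(source_host)  # hypothetical
    tape_service = console.connect_control(tape_host)    # hypothetical

    # The console instructs the two services to connect to each other;
    # data traffic then flows host-to-host, bypassing the console.
    endpoint = tape_service.listen_for_data()             # hypothetical
    data_service.start_backup(path, send_to=endpoint)     # hypothetical

    # The console monitors progress over the control connections only.
    console.wait_for_completion(data_service, tape_service)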

SCSI-3 EXTENDED COPY


EXTENDED COPY was originally developed by the SNIA as Third Party Copy (TPC or 3PC) and later
standardized by the ANSI T10 subcommittee in revision 2 of the SCSI-3 primary commands (SPC-2)
specification. EXTENDED COPY is meant to further the LAN-free backup model by removing servers
from the data path. As with traditional LAN-free backup solutions, control traffic between the media
server and the host to be backed up traverses the LAN, and data traffic between the media server
and storage devices traverses the SAN. The difference is that EXTENDED COPY allows a SAN switch
or other SAN attached device to manage the movement of data from source storage device to
destination storage device, which removes the media server from the data path. A device that
implements EXTENDED COPY is called a data mover. The most efficient placement of data mover
functionality is in a SAN switch or storage array controller. Figure 8 shows a typical data flow in a
SCSI-3 EXTENDED COPY enabled FC-SAN.

Figure 8. SCSI-3 EXTENDED COPY Data Flow
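
The data-mover model lends itself to a simple mental picture: the media server hands the data mover a list of sources and destinations, and the data mover performs the block movement itself. The following Python sketch is a conceptual illustration only; the field names are hypothetical, and the actual SPC-2 parameter list is a packed binary format with defined descriptor type codes that is not reproduced here.

# Conceptual sketch of an EXTENDED COPY request (hypothetical field
# names; the real SPC-2 parameter list is a packed binary structure).
from dataclasses import dataclass

@dataclass
class TargetDescriptor:   # identifies a source or destination device
    device_id: str        # e.g., an FC port identifier (hypothetical form)

@dataclass
class SegmentDescriptor:  # one unit of copy work for the data mover
    src: int              # index into the target descriptor list
    dst: int
    blocks: int           # number of blocks to move

# The media server builds the request; the data mover (e.g., a SAN
# switch) moves the blocks disk-to-tape with no server in the data path.
targets = [TargetDescriptor("disk-array-port"), TargetDescriptor("tape-drive-port")]
segments = [SegmentDescriptor(src=0, dst=1, blocks=4096)]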

About the Author


James Long is a storage networking systems engineer in the field sales organization at Cisco Systems, Inc. James holds numerous technical certifications from Cisco, Microsoft, Novell, the
Storage Networking Industry Association (SNIA), and the Computing Technology Industry
Association (CompTIA).

To contact the author, please email: reviews@ciscopress.com and use Storage Networking Protocol
Fundamentals/post question as the subject line.

Title: Storage Networking Protocol Fundamentals. ISBN: 1-58705-160-5 Author: James Long.
Chapter 1: Overview of Storage Networking. Published by Cisco Press

Reproduced from the book Storage Networking Protocol Fundamentals. Copyright 2006, Cisco
Systems, Inc. Reproduced by permission of Pearson Education, Inc., 800 East 96th Street,
Indianapolis, IN 46240. Written permission from Pearson Education, Inc. is required for all other
uses.

*Visit Cisco Press for a detailed description and to learn how to purchase this title.
