Celerra NSX Architectural Overview
Welcome to Celerra NSX Architectural Overview. The AUDIO portion of this course is supplemental to the material and is not a replacement for the student notes accompanying this course. EMC recommends downloading the Student Resource Guide from the Supporting Materials tab, and reading the notes in their entirety. These materials may not be copied without EMC's written consent. EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice. THE INFORMATION IN THIS PUBLICATION IS PROVIDED AS IS. EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. EMC2, EMC, Symmetrix, Celerra, CLARiiON, CLARalert, Connectrix, Dantz, Documentum, HighRoad, Legato, Navisphere, PowerPath, ResourcePak, SnapView/IP, SRDF, TimeFinder, VisualSAN, and where information lives are registered trademarks, and EMC Automated Networked Storage, EMC ControlCenter, EMC Developers Program, EMC OnCourse, EMC Proven, EMC Snap, Access Logix, AutoAdvice, Automated Resource Manager, AutoSwap, AVALONidm, C-Clip, Celerra Replicator, Centera, CentraStar, CLARevent, CopyCross, CopyPoint, DatabaseXtender, Direct Matrix, Direct Matrix Architecture, EDM, E-Lab, Enginuity, FarPoint, FLARE, GeoSpan, InfoMover, MirrorView, NetWin, OnAlert, OpenScale, Powerlink, PowerVolume, RepliCare, SafeLine, SAN Architect, SAN Copy, SAN Manager, SDMS, SnapSure, SnapView, StorageScope, SupportMate, SymmAPI, SymmEnabler, Symmetrix DMX, Universal Data Tone, and VisualSRM are trademarks of EMC Corporation. All other trademarks used herein are the property of their respective owners.
Celerra NSX Architectural Overview - 1
Course Objectives
Upon completion of this course, you will be able to:
- Describe the architecture of the Celerra NSX
- Identify the new hardware components of the NSX
- Describe the differences between the NSX and the NS Series
- Describe the new software needed for the Celerra NSX family
- Describe the NSX backend support
The objectives for this course are shown here. Please take a moment to read them.
The objectives for this module are shown here. Please take a moment to read them.
- 2 to 4 Blade Enclosures daisy-chained by redundant Ethernet networks
- Always ordered in a Gateway configuration
- Redundant Management Switches in each enclosure
- Dual Control Stations
- CLARiiON and Symmetrix backend arrays
The new NSX system is the most highly redundant model in the Celerra family of products. Unlike the NS Series, it is always ordered in a Gateway configuration and consists of the following:
- 4 to 8 X-Blade 60 blades housed in Blade Enclosures. Each system is required to have a minimum of one blade configured as a standby for high availability
- 2 to 4 Blade Enclosures daisy-chained together with dual, redundant Ethernet networks used to manage and monitor the system
- 2 Management Switches in each Blade Enclosure, which provide Ethernet switching and serial connections
- Two new Control Stations (for redundancy) to manage the system
- Support for both CLARiiON and Symmetrix arrays for backend storage
- The ability to upgrade from both the CNS and NS Celerra families
Note that any upgrade is a box replacement and, depending on how many Data Movers are being replaced and what the backend storage array is, can be very difficult and time-consuming. Dual redundant UPSs (Uninterruptible Power Supplies) are used to keep data flowing in case a power outage occurs.
- Two redundant Ethernet Management Networks
- No eight-port Ethernet network switch on the CS
The switch functionality has been built into the new CS used on the NSX.
The NSX is a new family of Celerra. In designing this system, more redundancy was built in to avoid single points of failure. Some of the key differences between the NSX and the older NS500/700 families are as follows:
- There is no longer a serial connection between the CS and each Blade. If a serial connection is needed by Customer Service, one is provided through the Management Module
- Like the earlier CNS models, the NSX has two independent Ethernet networks for management of the system
- All of the fibre connections in the NSX use SFP-type connectors rather than the MIAs used in the earlier NS family
- The eight-port Ethernet switch that used to reside on the CS has been replaced by redundant Management Modules. These have built-in Ethernet switches and serial connections and are located in each Blade Enclosure
The new NSX has dual UPSs to maintain the Control Stations and the blades in the event of a power loss. The UPSs maintain power to the backends, while the ACTS supply power from the UPSs to the Control Stations. Because each CS has a single power plug, the ACTS are used to route power from both UPSs in case either fails. The earlier NS families only had power protection for the backend array. The NSX family has many more Field Replaceable Units (FRUs) than earlier models. Rather than having to replace large parts of the system, smaller pieces, such as blade memory modules, can be replaced onsite at the customer's location.
Similarities between the NSX and older NS families are shown here. Starting with NAS code 5.4, the same code is used on all models of the Celerra series. All storage arrays in the CLARiiON and Symmetrix families that were supported by the NS500/600/700 are supported for the NSX. Fibre Channel requirements for switches and zoning are the same for all Celerras. All software and hardware features that were supported by earlier NS families are supported in the NSX.
The objectives for this module are shown here. Please take a moment to read them.
With the introduction of the NSX family, a new Control Station was also introduced. The Control Station monitors and communicates with the blades through the Ethernet link. It uses the Ethernet link to initiate failover. This communication is done through the internal private Ethernet network between the blades and the system management switches. The Control Station can also monitor the backend storage systems through the Ethernet links for status and to extract information. It cannot control the backend storage systems. When the NSX server contains multiple blades, different failover options can be configured. The NSX series server ships with four, six, or eight blades. A blade can have multiple standby blades configured for it. The Control Station is connected to an external SYMMOD-US1 serial modem through the COM 1 port, and optionally to a service laptop using the COM 2 port, located in the front of the Control Station.
Power
Boot Sequence LED
Status LED
HDD Act LED
HDD Fault LED
2006 EMC Corporation. All rights reserved.
The Control Station is a dedicated, Intel processor-based management computer that monitors and sends commands to the blades. The private network connects the two Control Stations (always shipped in pairs on NSX systems) to the blades through the system management switch modules. Like previous versions, it provides software installation and upgrade services, and high-availability features such as fault monitoring, fault recovery, fault reporting (CallHome), and remote diagnosis. Both Control Stations can be connected to a public or private network for remote administration. Each Control Station has a serial port that connects to an external modem so that the Control Station can call home to EMC or a service provider if a problem arises.
- eth3: Public LAN Port
- COM1: To serial modem (for Call-Home)
- eth0: Internal Network (to Mgmt. Switch-A in Enclosure 0)
- Video Port
- Gb2: Internal Network (to Mgmt. Switch-B in Enclosure 0)
- Gb1: IPMI (to eth1 of the other Control Station)
This slide displays the rear view of the Next Generation Control Station. Note the lack of a 25-pin quad serial port and spider cable.
The differences between this CS and the one used in the NS500/600/700 families are as follows:
- There is no longer a quad serial port on the rear of the CS with a spider cable connecting to each of the Blades. With the development of the dual redundant Management Switches and Ethernet networks, there is no longer a need for this type of connection
- The eight-port Ethernet switch has been removed from the CS tray because the switch functionality has been built into the CS itself
- The CS now comes standard with 2 GB of main memory
- Dual Control Stations come standard with each Celerra NSX
- If a t2cab is run on the Celerra NSX CS, it will identify which model it is
- The management of the NSX is done through the redundant Management Switches built into each Blade Enclosure
The objectives for this module are shown here. Please take a moment to read them.
- I/O expansion capable for future releases
- Supports up to 16 TB of Fibre Channel storage
For the Celerra NSX series, Data Movers will be identified as Blades. The X-Blade 60 is the newest CPU module in the Celerra line. The X-Blade 60 specifications are shown here.
NSX Blade
NSX Blade = CPU Blade + Fibre I/O Module + GbE I/O Module
Blade Motherboard
The NSX X-Blade 60 is made up of three components: the CPU board, the Fibre I/O module, and the GbE I/O module. This slide displays the face of both I/O modules. The Backend personality card (Fibre I/O Module) connects the system to the FC switches and the SAN fabric. The Frontend personality card (GbE I/O Module) connects the system to the customer's Ethernet network.
The left module, inserted into the Blade Enclosure, is the Fibre module. It has four 2 Gb/s Fibre Channel ports, labeled from left to right: BE0, BE1, AUX0, and AUX1. The AUX ports are used to connect to tape devices for backup. The BE0 and BE1 ports are used to connect the Blade to either a CLARiiON or Symmetrix array for booting and user filesystems. NOTE: All four ports must be connected through a switch. The labels are on a Mylar card under the I/O module, due to the limited space on the front of the module and the cooling requirements; the card can be pulled out for viewing.
The right module, inserted into a Blade, is the GbE I/O module. It has two fibre and six copper Ethernet ports. The fibre ports are labeled fge0 and fge1 from left to right. The copper Ethernet ports are labeled cge0-cge2 on the top row and cge3-cge5 on the bottom row. These ports are used to connect to the customer's Ethernet network for sharing and exporting filesystems. NOTE: The labels are on a Mylar card under the I/O module, due to the limited space on the front of the module and the cooling requirements; the card can be pulled out for viewing.
Fibre connections
Instead of Media Interface Adapters (MIAs) for all fibre connections, the NSX X-Blade 60 uses small form-factor pluggable (SFP) modules. SFP modules connect optical cables to special SFP sockets on the blades (BE 0, BE 1, AUX 0, AUX 1, fge0 and fge1).
Blade A
Behind Ethernet and Fibre Boards
The NSX Blade Enclosure is made up of 2 X-Blade 60 Blades, 2 Management Switches, and 2 power supplies. The Management Switch and the power supply on the right side are labeled A, and the two on the left side are labeled B. The top blade is always the higher-numbered server: if this were the first enclosure, the top blade would be server_3 and the bottom blade would be server_2.
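The numbering convention above can be sketched as a small function. This is illustrative only (the function name and signature are hypothetical, not an EMC utility): blade names start at server_2, and within each enclosure the bottom blade takes the lower number.

```python
# Illustrative sketch of the NSX blade-naming convention described above.
# Assumption: enclosures are indexed from 0 along the chain, each holding
# two blades; the bottom blade gets the lower server number.

def blade_server_name(enclosure_index: int, top_blade: bool) -> str:
    """Return the Data Mover (server) name for a blade's physical position."""
    base = 2 + 2 * enclosure_index          # bottom blade of this enclosure
    return f"server_{base + (1 if top_blade else 0)}"

# First enclosure: bottom blade -> server_2, top blade -> server_3
print(blade_server_name(0, top_blade=False))  # server_2
print(blade_server_name(0, top_blade=True))   # server_3
```

With four enclosures this yields server_2 through server_9, matching the eight-blade maximum configuration.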
The NSX Blade Enclosure is 4U (7 inches) high. Besides the standard high-availability features explained previously (two redundant X-Blade 60s, two system management switch modules, and two redundant power supplies), the enclosure is also supplied with four system cooling modules (four blowers total for N+1 operation). The blower modules draw ambient room air through the chassis's front bezel and force it out through the back of the enclosure. The two power supplies have separate 12-volt output regulators, one for each blade. Also, an adjustable 9-volt/12-volt line supplies energy to the blowers, allowing them to speed up if one of them fails.
Components Overview
Upon completion of this module, you will be able to:
- Identify hardware components
- Understand new software components and changes
The objectives for this module are shown here. Please take a moment to read them.
- Two Control Stations
- Two uninterruptible power supplies (UPS)
- Two automatic transfer switches
- Two Management Switches per Enclosure
The components are shown here. The Celerra NSX series servers connect to either network attached storage (NAS) or to a combination of storage area network (SAN) and NAS. The NSX gateway server cabinet has four, six, or eight blades (two, three, or four blade enclosures, each containing two blades). With upgrade kits, you can convert a four-blade model to a six- or eight-blade model, or a six-blade model to an eight-blade model. The NSX system has two Control Stations; the second Control Station provides high availability. The Celerra NSX ships pre-configured and fully cabled from the factory, installed in an EMC 40U rackmount cabinet.
Management Switch
- Two Management Switches per Enclosure for redundancy
- Each switch consists of:
  - 5-port Ethernet switch (Broadcom 5325M)
  - Motorola ColdFire micro-controller
  - Downloadable firmware (upgraded during NAS install/upgrade)
  - Connection to I2C buses
  - Interconnection to peer module through enclosure backplane
  - LED indicators for Enclosure ID
  - USB serial ports for Blade console and debug access
With the introduction of the NSX family comes a new Management Switch. These switches replace the 8-port switches that were mounted on the rear of the CS tray in older NS families. The features of each module are shown here. These switches also provide peer-power control and box monitoring, which was previously provided by the RS4XX bus in the CNS cabinet and the serial cables in the older NS Celerras.
Management Switch
Callouts:
- Port 0 (Uplink)
- Port 3 (Downlink)
- Fault LED
- Power LED
- Switch labels: A, B
This is a picture of the new Management Switch. The switches on the right side of the enclosure, for the primary management network, are labeled A. Those on the left side, for the secondary network, are labeled B.
Primary Network
Secondary Network
This diagram shows the Ethernet connections for an NSX with 4 Enclosures and no UPS. Facing the rear of the rack, the primary Ethernet management network is on the right and the secondary is on the left.
This shows the NSX Celerra with the UPSs installed. The Ethernet connection on each UPS is plugged into port 4 on the Management Switch of the second Blade Enclosure in the chain. UPS 0 is connected to the Primary network and UPS 1 to the Secondary network.
If the UPSs have been installed, the power strips are not used. The main power cords of the UPSs are plugged into separate power sources for redundancy. All power supplies on the right side are plugged into UPS 1, and all power supplies on the left side are cabled into UPS 0. The Control Stations get their power from ACTS (AC Transfer Switches) 0 and 1. Because the Control Stations have only one power input each, the ACTS are fed from both of the UPSs and then supply the CSs with power. In this way, if either of the power circuits or UPSs fails, the other one will support the Control Stations. Unlike older NS Celerras, the NSX family will continue to serve data to the customer's network when power is lost.
This illustration displays the power distribution schematic for the NSX with the UPS, and part numbers for the UPS assemblies. Please note that this view is from the front of the system so Power Supply A is on the left and B is on the right.
The four blower modules located in front of each blade provide front-to-rear cooling for the blades and the power supplies. N+1 cooling means that one fan can fail and the system will continue to operate. Since each of the fan packs has two fans, removing a fan pack will cause a graceful shutdown if it is not replaced within the two-minute time period. The blowers operate at a low speed during normal operation. If a blower fails, the remaining blowers speed up to maintain airflow. Latches on the blower module make it easy to replace without tools. Each blower module connects directly to the midplane, is powered by the enclosure's power supplies, and has a status LED indicator.
With the release of the NSX, new software had to be developed to manage the new system. The first piece is new firmware for the Management Switches; this firmware takes care of the items shown here. Another new piece of software is the enclosure-ID utility and the T2net library. Information about these is displayed here.
NAS Installation
- Setting up Enclosure IDs during install
- DHCP server always up
- Optional: creation of system LUNs can be automated
- Optional: automated zoning and Storage Group configuration
- Faster NAS installation
With the introduction of the NSX family and NAS v5.4, changes have been made in system monitoring, NAS installation, and how the system is configured. One new concept is the Enclosure ID. In older systems, the ID was based on the positioning of the serial cables at installation time and was then stored in NVRAM on the enclosure. In the NSX system, a new command run by the setup script probes the management Ethernet network and assigns Enclosure IDs based on the order of the enclosures on the Ethernet network chain. This ID is then displayed by the LEDs on the Management Switch in that enclosure. Changes that have occurred in the NAS installation are shown here.
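The ID-assignment logic can be pictured as follows. This is a rough illustration only: the actual probe command and setup script are EMC-internal, and the function and enclosure identifiers below are invented for the sketch.

```python
# Hypothetical sketch: assign Enclosure IDs in the order the enclosures
# appear on the internal management Ethernet chain, as described above.
# IDs are zero-based, matching "Enclosure 0" in the cabling diagrams.

def assign_enclosure_ids(chain_order):
    """chain_order: enclosure identifiers as discovered along the chain,
    starting from the enclosure nearest the Control Station."""
    return {enclosure: idx for idx, enclosure in enumerate(chain_order)}

# Invented identifiers standing in for whatever the real probe returns.
discovered = ["ENCL-AAA", "ENCL-BBB", "ENCL-CCC", "ENCL-DDD"]
ids = assign_enclosure_ids(discovered)
print(ids["ENCL-AAA"])  # 0 -- first enclosure on the chain gets ID 0
```

The key point is that the ID is derived from position on the network chain at setup time, not from fixed serial cabling as in older systems.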
The objectives for this module are shown here. Please take a moment to read them.
- One NSX can connect to up to 4 storage arrays
- Up to 4 NSXs can connect to one storage array
- Small form-factor pluggable (SFP) modules instead of MIAs
The Celerra NSX connects to Symmetrix or CLARiiON arrays. A mixed CLARiiON and Symmetrix backend is also supported. The NSX is always configured as a Fabric-connected gateway system. It is cabled to a Fibre Channel switch using fibre-optic cables and small form-factor pluggable (SFP) optical modules, rather than the MIAs used in the earlier NS family. You can use two switches for high availability. One NSX can attach to up to four storage arrays. EMC best practice says that if attaching to both Symmetrix and CLARiiON, place the Celerra Control Volumes on the Symmetrix. Up to four NSXs can be attached to a single storage array. EMC recommends a CLARiiON maximum of two.
[Diagram: two Blades cabled through a single switch; callouts show Symmetrix FA ports AA, AB, and BB, and Blade ports BE-0 and BE-1]
The slide shows the fibre layout for a two-Blade, single-switch configuration. Note that the two connections to the Symmetrix are to different Directors and are on ports on different buses for maximum load balancing.
[Diagram: two Blades cabled through dual switches (Switch B, Blade 3); callouts show Symmetrix FA ports AA, AB, and BB, and Blade ports BE-0 and BE-1]
In a two-Blade, dual-switch configuration, we are still using the same FA configuration, but all the Blades' BE-0 ports go to one switch and the BE-1 ports go to the other switch.
Course Summary
Key points covered in this course:
- The new features of the Celerra NSX
- The differences between the Celerra NSX and the NS Series
- Description of the physical hardware of the Celerra NSX
- Discussion of the new software needed for the Celerra NSX family
- Description of the NSX backend support
These are the key points covered in this training. Please take a moment to review them. This concludes the training. In order to receive credit for this course, please proceed to the Course Completion slide to take the assessment.