Architecture and hardware overview

© Copyright IBM Corporation 2011


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
Unit objectives
After completing this unit, you should be able to:
• Discuss the hardware and architecture of the DS8000
• Use virtualization terminology describing the configuration of
the DS8000 subsystem
• Describe the physical hardware components and resources
• Describe the models and features provided by each model
• Describe the types of disk arrays that can be configured for
a DS8000 subsystem
• Describe the differences between the DS8100/DS8300 and
the DS8700
• Describe the differences between the DS8700 and the
DS8800

Topic 1: DS8000 highlights

DS8000 overview (1 of 3)
• Processor family: POWER5+ (DS8000 Turbo)
– DS8100 Model 931
– DS8300 Model 932, and Model 9B2 for LPAR
• Processor family: POWER6 (DS8700)
– DS8700 Model 941 and Model 94E (the 941 can be either a 2-way or
a 4-way)
• Processor family: POWER6+ (DS8800)
– DS8800 Model 951 and Model 95E (the 951 can be either a 2-way or
a 4-way)
• Significant extensions to enable scalability
– 65,000 logical volumes (FB, CKD, or mixed)
– Expanded volume size, dynamic volume creation or deletion

DS8000 enterprise disk system family

• 2004: DS8000 (POWER5)
• 2006: DS8000 Turbo (POWER5+)
• 2009: DS8700 (POWER6)
• 2010: DS8800 (POWER6+)

Binary compatibility is maintained across the family.

DS8000 overview (2 of 3)

• I/O adapters
– DS8100, DS8300, and DS8700:
• FCP/FICON host adapter (4 ports, 2 or 4 Gbps)
– On DS8700 4 Gbps only
• ESCON host adapter (2 ports, 18 MBps)
– On DS8100 and DS8300 only
• FC-AL device adapter (4 ports, 2 Gbps)

– DS8800:
• FCP/FICON host adapter (4 or 8 ports, 4 or 8 Gbps)
• FC-AL device adapter (4 ports, 8 Gbps)

DS8000 overview (3 of 3)

• Disk drives
– DS8100 and DS8300: FC-AL disks
• 73 GB and 146 GB (SSD disks)
• 146 GB, 300 GB, and 450 GB (FC disk drive at 15K RPM)
• 1 TB (SATA disk drive at 7200 RPM)

– DS8700: FC-AL disks


• 73 GB, 146 GB, and 600 GB (SSD disks)
• 146 GB, 300 GB, 450 GB, and 600 GB (FC disk drive at 15K RPM)
• 2 TB (SATA disk drive at 7200 RPM)

– DS8800: SAS-2 6 Gbps disks


• 300 GB (SSD disks)
• 146 GB, 450 GB, and 600 GB (15K RPM)
DS8000 series models 2107
• DS8000 models feature:
– High performance
– High-capacity series of disk storage
– Design supporting continuous operations
• Redundancy
• Hot replacement/updates
– IBM POWER server technology
• Integrated with the IBM Virtualization Engine technology

• DS8000 models consist of:


– Storage unit
– One or two (recommended) Management Consoles (MC)

• Graphical user interface (GUI) or command line interface (CLI) allows:


– Performing logical configurations and Copy Services management functions

• For high availability, hardware components are redundant

DS8000 code releases

[Timeline chart: DS8000 code releases 1.0 (2004), 2.0 (2006), 2.4 (10-2006), 3.0 (2007), 3.1 (2-2008), 4.0 (5-2008), 4.1 (10-2008), 4.2 (2-2009), 4.3 (7-2009), 5.0 (10-2009), 5.1 (4-2010), and 6.0 (10-2010). Features introduced across these releases include: 255 LCUs supported, RAID 5/RAID 10, RMC/zGM/PTC/PAV, 64K logical volumes, HMC CIM agent, 2 Gb FCP/FICON, 73/146/300 GB disk drives; Turbo models, 500 GB FATA, 4 Gb FCP/FICON, 242x machine types, synergy items; SSPC support, storage pool striping, FlashCopy space efficient, dynamic volume expansion, HyperPAV, third and fourth expansion frame, intermix support; solid state drives, 1 TB SATA, variable LPAR, intelligent write cache, IPv6, full disk encryption, extended address volumes, zGM incremental recovery, remote pair FC/IC (4.2.5); Thin Provisioning, Quick Init, zHPF and zHPF multi-track support, Multi-GM support, DS8700; Easy Tier, 600 GB 15K, 2 TB SATA, DS8800.]
DS8000: R4.0 code release
New features increase DS8000 flexibility and data protection:
• Variable LPAR
– Provides the ability to run dual storage images
• Where one image has more processor and cache resources than the other, for increased flexibility
• RAID 6 (dual parity)
– Allows for additional fault tolerance by using a second independent distributed
parity scheme
• New disk
– 450 GB 15,000 rpm disks supported
• IPv6 support
– The DS8000 has been certified as meeting the requirements of the IPv6 Ready
Logo program
• Indicating its implementation of IPv6 mandatory core protocols
• Extended address volume
– Extends the addressing capability of System z environments
– Volumes can scale up to approximately 223 GB (262,668 cylinders)
• This capability can help relieve address constraints to support large storage capacity
needs
– These volumes are supported by z/OS 1.10 and later versions
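As a sanity check on the "approximately 223 GB" figure, the cylinder arithmetic can be sketched in a few lines. This assumes standard 3390 geometry (15 tracks per cylinder, 56,664 bytes per track); those constants are not stated on the slide.

```python
# Approximate capacity of an extended address volume (EAV).
# Assumes standard 3390 track geometry; constants are not from the slide.
BYTES_PER_TRACK = 56_664   # 3390 track capacity in bytes (assumed)
TRACKS_PER_CYL = 15        # tracks per 3390 cylinder (assumed)

def eav_capacity_gb(cylinders):
    """Return the decimal-GB capacity of a CKD volume with the given cylinder count."""
    return cylinders * TRACKS_PER_CYL * BYTES_PER_TRACK / 1_000_000_000

print(round(eav_capacity_gb(262_668), 1))  # roughly 223 GB, matching the slide
```

The result agrees with the 262,668-cylinder limit quoted above.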

Release 4.2 additional features
• 1 TB 7200 rpm serial ATA (SATA) drive sets
– RAID-6 and RAID-10 only
– SATA drives will not share sparing capability with non-SATA drives
– Intelligent write caching
• Remote pair FlashCopy
– Allows a FlashCopy relationship where the FlashCopy target device is a Metro
Mirror secondary device
• Full disk encryption drive sets
– Uses TKLM for key management
– Encrypted data at rest
– Protects sensitive data when drives leave the data center
– Simplifies retirement or re-purposing of older systems
• 73 GB and 146 GB solid state drives (SSDs)
– Increased performance for transactional applications
• VMware Site Recovery Manager support
• Network time protocol (NTP)
• LDAP support
DS8000: R4.3 additional features
• Announcing new feature codes for 242x Hardware Machine
Types
– Thin Provisioning

• Announcing new functions for all M/Ts (no new feature codes):
– Quick init (lower case is intentional, not IBM reserved naming)
– High Performance FICON for System z (zHPF) multi-track support

DS8000: R5.0 code release
• A new microcode level has been released to support the new
DS8700.
– Code R5.0 is supported only on DS8700 (and not on DS8100 or
DS8300)

• Other new features:


– Enhancements to disk encryption key management with support for:
• Encryption deadlock recovery key
• Dual platform key server support
– Value-based licensing

DS8000: R5.1 additional features
• Announcing new feature codes for 2107 HW M/Ts: No announcement with this
release
• Announcing new feature codes for 242x HW M/Ts Models 941 and 94E: RFA
52712
– New 600 GB 15k rpm Fibre Channel drives
– New 2 TB 7.2k rpm SATA drives
– 8 drive options for SSD
• 73 GB SSD half drive set
• 146 GB SSD half drive set
– Initial system capacity
– Release 5.1 Bundle Family
– Thin Provisioning (Model 941)
• No Copy Services support
– Easy Tier
• Announcing new functions for all Model 941 M/Ts (no new feature codes):
– Multiple GM sessions - RPQ
– Extended Distance High Performance FICON
– Active volume delete protection
– Encryption: Disable recovery key
DS8000: R6.0 code release
• A new microcode level has been released to support the new DS8800.
– Code R6.0 is supported only on DS8800 (and not on DS8100, DS8300 or
DS8700).

• Code R6.0 has the following limitation:


– The 16 GB processor memory option is no longer offered
– Multiple Global Mirror sessions are not yet supported
– zHPF extended distance capability is not supported
– z/OS distributed data backup is not supported
– IBM disk full page protection is not supported
– Easy Tier is not yet supported
– Quick initialization and Thin Provisioning are not yet supported
– Remote Pair FlashCopy is not yet supported
– SSD drive sets are not supported in RAID-6 or RAID-10 configurations

• Most of these limitations should be removed in the next code release, R6.1.
DS8000: Supported operating systems for servers
• The DS8000 supports more than 90 platforms, including:
– Open Systems
(Check the DS8000 series interoperability matrix for complete and updated information on this subject.)
• Fujitsu Primepower
• Hewlett-Packard (HP-UX)
• Hewlett-Packard AlphaServer (OpenVMS and Tru64 UNIX)
• IBM System i (AIX, OS/400, i5/OS, and Linux)
• IBM System p (i5/OS and Linux)
• IBM System p RS/6000 and Cluster 1600 (AIX)
• Linux (x86, x64, EM64T, AMD64, and IA64)
• Intel servers (NetWare)
• AMD and Intel servers (VMware)
• Windows (x86, x64, EM64T, AMD64, IA64)
• Apple Macintosh (OS X)
• SGI Origin servers (IRIX)
• Sun (Solaris)
– IBM System z
• z/OS, z/VM, VSE/ESA, and z/VSE
• Transaction Processing Facility (TPF)
• Linux
DS8000: Management interfaces (1 of 2)
• IBM System Storage DS Storage Manager GUI (DS SM: Web-based
GUI)
– Used to perform logical configurations and Copy Services management functions
– Installed through GUI (graphical mode) or unattended (silent mode), and offers:
• Real-time configuration (online)
– Logical configuration and Copy Services for a network-attached storage unit

• DS command line interface (DSCLI: script-based)


– Open hosts invoke and manage FlashCopy, Metro and Global Mirror functions
• Handle batch processes and scripts
• Check storage unit configuration and perform specific application functions
• For example:
– Check and verify storage unit configuration
– Check current Copy Services configuration used by storage unit
– Create new logical storage and Copy Services configuration settings
– Modify or delete logical storage and Copy Services configuration settings

DS8000: Management interfaces (2 of 2)
• DS open application programming interface (API)
– Non-proprietary storage management client application supporting:
• Routine LUN management activities (creation, mapping, masking)
• Creation or deletion of RAID 5 and RAID 10 volume spaces
• Copy Services functions: FlashCopy, PPRC
– Helps to integrate configuration management support into existing storage
resource management (SRM) applications
– Enables automation of configuration management through customer-written
applications
– Complements the use of Web-based DS-SM and script-based DSCLI
– Implemented through IBM System Storage Common Information Model (CIM)
agent
• Middleware application providing CIM-compliant interface
– Uses CIM technology to manage proprietary devices as open system devices
through storage management applications
– Allows these applications to communicate with a storage unit
– Used by TPC for Disk
Topic 2: Hardware Maintenance Console

DS8000 management console overview
• System Storage Management Console: MC
– Other possible names: HMC or DS HMC
– On DS6000: SMC

• Storage Management Console is the focal point for configuration,


Copy Services management, and maintenance activities
– A dedicated workstation physically installed inside your DS8100 or DS8300; it
automatically monitors the state of your system, notifying you and IBM when
service is required.
– The Management Console can also be connected to your network to enable
centralized management of your system using the IBM System Storage DS
Command-Line Interface or storage management software that uses the IBM
System Storage DS Open API.
– An external Management Console is available as an optional feature and can be
used as a redundant management console for environments with high-availability
requirements.

• Internal Management Console feature code: 1100


• External Management Console feature code: 1110
DS HMC software components and
communication
• Components of the DS HMC
– The DS HMC includes an application that runs within a WebSphere environment
on the Linux-based DS8000 management console
– Consists of two servers
• DS Storage Management server
– Is the logical server that runs in a WebSphere environment on the DS HMC
– Communicates with the outside world to perform DS8000-specific tasks
• DS NW IFS (Network Interface Server)
– Is the logical server that also runs in the WebSphere environment on the DS HMC
– Communicates with the DS Storage Management server, and also interacts with the two
controllers of the DS8000

• Logical flow of communication


– The DS8000 provides several management interfaces:
• DS Storage Manager Graphical User Interface (GUI)
• DS Command-Line Interface (DS CLI)
• DS Open Application Programming Interface (DS Open API)
• Web-based user interface (Web GUI), specifically for use by support personnel

DS HMC: Logical flow of communication

DS8000 DS HMC (1 of 2)
• Focal point for:
– Configuration, Copy Services, maintenance
• Dedicated workstation installed inside DS8000
– Same as eServer POWER HMC
– More generally known as Storage System Management Console (MC)
– Known as SMC in DS6000
– Dedicated workstation physically installed inside your system
– Automatically monitors the state of system
– Notifies user and IBM when service required (call home)
– Can also be connected to network
• Enables centralized management through GUI, CLI, or open API
• External management console (optional, feature code 1110)
– For redundancy with high availability

DS8000 DS HMC (2 of 2)
• Provides the following:
– Local service
• Interface for local service personnel
– Remote service
• Call home and call back
– Storage facility configuration
• LPAR management (HMC)
• Supports logical storage configuration through preinstalled System Storage
DS Storage Manager in online mode only
– Network Interface Server for logical configuration and invocation of
advanced Copy Services functions
– Connection to storage facility (DS8000) through redundant private
Ethernet networks only
• Service appliance (closed system)
DS8000 S-HMC and POWER HMC

[Diagram: POWER HMC connected by Ethernet to a POWER5 processor complex with two AIX partitions, unassigned resources, the POWER5 Hypervisor with virtual consoles, a service processor, non-volatile RAM, and LPAR allocation tables for memory regions and I/O slots. The HMC exchanges status and command/response traffic with the complex.]

POWER HMC features:
• Logical partition configuration
• Dynamic logical partitioning
• Capacity and resource management
• System status
• HMC management
• Service functions (microcode update, and so on)
• Remote HMC interface
DS HMC and a pair of ethernet switches
• A DS8000 comes with
– An internal DS HMC (Feature code 1100) in the base frame
– A pair of Ethernet switches installed and cabled to
• The processor complex or external HMC, or both

• The DS HMC is a focal point with multiple functions, including:


– Storage configuration
– LPAR management
– Advanced Copy Services management
– Interface for local service personnel
– Remote service and support and Call Home

• The DS HMC hardware includes:
– Two built-in Ethernet ports
– One dual-port Ethernet PCI adapter
– One PCI modem for asynchronous Call Home support

DS HMC: Network configuration
• S-HMC network consists of:
– Redundant private Ethernet networks for connection to the Storage Facility(ies)
– Customer network configured to allow access from the S-HMC to IBM through a
secure Virtual Private Network (VPN)

• Call home to IBM Services is possible through dial-up (PCI modem in
the S-HMC) or Internet connection VPNs

• Dial-up or Internet connection VPNs are also available for IBM service
to provide remote service and support

• Recommended configuration is to connect the S-HMC to the customer’s public
network for support
– Support will use WebSM GUI for all service actions
– Downloading of problem determination data favors the use of a high-speed
network

• Network connectivity and remote support are managed by the S-HMC


Topic 3: Remote support

Remote support features
• Call Home for Service
– The capability of the DS HMC to contact IBM support services to report a service
event
– The DS HMC provides
• Machine reported product data (MRPD) information to IBM by way of the Call Home
facility
– The MRPD information includes:
• Installed hardware, configurations, and features
– Call Home Service can only be initiated by the DS HMC

• Remote Services
– IBM personnel located outside of the client facility can log in to the DS HMC
• To provide service and support
– The methods available for IBM to connect to the DS HMC
• Are configured by the IBM Support Service Representative (SSR) at the direction of
the client
– And can include dial-up only access or access through a high-speed Internet connection

Remote support connections
• Dial-up Connection
– A low-speed asynchronous modem connection to a telephone line
• This connection typically favors transmitting small amounts of data
– When configuring for a dial-up connection, the following information must be provided:
• Which dialing mode to use: either tone or pulse
• Whether a dialing prefix is required when dialing an outside line

• Secure High-Speed Connection


– This connection is through a high-speed Ethernet connection
• That can be configured through a secure VPN Internet connection to ensure
authentication and data encryption
– You can configure a remote connection to meet the following client requirements:
• Allow call on error (machine-detected)
• Allow connection for a few days (client-initiated)
• Allow remote error investigation (service-initiated)

• A graphical interface (WebSM) has been chosen
• For servicing the storage facility and for the problem determination activity logs, error
logs, and diagnostic dumps that might be required for effective problem resolution

Remote support option

Remote support connection: Management
• Establishing a remote support connection
dscli> setvpn –vpnaddr smc1 -action connect
Date/Time: April 20, 2009 11:39:00 CEST IBM DSCLI Version: 5.4.1.81
CMUC00232I setvpn: Secure connection is started successfully through the network.

• Terminating a remote support connection


dscli> setvpn –vpnaddr –action disconnect
Date/Time: April 20, 2009 15:32:40 CEST IBM DSCLI Version: 5.4.1.81
CMUC00267I setvpn: The secure connection has ended successfully.

• Support authorization using remote support


Authorization level: Allowed actions
• Remote: Establish a VPN session back to IBM; work with the standard support
functions using the WebSM GUI
• PE: Establish a VPN session back to IBM; work with advanced support
functions using the WebSM GUI
• Developer: Allowed root access, using ssh, but only if a PE user is logged in
using the WebSM GUI
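When driving these commands from automation, it can help to build the argument list programmatically before handing it to a subprocess runner. The helper below is a hypothetical sketch, not part of DSCLI; only the setvpn flag spelling is taken from the examples above.

```python
# Build a DSCLI setvpn command line for scripted remote-support management.
# Illustrative helper only; setvpn syntax follows the slide's examples.
def build_setvpn(action, vpnaddr=None):
    if action not in ("connect", "disconnect"):
        raise ValueError("action must be 'connect' or 'disconnect'")
    cmd = ["dscli", "setvpn"]
    if vpnaddr:                     # e.g. "smc1" when starting a session
        cmd += ["-vpnaddr", vpnaddr]
    cmd += ["-action", action]
    return cmd

print(" ".join(build_setvpn("connect", "smc1")))
# dscli setvpn -vpnaddr smc1 -action connect
```

The same helper with action="disconnect" reproduces the termination example.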
VPN establish process flow
1. Request connection using ASCII Terminal session

2. VPN tunnel established back from HMC

3. Log on to HMC using WebSM

Topic 4: Hardware components –
DS8100/8300

Base frame
• Standard 19” rack mounting space
• Dense HDD packaging: 16 drives per pack
• Dual FC-AL loop switches
– Point-to-point isolation
– Two simultaneous operations per loop
• Storage-hardware maintenance console
• Redundant power with battery backup units (BBUs)
• Processor complex: IBM eServer p5 570, dual 2-way or dual 4-way
• Four I/O enclosure bays
– Each bay supports four host adapters and two device adapters
• Host adapter: four FCP/FICON ports or two ESCON ports
• Device adapter: four FC-AL ports
Expansion frame

For 9x2 base frame model

Frames

DS8000 series architecture

Rack operator window

Topic 5: Processor complex –
DS8100/8300

DS8000 processor complex (1 of 3)

CEC enclosures in the Model 921/931 each have one processor card (2-way)
CEC enclosures in the Model 922/932 and 9A2/9B2 each have two processor
cards (4-way)

CEC: Central Electronics Complex


CEC enclosures contain components such as the processor cards, cache memory, and
CEC hard drives.
DS8000 processor complex (2 of 3)
• The complex comprises IBM eServer System p POWER5 servers (921, 922, and
9A2)
– 2-way 1.5 GHz (3X ESS 800)
– 4-way 1.9 GHz (6X ESS 800)

• The newer DS8000 Turbo models (931, 932, and 9B2) use POWER5+ processors
– 15% performance improvement
– 2.2 GHz for POWER5+ 2-way and 4-way

• The POWER5 processor supports logical partitioning


– The p5 hardware and Hypervisor manage the real to virtual memory mapping to provide robust
isolation between LPARs.
– IBM has been doing LPARs for 20 years in mainframes and 3 years in System p.

• LPARs are split 50-50 (by default), so:


– A 4-way has two processors to one LPAR and two processors to the other LPAR.
– LPARs only possible in the 4-way P5s (RIO-G cannot be shared in 2-way).

• Cache memory ranges from 16 GB to 256 GB

• Persistent memory ranges from 1 GB to 8 GB: dependent on cache size


– Battery backed for backup to internal disk (4 GB per server)
DS8000 processor complex (3 of 3)

(Persistent memory)

DS8000 persistent memory
• The 2107 does not use NVS cards, NVS batteries, or NVS battery
chargers

• Data that would have been stored in the 2105 NVS cards resides in the
2107 CEC cache memory
– A part of the system cache is configured to function as NVS storage

• In case of power failure, if the 2107 has pinned data in cache, it is


written to an extra set of two disk drives located in each of the CEC
enclosures

• Two disk drives total in each CEC:


– For LIC (LVM Mirrored AIX 5.3 + DS8000 code)
– For pinned data and other CEC functions

• During the recovery process, the pinned data can be restored from the
extra set of CEC disk drives just as it would have been from the NVS
cards on the ESS 800

Topic 6: I/O enclosures – DS8100/8300

DS8000 I/O enclosure

Processor complex

Processor complex

RIO-G and I/O enclosures
• Also called I/O drawers

• Contain six PCI-X slots: 3.3V, 133 MHz blind swap hot-plug:
– Four host adapter cards with four ports each:
• FCP or FICON adapter ports
– Two device adapter cards with four ports each:
• Four FC-AL ports per card
• Two FC-AL loops per card

• Accesses cache through RIO-G internal bus

• Each adapter has its own PowerPC processor

• Owned by processors in LPAR

• Uses system power control network (SPCN)


– Controls and monitors the status of the power and cooling within the I/O enclosure
– Cabled as a loop between the different I/O enclosures

I/O enclosures

SPCN: System Power control network

DS8000 RIO-G port: Layout example

• Up to four I/O enclosures in the same RIO-G loop
• Up to 20 I/O enclosures per p5-570 system
• Max effective bandwidth: 2000 MB/sec per RIO-G loop

Each RIO-G port can operate at 1 GHz in bidirectional mode and is capable of passing data in each
direction on each cycle of the port. Maximum data rate per I/O enclosure: 4 GBps.
It is designed as a high-performance, self-healing interconnect.
The p5-570 provides two external RIO-G ports, and an adapter card adds two more.
Two ports on each processor complex form a loop.

The figure shows how the RIO-G cabling is laid out in a DS8000 that has eight I/O drawers.
This would only occur if an expansion frame were installed.
The DS8000 RIO-G cabling varies based on the model.
DS8000 device adapters (1 of 2)

Processor complex

Processor complex

DS8000 device adapters (2 of 2)
• Device adapters support RAID 5, RAID 6, or RAID 10

• FC-AL switched fabric topology

• FC-AL dual ported drives are connected to FC switch in the disk


enclosure backplane

• Two FC-AL loops connect disk enclosures to device adapters

• Array across loops is standard configuration option in DS8000


– Two simultaneous I/O ops per FC-AL connection possible
– Switched FC-AL or switched bunch of disks (SBOF) used for back-end access

• Device adapters are attached to a FC switch within the enclosure

• Four paths to each drive: 2 FC-AL loops X dual port access
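Since the device adapters support RAID 5, RAID 6, and RAID 10, a back-of-the-envelope comparison of usable space per 8-DDM array may help. This is a simplified sketch that ignores spare drives (e.g. 6+P+S formats) and formatted-capacity overhead, both of which real DS8000 configurations do reserve; the drive size is just an example.

```python
# Rough usable capacity of an 8-drive array by RAID level.
# Simplified: ignores spares and formatted-capacity overhead.
def usable_gb(raid, drives=8, drive_gb=300):
    if raid == "RAID5":          # one drive's worth of parity (7+P)
        return (drives - 1) * drive_gb
    if raid == "RAID6":          # two independent parity stripes (6+P+Q)
        return (drives - 2) * drive_gb
    if raid == "RAID10":         # mirrored pairs (4x2)
        return (drives // 2) * drive_gb
    raise ValueError(raid)

for level in ("RAID5", "RAID6", "RAID10"):
    print(level, usable_gb(level))
```

RAID 6 trades one more drive of capacity than RAID 5 for the second parity stripe; RAID 10 halves capacity for mirroring.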


FC device adapters with 2 Gbps ports
• DA performs RAID logic, offloading servers of that workload
• Each port has up to five times the throughput of previous SSA-based DA ports
• DS8000 configured for array across loops (AAL)
• Eight RAID 5 or RAID 10 DDMs spread over two loops
Topic 7: Disk enclosures

Disk subsystem
• The disk subsystem consists of three components:
– First: Located in the I/O enclosures are the device adapters
• These are RAID controllers that are used by the storage unit to access the
RAID arrays

– Second: The DAs connect to switched controller cards in the disk


enclosures
• This creates a switched Fibre Channel disk network

– Finally, the disks themselves


• The disks are commonly referred to as disk drive modules (DDMs)

Device adapters

• The DA can operate at up to 2 Gbps


• The DAs are installed in pairs because each storage partition requires its own
adapter to connect to each disk enclosure for redundancy

Disk enclosures

• Disk enclosures are installed in pairs


– One in the front and one in the back
• Starting with the Licensed Machine Code (LMC) level 5.4.xx.xx
– An intermix of 10K RPM and 15K RPM disks within the same enclosure is supported

DS8000 switched FC-AL technology
• Key features of switched FC-AL technology are:
– Standard FC-AL communication protocol from DA to DDMs
– Direct point-to-point links are established between DA and DDM
– Isolation capabilities in case of DDM failures (easy problem determination)
– Predictive failure statistics

• Key benefits of the DS8000 dual redundant switched FC-AL:


– Two independent networks to access the disk enclosures
– Four access paths to each DDM
– Each device adapter port operates independently
– Double the bandwidth over traditional FC-AL loop implementations

Switched FC-AL implementation

Switched FC-AL advantages
• DS6000 and DS8000 use switched FC-AL technology to link the device adapter (DA) pairs and the
DDMs.
• Switched FC-AL uses the standard FC-AL protocol, but the physical implementation is different.
• The key features of switched FC-AL technology are:
– Standard FC-AL communication protocol from DA to DDMs
– Direct point-to-point links are established between DA and DDM
• No arbitration and no performance degradation
– Isolation capabilities in case of DDM failures provide easy problem determination
– Predictive failure statistics
– Simplified expansion: No cable rerouting required when adding another disk enclosure
• The DS8000 architecture employs dual redundant switched FC-AL access to each of the disk
enclosures.
• The key benefits of doing this are:
– Two independent switched networks to access the disk enclosures
– Four access paths to each DDM in DS8000 architecture (dual switches)
– Each device adapter port operates independently
– Double the bandwidth over traditional FC-AL loop implementations
• Each DDM is attached to two separate Fibre Channel switches.
– This means that with two device adapters, we have four 2 Gbps effective data paths to each disk
• When a connection is made between the device adapter and a disk, the connection is a switched
connection that uses arbitrated loop protocol.
– This means that a mini-loop is created between the device adapter and the disk
– Results in four simultaneous and independent connections, one from each device adapter port

DS8000: Storage enclosure and DA cabling

Topic 8: Host adapters – DS8100/DS8300

DS8000 host adapters

Processor complex

Processor complex

ESCON host adapters
• The ESCON adapter is:
– A dual-ported host adapter for connection to older System z hosts
– The ports on the ESCON card use the MT-RJ type connector

• Control units and logical paths


– ESCON architecture recognizes only 16 3990 logical control units (LCUs)
• Even though the DS8000 is capable of emulating far more, these extra control units
can be used by FICON
• An ESCON link consists of two fibers, one for each direction
• Each ESCON adapter card supports two ESCON ports, and each port supports 64
logical paths

• ESCON distances
– Without repeaters, the ESCON distances are
• 2 km with 50 micron multimode fiber
• 3 km with 62.5 micron multimode fiber
– The DS8000 supports all models of the IBM 9032 ESCON directors
• That can be used to extend the cabling distances

Fibre Channel and FICON host adapters (1 of 2)
• Each Fibre Channel card offers
– Four Fibre Channel ports at 2 or 4 Gbps (depending on the host adapter)
• Each 2 Gbps port independently auto-negotiates to either 2 or 1 Gbps link speed
• Each 4 Gbps port independently auto-negotiates to either 4 or 2 Gbps link speed

– Each port can be either FCP or FICON


• The ports are initially defined as switched point-to-point FCP for fabric topology, but
can be configured as FC-AL for arbitrated loop topology
• A port cannot be both FCP and FICON simultaneously but it can be changed as
required

• Fibre Channel distances


– There are two types of host adapter cards you can select:
• Longwave: you can support a distance of up to 10 km (non-repeated)
• Shortwave: you are limited to a distance of 300 to 500 meters (non-repeated)
– All ports on each card must be either longwave or shortwave
• There can be no mixing of types within a card

Fibre Channel and FICON host adapters (2 of 2)

HA with four Fibre Channel ports
• Each port configured as FCP or FICON
– More FICON logical paths: ESS (1024) versus DS8000 (2048)
– One FICON channel addresses 16,384 devices
– One HA card covers all the 65,280 devices that a DS8000 supports
• (64K - 256)
– Up to 16 HAs in a DS8100 or 32 HAs in a DS8300
• 16 FICON channel ports to each single device
• Current System z channel subsystems are limited to eight channel paths per device
– Front end of:
• 128 ports for DS8300 (8 times ESS)
• 64 ports for DS8100 (4 times ESS)
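The device-count arithmetic quoted above can be checked directly. This is just a worked check; the figures (256 reserved devices, 16,384 devices per FICON channel, 4 ports per HA card) are the ones on this slide.

```python
# Worked check of the device-count arithmetic quoted above.

max_devices = 64 * 1024 - 256       # "64k - 256" from the slide
assert max_devices == 65280

devices_per_ficon_channel = 16384   # one FICON channel addresses 16,384 devices
ports_per_ha_card = 4
# one HA card can therefore reach every device a DS8000 supports:
assert ports_per_ha_card * devices_per_ficon_channel >= max_devices
print(max_devices)
```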

© Copyright IBM Corporation 2011


DS8000 4 Gb host adapter performance
New 4 Gb host adapters are designed to improve single-port throughput performance by 50%.

4 Gb / 2 Gb HA performance comparison

© Copyright IBM Corporation 2011


Topic 9: Architecture – DS8100/8300

© Copyright IBM Corporation 2011


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
DS8000 frames
• Base frame:
– The base frame contains two processor complexes: eServer p5 570 servers
• Each of them contains the processor and memory that drive all functions within the
DS8000.
– The base frame can contain up to eight disk enclosures; each can contain up to
16 disk drives.
• In a maximum configuration, the base frame can hold 128 disk drives.
– The base frame contains four I/O enclosures.
• I/O enclosures provide connectivity between the adapters and the processors.
• The adapters contained in the I/O enclosures can be either device or host adapters
(DAs or HAs).
– The communication path used for adapter to processor complex communication is
the RIO-G loop.
• Expansion frames:
– Each expansion frame can hold up to 16 disk enclosures which contain the disk
drives.
• In a maximum configuration, an expansion frame can hold 256 disk drives.
– Expansion frames can contain four I/O enclosures and adapters if they are the
first expansion frame that is attached to either a model 932 or a model 9B2.
© Copyright IBM Corporation 2011
IBM System Storage DS8100: 2-way

Up to 128 disks

Power supplies
HMC

IBM eServer System p


POWER5 servers

Batteries I/O drawers

© Copyright IBM Corporation 2011


DS8100: Model 921 and 931 2-way
• Up to 16 host adapters (HAs)
– FCP/FICON HA: Four independent ports
– ESCON HA: Two ports
• Up to four device adapter (DA) pairs
– DA pairs 0 / 1 / 2 / 3
– Automatically configured from DDMs
• Maximum configuration (384 DDMs)
– DA pair 0 = 128 DDMs
– DA pair 1 = 64 DDMs
– DA pair 2 = 128 DDMs
– DA pair 3 = 64 DDMs
– Balanced configuration at 256 DDMs:
In other words, 64 DDMs per DA pair
– DA (card) plugging order: 2 / 0 / 3 / 1
(Diagram: DA pair placement across the frame positions)
© Copyright IBM Corporation 2011
DS8300: 4-way with two expansion frames
HMC
Power supplies

Up to 640 Disks

p5 (POWER5) servers
Batteries I/O drawers

© Copyright IBM Corporation 2011


DS8300: Models 922, 932, 9A2, and 9B2 4-way
• Up to 32 host adapters
– FCP/FICON HA: Four independent ports
– ESCON HA: Two ports
• Up to eight DA pairs
– DA pairs 0 to 7
– Automatically configured from DDMs
• Maximum configuration (640 DDMs)
– DA pairs 1, 3-7 = 64 DDMs
– DA pairs 2, 0 = 128 DDMs
– Balanced configuration at 512 DDMs: In other words, 64 DDMs per DA pair
– DA (card) pair plugging order: 2/0/6/4/7/5/3/1
(Diagram: host path placement. Two paths on the same adapter is a bad decision;
two paths on separate adapters is a good idea for a pool.)
© Copyright IBM Corporation 2011
DS8300 with five frames

(Diagram: DA pair layout across the base frame and four expansion frames)

© Copyright IBM Corporation 2011


Topic 10: Cache management

© Copyright IBM Corporation 2011


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
Sequential prefetching in adaptive replacement
cache
• SARC basically attempts to determine four things:
– When data is copied into the cache
– Which data is copied into the cache
– Which data is evicted when the cache becomes full
– How the algorithm dynamically adapts to different workloads

• SARC uses:
– Demand paging for all standard disk I/O
– Sequential pre-fetch for sequential I/O patterns
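Those two behaviors can be sketched with a toy cache. This is illustrative only, not IBM's SARC implementation; the track numbers and the prefetch depth are invented.

```python
# Toy illustration of the two behaviors above: demand paging for random I/O,
# and prefetch once a sequential access pattern is detected. NOT IBM's SARC.

class ToyCache:
    def __init__(self, prefetch_depth=2):
        self.cache = set()
        self.last_track = None
        self.prefetch_depth = prefetch_depth

    def read(self, track):
        hit = track in self.cache
        self.cache.add(track)  # demand paging: stage the track on a miss
        if self.last_track is not None and track == self.last_track + 1:
            # sequential pattern detected: pre-fetch the following tracks
            for t in range(track + 1, track + 1 + self.prefetch_depth):
                self.cache.add(t)
        self.last_track = track
        return hit

c = ToyCache()
c.read(10); c.read(11)            # a sequential pair triggers prefetch of 12, 13
assert c.read(12) and c.read(13)  # both reads are now cache hits
assert not c.read(40)             # a random track is a miss, demand-paged
```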

© Copyright IBM Corporation 2011


DS8000 caching using SARC
Benefits of adaptive replacement caching
• Best caching algorithms in industry
• Over 20 years of experience
• Features
– Self-learning algorithms
• Adaptively and dynamically learn what data should be
stored in cache based upon the recent access and
frequency needs of the hosts
– Adaptive replacement cache
• Most advanced and sophisticated algorithm to determine
what data in cache is removed to accommodate newer data
– Prefetching
• Predictive algorithm to anticipate data prior to a host
request and loads it into cache
• Benefits
– Leading performance
• Proven to improve cache hits by up to 100% over previous IBM caching
algorithms and improve I/O response time by 25%
– More efficient use of cache
• Intelligent caching algorithm profiles host access patterns to determine
what data is stored
• Needs less cache than competitors
(Chart: cache hit ratio versus cache size in GB, for z/OS and Open workloads)
Nimrod Megiddo and Dharmendra S. Modha, "Outperforming LRU with an Adaptive Replacement Cache Algorithm," IEEE Computer, pp. 4-11, April
2004.
© Copyright IBM Corporation 2011
DS8000 caching using AMP
What is AMP?
• A breakthrough caching technology from IBM Research called Adaptive
Multi-stream Prefetching (AMP)
– Can dramatically improve performance for common sequential and batch
processing workloads.
• AMP optimizes cache efficiency by incorporating an autonomic,
workload-responsive, self-optimizing prefetching technology.
– The algorithm dynamically decides what to prefetch and when to prefetch.
– Delivers up to a two-fold increase in the sequential read capacity of RAID 5
arrays.
– The bandwidth for a fully configured DS8000 remains unchanged.
– May improve sequential read performance for smaller configurations and single
arrays.
– Reduces the potential for array hot spots due to extreme sequential workload
demands.
– May significantly reduce elapsed time for sequential read applications
constrained by array bandwidth such as BI and critical batch processing
workloads.
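The "what to prefetch and when" decision can be illustrated with a toy depth-adaptation rule. This is not the real AMP algorithm; the doubling/halving policy and the bounds are invented for illustration only.

```python
# Toy sketch of a workload-responsive prefetch depth (NOT the real AMP):
# grow the per-stream prefetch amount when a sequential stream stalls on a
# miss, and shrink it when prefetched tracks age out of cache unused.

def adapt_depth(depth, stream_stalled, prefetch_wasted,
                min_depth=1, max_depth=64):
    if stream_stalled:       # prefetch too shallow: the read missed cache
        depth = min(depth * 2, max_depth)
    elif prefetch_wasted:    # prefetch too deep: data was evicted unused
        depth = max(depth // 2, min_depth)
    return depth

d = 4
d = adapt_depth(d, stream_stalled=True,  prefetch_wasted=False)  # deepen to 8
d = adapt_depth(d, stream_stalled=False, prefetch_wasted=True)   # back to 4
assert d == 4
```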

© Copyright IBM Corporation 2011


AMP doubles sequential read bandwidth for a
single RAID 5 array

(Chart: throughput in MB/sec for 6+P and 7+P RAID 5 arrays, before AMP versus with AMP)

© Copyright IBM Corporation 2011


Topic 11: Reliability, availability, and
serviceability

© Copyright IBM Corporation 2011


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
Naming
• Storage complex
– A group of DS8000s managed by a single
management console
• A storage complex can consist of just a single
DS8000 storage unit
• Storage unit
– Consists of a single DS8000
• Including Expansion Frames
• Storage facility image
– A union of two logical partitions (processor LPARs)
• One from each processor complex
• Each LPAR hosts one server
• Logical partitions and servers
– A server is effectively the software that uses a
processor logical partition that has access to a
percentage of the memory and processor resources
available on a processor complex
• Processor complex
– Is one System p system unit
• Two processors complexes form a redundant pair
© Copyright IBM Corporation 2011
Processor complex RAS (1 of 2)
• The System p5 constitutes the processor complex
– Has the same RAS features as the P5-570
– Is an integral part of the DS8000 architecture

• IBM System p5 RAS leadership features:


– Fault avoidance
• POWER5 systems are built to keep errors from ever happening
– First Failure Data Capture
• If a problem should occur, the ability to diagnose it correctly is a fundamental requirement
– Permanent monitoring
• A way to monitor the system even when the main processor is inoperable
– Mutual surveillance
• Can monitor the operation of the firmware during the boot process
• Can monitor the operating system for loss of control
– Environmental monitoring
• Related to power, fans, and temperature is performed by the System Power Control Network
(SPCN)
• Environmental critical and non-critical conditions generate Early Power-Off Warning (EPOW)
events
– Self-healing
• It is able to recover from a failing component by first detecting and isolating the failed component
– It should then be able to take it offline, fix or isolate it, and reintroduce the fixed or replaced component

© Copyright IBM Corporation 2011


Processor complex RAS (2 of 2)
• IBM System p5 RAS leadership features: Continued
– Memory reliability, fault tolerance, and integrity
• Uses Error Checking and Correcting (ECC) circuitry for system memory
– To correct single-bit and to detect double-bit memory failures
– N+1 redundancy
• Allows the System p5 to remain operational through the use of redundant parts, specifically:
– Redundant spare memory bits in L1, L2, L3, and main memory
– Redundant fans, and redundant power supplies
– Fault masking
• If corrections and retries succeed and do not exceed threshold limits, the system remains
operational with full resources (no client or IBM intervention is required)
– Resource deallocation
• If recoverable errors exceed threshold limits, resources can be deallocated with system
remaining operational
– Allowing deferred maintenance at a convenient time
– Concurrent maintenance
• Provides replacement of the following parts while the processor complex remains running:
– Disk Drives, cooling fans, power subsystems, and PCI-X adapter cards

© Copyright IBM Corporation 2011


Hypervisor: Storage image independent

© Copyright IBM Corporation 2011


Server RAS
• The DS8000 provides data integrity when performing write
operations and server failover.
– Metadata check:
• When application data enters the DS8000, special codes or metadata,
also known as redundancy checks, are appended to that data.
• The metadata remains associated with the application data as it is
transferred throughout the DS8000.
• The metadata is checked by various internal components to validate the
integrity of the data as it moves throughout the disk system.
• It is also checked by the DS8000 before the data is sent to the host in
response to a read I/O request.
• The metadata also contains information used as an additional level of
verification.
– To confirm that the data returned to the host is coming from the desired
location on the disk
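A minimal sketch of the metadata idea described above: a redundancy check and the intended location travel with the data and are re-verified before the data is returned to the host. Here `zlib.crc32` stands in for the DS8000's internal codes, which are not public.

```python
# Sketch only: CRC32 stands in for the DS8000's proprietary redundancy checks.
import zlib

def write_block(data: bytes, lba: int):
    meta = (zlib.crc32(data), lba)   # redundancy check + location information
    return data, meta

def read_block(stored, expected_lba: int) -> bytes:
    data, (crc, lba) = stored
    assert zlib.crc32(data) == crc, "data corrupted in transit"
    assert lba == expected_lba, "data came from the wrong location on disk"
    return data

blk = write_block(b"payload", lba=1234)
assert read_block(blk, expected_lba=1234) == b"payload"
```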

© Copyright IBM Corporation 2011


Server RAS: Server failover and failback
• Under normal operation:
– Both DS8000 servers are actively processing I/O requests.
• Each write is placed into the cache memory of the server owning the volume.
– And also into the NVS memory of the alternate server

• Failover:
– In case of one server failure, the remaining server is able to take over all of its
functions.
• RAID arrays which are connected to both servers can be accessed from the device
adapters of the remaining server.
• Since the DS8000 now has only one copy of data, in the cache of the remaining server, it
takes the following steps:
– It destages the contents of its NVS to the disk subsystem.
– The NVS and cache of the remaining server are divided in two,
> 50% for the odd LSSs and 50% for the even LSSs
– The remaining server now begins processing the writes (and reads) for all the LSSs.

• Failback:
– When the failed server has been repaired, failback process is activated.
• It starts in less than 8 seconds, will finish in less than 15 minutes, and is invisible to the
attached hosts.

© Copyright IBM Corporation 2011


Server RAS: Data flow
The normal flow of data for a write is:
1. Data is written to cache memory in the owning server.
2. Data is written to NVS memory of the alternate server.
3. The write is reported to the attached host as having been completed.
4. The write is de-staged from the cache memory to disk.
5. The write is discarded from the NVS memory of the alternate server.
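The five steps above can be sketched as follows. Plain Python objects stand in for the two DS8000 servers; nothing here is real DS8000 code, and the names are invented.

```python
# Sketch of the five-step fast-write flow (invented data structures).

class Server:
    def __init__(self):
        self.cache, self.nvs = {}, {}

def fast_write(owner, alternate, volume, data):
    owner.cache[volume] = data         # 1. write to owning server's cache
    alternate.nvs[volume] = data       # 2. write to alternate server's NVS
    return "complete"                  # 3. completion reported to the host

def destage(owner, alternate, disk, volume):
    disk[volume] = owner.cache[volume] # 4. destage from cache to disk
    del alternate.nvs[volume]          # 5. discard the NVS copy

s0, s1, disk = Server(), Server(), {}
assert fast_write(s0, s1, "vol1", b"data") == "complete"
destage(s0, s1, disk, "vol1")
assert disk["vol1"] == b"data" and "vol1" not in s1.nvs
```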

© Copyright IBM Corporation 2011


Server RAS: Failover
The remaining server (server 1) executes the following steps:
1. It destages the contents of its NVS to the disk subsystem.
• However, before the actual destage and at the beginning of the failover:
– The working server starts by preserving the data in cache (backed by the remote NVS)
– In addition, the existing data in cache is added to the NVS so that it remains available
2. The NVS and cache of server 1 are divided in two.
• Half for the odd LSSs and half for the even LSSs
3. Server 1 now begins processing the writes (and reads) for ALL the LSSs.
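A sketch of these failover steps, with invented data structures: the surviving server destages its NVS to disk, then serves both LSS parities out of its own split NVS and cache.

```python
# Sketch of failover (invented structures; not real DS8000 code).

class Server:
    def __init__(self, parity):
        self.nvs = {}
        self.owned_parities = {parity}   # "odd" or "even" LSSs

def failover(surviving, disk):
    # 1. destage the surviving server's NVS contents to the disk subsystem
    disk.update(surviving.nvs)
    surviving.nvs.clear()
    # 2./3. its NVS and cache are split in two and it now owns all LSSs
    surviving.owned_parities = {"even", "odd"}

s1, disk = Server("odd"), {}
s1.nvs["vol7"] = b"pending write"
failover(s1, disk)
assert disk["vol7"] == b"pending write"
assert s1.owned_parities == {"even", "odd"}
```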

© Copyright IBM Corporation 2011


Server RAS: Failback
• After failover:
– Server 1 now owns all the LSSs.
• Which means all reads and writes will be serviced by server 1.
• The NVS inside server 1 is now used for both odd and even LSSs.

• When the failed server has been repaired and restarted:


– The failback process is activated.
– Server 1 starts using the NVS in server 0 again.
• And the ownership of the even LSSs is transferred back to server 0
– Normal operations then resume with both controllers active.
– Just like the failover process, the failback process is invisible to the attached
hosts.

• In general, recovery actions on the DS8000 do not impact I/O operation


latency by more than 15 seconds.
– With certain limitations on configurations and advanced functions, this impact to
latency can be limited to 8 seconds

© Copyright IBM Corporation 2011


NVS recovery after complete power loss
• DS8000 preserves fast writes:
– Using the NVS copy in the alternate server
– Battery backup ensures fast writes are not lost

• If both power supplies in the base frame were to stop:


– The servers are running on the batteries and immediately begin a shutdown procedure
1) All Host Adapter I/O is blocked.
2) Each server begins copying its NVS data to internal disk. For each server:
– Two copies are made of the NVS data in that server.
3) When a copy process is complete, each server shuts down.
4) When shutdown in each server is complete (or a timer expires), the DS8000 is powered
down.
– Scenario at power-on:
1) The processor complexes power-on and perform power-on self-tests.
2) Each server then begins boot-up.
3) At a certain stage of the boot process:
– The server detects NVS data on its internal SCSI disks and begins destaging it to
the FC-AL disks.
4) When battery units reach a certain level of charge, the servers come online.

Important note: The servers will not come online until the batteries are sufficiently
charged to handle at least one outage (typically within a few minutes).
© Copyright IBM Corporation 2011
Host connection availability: Single and multiple
path
Single pathed host

Dual pathed host

© Copyright IBM Corporation 2011


Host connection availability
• Multipathing software
– Each attached host operating system requires
• A mechanism to allow it to manage multiple paths to the same device
– And to preferably load balance these requests
– The mechanism that will be used varies by attached host operating system

• Subsystem device driver (SDD)


– IBM recommends the use of SDD to manage both path failover and
preferred path determination.
• In the majority of open systems environments
– SDD is a software product that IBM supplies free of charge within the
DS8000

• MPIO PCM
– Is also supported with AIX 5.2 ML5 (or later) and AIX 5.3 ML1 (or
later)
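What multipathing software such as SDD does for the host can be sketched with the simplest possible policy: round-robin load balancing with failover. This is not SDD's actual algorithm, and the path names are invented.

```python
# Minimal multipath sketch: balance I/O across paths, skip failed ones.
import itertools

class MultipathDevice:
    def __init__(self, paths):
        self.paths = list(paths)
        self._rr = itertools.cycle(range(len(self.paths)))

    def pick_path(self, alive):
        for _ in range(len(self.paths)):     # round-robin load balancing
            p = self.paths[next(self._rr)]
            if alive(p):                     # fail over past dead paths
                return p
        raise IOError("all paths to the device have failed")

dev = MultipathDevice(["fscsi0", "fscsi1"])
assert dev.pick_path(lambda p: True) == "fscsi0"
assert dev.pick_path(lambda p: True) == "fscsi1"
# fscsi1 fails: traffic keeps flowing over the surviving path
assert dev.pick_path(lambda p: p != "fscsi1") == "fscsi0"
```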
© Copyright IBM Corporation 2011
Disk subsystem: Disk path redundancy

© Copyright IBM Corporation 2011


Disk subsystem: RAID 5, RAID 6, and RAID 10
• RAID 5 array: Implementation in the DS8000
– An array built on one array site contains either seven or eight disks
• Depending on whether the array site is supplying a spare (7+P) or
(6+P+S)

• RAID 6 array: Implementation in the DS8000


– An array built on one array site contains either seven or eight disks
• Depending on whether the array site is supplying a spare (6+P+Q) or
(5+P+Q+S)

• RAID 10 array: Implementation in the DS8000


– An array built on one array site contains either six or eight disks
• Depending on whether the array site is supplying a spare (2 x 4) or
(2 x 3+S)
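The three array formats above can be summarized as a lookup keyed by RAID type and whether the eight-disk array site supplies a spare. This is a restatement of the slide; the key names are invented.

```python
# The DS8000 array formats from the slide, as a lookup:
# (raid_type, site_supplies_spare) -> array format

ARRAY_FORMATS = {
    ("RAID 5",  False): "7+P",
    ("RAID 5",  True):  "6+P+S",
    ("RAID 6",  False): "6+P+Q",
    ("RAID 6",  True):  "5+P+Q+S",
    ("RAID 10", False): "2x4",
    ("RAID 10", True):  "2x3+S",
}

assert ARRAY_FORMATS[("RAID 5", True)] == "6+P+S"
assert ARRAY_FORMATS[("RAID 10", False)] == "2x4"
```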

© Copyright IBM Corporation 2011


Disk subsystem: Spare disk
• Spare creation:
– A minimum of one spare disk is created for each array defined.
• Until the following conditions are met:
– A minimum of four spares per DA pair
– A minimum of four spares of the largest capacity array site on the DA pair
– A minimum of two spares of capacity and RPM greater than or equal to the fastest array site
of any given capacity on the DA pair
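The spare-creation conditions above can be expressed as a predicate. The representation of a DA pair's spares is invented for illustration; the thresholds are the ones listed on this slide.

```python
# Sketch of the DA-pair spare conditions (invented representation).

def spares_satisfied(spares, largest_capacity, fastest_rpm_by_capacity):
    """spares: list of (capacity_gb, rpm) tuples for one DA pair."""
    if len(spares) < 4:                       # minimum of four spares
        return False
    # at least four spares matching the largest-capacity array site
    if sum(1 for cap, _ in spares if cap >= largest_capacity) < 4:
        return False
    # at least two spares with capacity and RPM >= the fastest array site
    # of any given capacity on the DA pair
    fast = [s for s in spares
            if any(s[0] >= cap and s[1] >= rpm
                   for cap, rpm in fastest_rpm_by_capacity.items())]
    return len(fast) >= 2

ok = [(300, 15000)] * 4
assert spares_satisfied(ok, 300, {300: 15000})
assert not spares_satisfied(ok[:3], 300, {300: 15000})
```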

• Floating spare
– The DS8000 microcode might choose either:
• To allow the hot spare to remain where it has been moved, or
• To migrate the spare to a more optimum position
– It might be preferable that a DDM that is currently in use as an array member is
converted to a spare.
• In this case, the data on that DDM will be migrated in the background onto an existing
spare.
– This process does not fail the disk that is being migrated:
> It does reduce the number of available spares until the migration process is
complete.

© Copyright IBM Corporation 2011


Disk subsystem: Miscellaneous
• Hot pluggable DDMs:
– Replacement of a failed disk does not affect the operation on a
DS8000
• Because the drives are fully hot pluggable

• Predictive failure analysis (PFA)


– Can anticipate certain forms of failures
• By keeping internal statistics of read and write errors

• Disk scrubbing
– The DS8000 will periodically read all sectors on a disk
• This is designed to occur without any interference with application
performance
– If ECC-correctable bad bits are identified, they are corrected
immediately
© Copyright IBM Corporation 2011
Power and cooling (1 of 2)
The DS8000 has completely redundant power and cooling

• Primary power supply (PPS)


– Each frame has two PPSs that produce voltages for two different areas
• 208V are produced to be supplied to each I/O enclosure and each processor complex.
• 12V and 5V are produced to be supplied to the disk enclosures.

• Battery backup units (BBU)


– Used for NVS (a part of the server’s memory)
– BBUs have a planned working life of at least four years
– The DS8000 can run for up to 50 seconds on battery power
• Before the servers begin to copy NVS to SCSI disk and then shut down

• Rack cooling fans (cooling fan plenum)


– Located on each frame above the disk enclosures
– Draw air from the front of the DDMs and move it out through the top of the frame

© Copyright IBM Corporation 2011


Power and cooling (2 of 2)
• Rack power control card (RPC)
– A part of the power management infrastructure
– There are two RPC cards for redundancy
• Each card can independently control power for the entire DS8000

• Building power loss


– The DS8000 uses an area of server memory as nonvolatile storage
(NVS)
• This memory is used to hold data that has not been written to the disk
subsystem
– The DS8000 takes action to protect that data in case of a building power
failure

• Power fluctuation protection


– The DS8000 tolerates a power fluctuation for approximately 30 ms
• That means momentary interruption of power (often called brownout)
© Copyright IBM Corporation 2011
Microcode updates
• Concurrent Code Updates
– The architecture of the DS8000 allows for concurrent code updates
• This is achieved by using the redundant design of the DS8000
• Each server can hold three different versions of code

• Installation process:
– Internal S-HMC code update
– New DS8000 LMC downloaded on the internal S-HMC
– LMC uploaded from S-HMC to each DS8000 server internal storage
– New firmware can be loaded from S-HMC directly into each device
• May require server reboot with failover of its logical subsystems to the other server
– Update of servers operating system and LMC
• Each server updated one at a time with failover of its logical subsystems to the other
server
– Host adapters firmware update
• Each adapter impacted for less than 2.5 s, which should not affect connectivity
• Longer interruption managed by host’s multipathing software
© Copyright IBM Corporation 2011
DS8000 management console
• Hardware Maintenance Console (HMC)
– Used to perform configuration, management, and maintenance activities
– Can be ordered to be located either physically
• Inside the base frame, or
• External for mounting in a client-supplied rack
– Starting with the LMC 5.4.xx.xx, the HMC is able to work with IPv4, IPv6, or both

• Ethernet switches
– The DS8000 base frame contains two 16-port Ethernet switches
• They allow the creation of a fully redundant management network
– Each server and each HMC has a connection to each switch

• Remote support and Call Home


– Call Home is the capability of the HMC to contact IBM support services to report a
problem

© Copyright IBM Corporation 2011


Earthquake resistance kit
• An optional seismic kit for stabilizing the storage unit rack
– Helps to prevent human injury
– Helps ensure that the system will be available following an earthquake
• By limiting the potential damage to critical system components (such as
hard drives)

© Copyright IBM Corporation 2011


Topic 12: DS8700

© Copyright IBM Corporation 2011


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
Summary: DS8700 (2-way/4-way base frame)
242x model 941

HMC: New laptop

P6 (POWER6) servers

Feature specified 2-way(#4301)/4-way(#4302)


2-way processor memory size 32GB-128GB
4-way processor memory size 32GB-384GB
Power supplies 4-way required for expansion frame models

I/O drawers (slightly larger at 5U)


4 GB features only (no 2GB/ no ESCON)
Drawers attached through PCI-E switch
New host attachment and
Device adapter features (PCI-E)

Batteries

Familiar layout
© Copyright IBM Corporation 2011
DS8000 R5 – DS8700: Models
● DS8700 is the new brand name for the two new models
(base and expansion racks) that offer cost and
performance improvements over the previous models in
the DS8000 family.
– Previous brand names: DS8100 (2-way) and DS8300 (4-way)

● The DS8700 will be available in three configurations:


– DS8700: 2-way base rack (Model 941)
– DS8700: 4-way base rack (Model 941)
– DS8700: Expansion rack (Model 94E)
● DS8700 Model 941 and associated expansion unit model
94E to be ordered with a one-year, two-year, three-year,
or four-year warranty period.
– This flexibility enables you to select the option that best
addresses your business and financial needs.

● Corresponding machine types:


– 2421: One year warranty
– 2422: Two year warranty Designed for high performance,
– 2423: Three year warranty reliability, and enhanced connectivity
– 2424: Four year warranty

© Copyright IBM Corporation 2011


DS8700 hardware overview
● 2-way (Model 941)
– Two dual processor servers (POWER6)
• Up to 128 GB Cache (32, 64, or 128 GB)
– 8 to 64 2 Gb/4 Gb FC/FICON ports (4-port adapters)
– 16 to 128 disks
– Physical capacity from 2.2 TB up to 128 TB

● 4-way (Model 941)


– Two four processor servers (POWER6)
• Up to 384 GB cache (32, 64, 128, 256 or 384 GB)
– 8 to 128 2 Gb/4 Gb FC/FICON ports (4-port adapters)
– 16 to 1024 disks (with 4 expansion frames 94E)
– Physical capacity from 2.2 TB up to 1024 TB

● All models:
– Disks:
• 146 / 300 / 450 / 600 GB FC 15K rpm (encrypted or non-
encrypted)
• 2 TB SATA 7200 rpm
• 73 / 146 / 600 GB SSD (solid-state drives)
– RAID 5, 6, and 10 support
© Copyright IBM Corporation 2011
Comparison with previous models

Notes:
• An expansion frame is only supported on the 4-way DS8700.
• The DS8700 does not offer ESCON host adapters.

                        DS8100               DS8300               DS8700
Processor               DS8000 P5+ 2-way     DS8000 P5+ 4-way     DS8000 P6 2-way or 4-way
DDMs                    16-384               16-1024              16-128 / 16-1024
DDM interface           FC-AL                FC-AL                FC-AL
RAID types              RAID 5, 6, 10        RAID 5, 6, 10        RAID 5, 6, 10
LUNs/CKDs               64K total            64K total            64K total
Max N-port logins/port  510                  510                  510
Max process logins      2K                   2K                   2K
Max logical paths/CU    512                  512                  512
Max LUN                 2 TB                 2 TB                 2 TB
Cache // NVS            16-128 GB // 1-4 GB  32-256 GB // 1-8 GB  32-128 GB // 1-4 GB (2-way)
                                                                  32-384 GB // 1-8 GB (4-way)
Host adapters           ESCON x2             ESCON x2             FC (4 Gbps) x4
                        FC (4 Gbps) x4       FC (4 Gbps) x4
Host adapter slots      16                   32                   16 / 32
Max host adapter ports  64                   128                  64 / 128
Interface protocols     SCSI-FCP/FICON       SCSI-FCP/FICON       SCSI-FCP/FICON
                        (4 Gb or 2 Gb)       (4 Gb or 2 Gb)       (4 Gb)
PPRC fabric             FCP                  FCP                  FCP
DA slots                8                    16                   8 / 16

© Copyright IBM Corporation 2011


Hardware changes from DS8100/DS8300 to the
DS8700
• IBM POWER6-based controller
• PCI Express (PCIe) internal fabric
• New PCIe I/O drawer
• Updated device adapter
• Increased cache and NVS size

© Copyright IBM Corporation 2011


DS8000 topology: All models
(Diagram: Hosts attach through the SAN to the host adapters. Two N-way SMP
servers, each with volatile and persistent memory, connect to the host
adapters and RAID adapters through a high-bandwidth, fault-tolerant
interconnect.)

© Copyright IBM Corporation 2011


DS8300 architecture
(Diagram: Two 4-way POWER5+ p5 570 servers at 2.2 GHz, each with L3 cache
and memory DIMMs, attached to the hosts through a RIO I/O fabric. The
highlighted components are the ones updated in the DS8700.)

© Copyright IBM Corporation 2011


DS8700 architecture
(Diagram: Two 2-way/4-way POWER6 p6 570 servers at 4.7 GHz, each with L3
cache and memory DIMMs, attached to the PCIe I/O drawers through
point-to-point PCIe connections.)

© Copyright IBM Corporation 2011


DS8700 improvements: I/O fabric
• Key difference from the DS8100/DS8300 architecture: In the DS8700, the
RIO loop is isolated.
– Unlike the DS8100/DS8300, the DS8700 I/O towers are on separate
point-to-point connections
– This isolates the server communication from I/O fabric failures, power
supply problems, and repair procedures.

• The DS8700 uses a PCIe I/O fabric instead of PCI-X. PCI Express reduces
the impact of hardware failures.

© Copyright IBM Corporation 2011


Hardware changes in the DS8700

Advanced Gen2 PCIe


High bandwidth I/O fabric
Point-to-point cables

P6 servers
4.7Ghz 2/4-way

New PCIe I/O host bay


PCIe bridged adapters
New adapter processor

Front Rear

© Copyright IBM Corporation 2011


Advanced PCIe internal component interconnect
PCI Express is an interconnect technology designed to provide universal
connectivity for use as a chip-to-chip and chip-to-adapter-card interconnect.

• Performance
– Bi-directional, full duplex, low latency
– Symmetrical access to data > no performance skew

• Reliability:
– Bit error auto-correction with CRC detection

• Industry standard with 800 members in PCIe Special Interest


Group (SIG)

• Millions of lines of firmware support in existence


© Copyright IBM Corporation 2011
DS8700 improvements: Cluster
• POWER6 processor
– 4.7 GHz P6 570 CECs
• 941 2-way
– Up to 128 drives
> Base rack
• 941 4-way
– Up to 1024 drives
> Base rack plus up to four expansion frames (94E)
> First expansion rack contains additional I/O enclosures
• All expansion frames can be added concurrently

• Concurrent upgrade from 2-way to 4-way system

• Dedicated Ethernet adapters for TPC-R no longer


required/available
© Copyright IBM Corporation 2011
DS8700: Upgrade path
• Two models: Base 941 and expansion 94E

• 2-way base with first I/O tower pair feature standard:


– Maximum: 64 disks and 8 host adapters
– Enables lower entry price by not requiring second I/O tower pair
Non-disruptive upgrade path

• 2-way and need > 64 disks OR > 8 host adapters:


– Add second I/O tower pair feature (can be field-added non-disruptively)

• 4-way = 2-way base + processor card feature + second I/O


tower pair feature
• 4-way + first expansion frame (common cabling)



Add expansion frames up to 1024 disks

© Copyright IBM Corporation 2011


DS8700 improvements: I/O drawer and fabric
• New PCIe I/O drawer attachment
– Same number of I/O drawers
• Four in base frame
• Four in first expansion frame
– Slot count for number of host/DA adapters and drives same as DS8300
• Two device adapters (DAs) and four host adapters (HAs) per I/O drawer
• No ESCON card option

• Point-to-point cable connections are PCIe I/O fabric instead of PCIx


– Performance improvement
• Two GBps full duplex per link
– PCIe 4x Gen2 cables used to reduce cable size
– PCIe reduces the impact of hardware failures
• Transient PCIe bit/CRC errors are automatically handled in hardware by
retransmission.
– Hard failures can be mitigated by PCIe lane fallback.
– Degraded PCIe link will continue to operate on reduced lanes.

© Copyright IBM Corporation 2011


DS8700 improvements: Device adapter
• Replace 750FX (0.5 GHz) processor with the 750GX (1 GHz)
processor

• Upgrade to PCIe interface for point-to-point connection

• Why it matters
– Throughput enhancement from PCIe upgrade
– Better performance in IOPS-sensitive workloads
• Enables better utilization of SSD drives

© Copyright IBM Corporation 2011


DS8700 improvements: Cache and NVS
• Cache options
– 941 2-way: 32 GB to 128 GB
– 941 4-way: 32 GB to 384 GB
• NVS sizes (static – based on cache size): 1 GB to 12 GB

Cache size   NVS size
32 GB        1 GB
64 GB        2 GB
128 GB       4 GB
256 GB       8 GB
384 GB       12 GB
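Because the NVS size is static and based on the cache size, the mapping in the table reads directly as a lookup:

```python
# Static cache-to-NVS mapping for the DS8700 (values in GB, from the table).

NVS_FOR_CACHE_GB = {32: 1, 64: 2, 128: 4, 256: 8, 384: 12}

assert NVS_FOR_CACHE_GB[128] == 4
assert NVS_FOR_CACHE_GB[384] == 12
```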

© Copyright IBM Corporation 2011


DS8700 R5 graphical user interface improvements
(1 of 2)

New System Summary panel
• Overview tab displays system configuration and real-time performance data
for a five-minute sampling interval
– Updated every 60 seconds
• Number of racks shown matches the racks in the storage unit

© Copyright IBM Corporation 2011


DS8700 R5 graphical user interface improvements
(2 of 2)

• Hardware Explorer tab (new in R5.0) shows system hardware and a mapping
between logical configuration objects and drives (for example, drives
belonging to an extent pool or array)
• Real images show exactly what bays are populated with what devices

© Copyright IBM Corporation 2011


DS8700: Improving on DS8000
• Increase in performance
– Sequential and IOPs versus DS8300
• Improved flexibility
– Additional concurrent hardware upgrade capability
• Simplify and improved DS8000 design
– Simplified adapter card packaging, smaller cables
• Maintain quality of latest generation DS8000
– Minimize changes to higher level code
– Carry forward the same parts and their locations
• Bring forward features and advanced function in DS8000
– Built from DS8000 R4.2 code base
– Remote mirror and copy functions are interoperable

© Copyright IBM Corporation 2011


Rack components
Rack component changes are limited to the processor complex and the PCIe I/O drawers.

Current DS8000 A rack            DS8700 A rack
Fans              2U             Fans              2U
128 drives        14U            128 drives        14U
HMC laptop        1U             HMC laptop        1U
Ethernet switches 1U             Ethernet switches 1U
Free              2U             Free              2U
CEC               4U             CEC               4U
CEC               4U             CEC               4U
Free              4U             Free              2U
I/O pair          4U             PCIe I/O pair     5U
I/O pair          4U             PCIe I/O pair     5U
Rack power                       Rack power

Current DS8000 B rack            DS8700 B rack
128 drives        14U            128 drives        14U
128 drives        14U            128 drives        14U
Free              2U             PCIe I/O pair     5U
I/O pair          4U             PCIe I/O pair     5U
I/O pair          4U
Rack power                       Rack power

© Copyright IBM Corporation 2011


Topic 13: DS8800

© Copyright IBM Corporation 2011


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
The DS8800
Hardware changes from DS8700 to DS8800:
• Compact and highly efficient drive enclosures
– New 2.5”, small-form-factor drives
– 6 Gbps SAS (SAS-2)
– New enclosures support 50% more drives
• Upgraded processor complexes
– IBM POWER6+ for faster performance
• Upgraded I/O adapters
– 8 Gbps host adapters
– 8 Gbps device adapters
• More efficient airflow
– Front-to-back cooling
– Aligns with data center best practices

DS8800 improvements:
• Support more users with a single DS8800, with 40% better performance
• Save floor space with almost twice the drive density
• Reduce costs with up to 36% less energy consumed

© Copyright IBM Corporation 2011


Summary: DS8800 (2-way/4-way base frame)
242x model 951

[Front view of the base frame, cover removed. Callouts: high-density enclosures with 2.5 in. drives; primary power supplies; management console; POWER6+ controllers; batteries; I/O drawers; 8 Gbps host and device adapters]

© Copyright IBM Corporation 2011


Rack configuration comparison overview
R5 – DS8700
• Storage enclosures mount front and rear
• Chimney cooling, vents out the top
• Copper Fibre Channel cables
• A rack = 8 storage enclosures
• B rack = 16 storage enclosures
• 1024 drives supported in five racks

R6 – DS8800
• Storage enclosures mount on a single side only
• Hot aisle/cold aisle cooling, vents out a single side
• Optical Fibre Channel cables
• A rack = 10 storage enclosures
• B rack = 14 storage enclosures
• C rack = 20 storage enclosures
• Initial offering supports 1056 drives in 3 racks
• (DS8800 architecture supports 3072 drives in eight racks)

© Copyright IBM Corporation 2011


Disk enclosure comparison overview

R5 – DS8700 megapack
• Disk technology
  – 3.5" (LFF) Fibre Channel
• Throughput
  – 2 Gbps FC interconnect backbone
  – 2 Gbps FC to disks
• Density
  – Supports 16 disks per enclosure
  – 3.5U of vertical rack space
• Cabling
  – Passive copper interconnect
• Modularity
  – Rack-level power
  – Rack-level cooling

R6 – DS8800 gigapack
• Disk technology
  – 2.5" (SFF) SAS
• Throughput
  – 8 Gbps FC interconnect backbone
  – 6 Gbps SAS to disks
• Density
  – Supports 24 disks per enclosure
  – 2U of vertical rack space
• Cabling
  – Optical shortwave multimode interconnect
• Modularity
  – Integrated power
  – Integrated cooling
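The enclosure figures above can be reduced to drives per rack unit; a minimal Python sketch (the variable names are ours for illustration, not part of any DS8000 tooling):

```python
# Illustrative sketch: drive density per rack unit (U), computed from the
# enclosure figures quoted above (16 drives in 3.5U vs. 24 drives in 2U).
ds8700_megapack = {"drives": 16, "rack_units": 3.5}
ds8800_gigapack = {"drives": 24, "rack_units": 2.0}

density_8700 = ds8700_megapack["drives"] / ds8700_megapack["rack_units"]
density_8800 = ds8800_gigapack["drives"] / ds8800_gigapack["rack_units"]

print(f"DS8700 megapack: {density_8700:.1f} drives per U")
print(f"DS8800 gigapack: {density_8800:.1f} drives per U")
```

Per unit of vertical rack space, the gigapack packs roughly 12 drives per U against roughly 4.6 for the megapack; the smaller "almost twice" floor-space claim earlier in the unit reflects whole-rack layout, not raw per-U density.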

© Copyright IBM Corporation 2011


DS8800 new disk enclosure close-up

[Photo of the enclosure. Callouts: status indicators; 24 SFF disk drives; dual trunking; redundant controller cards; redundant power supplies]

© Copyright IBM Corporation 2011


DS8800 R6: Continued non-disruptive upgrades
Non-disruptive upgrade path:
• 2-way base with first I/O tower pair feature standard
  – Enables a lower entry price by not requiring a second I/O tower pair
• 2-way, if > 64 drives or > 8 host adapters: add the second I/O tower pair feature (can be field-added non-disruptively); 240 total drives
• 4-way = 2-way base + processor card feature + second I/O tower pair feature
• 4-way + first expansion frame (common cabling): 576 total drives
• Add other expansion frames: up to 1056 total drives

© Copyright IBM Corporation 2011


DS8800 rack configuration: Expanded

A rack (base frame) — 240 disks:
• 10 gigapack enclosures (2U each, 240 drives)
• HMC laptop (1U) and Ethernet switch (1U)
• Two CECs (4U each)
• Reserved space
• Two PCIe I/O pairs (5U each)
• Rack power

B rack (first expansion) — 576 disks cumulative:
• 14 gigapack enclosures (2U each, 336 drives)
• Two PCIe I/O pairs (5U each)
• Rack power

C rack (second expansion) — 1056 disks cumulative:
• 20 gigapack enclosures (2U each, 480 drives)
• Rack power
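The drive counts in this configuration follow directly from the 24-drive gigapack enclosure; a quick Python sketch (names are ours for illustration, not DS8000 tooling):

```python
# Illustrative sketch: per-rack and cumulative drive counts for the
# expanded DS8800 configuration, from 24-drive gigapack enclosures.
DRIVES_PER_ENCLOSURE = 24
enclosures_per_rack = {"A": 10, "B": 14, "C": 20}

drives_per_rack = {
    rack: count * DRIVES_PER_ENCLOSURE
    for rack, count in enclosures_per_rack.items()
}

cumulative = 0
totals = {}
for rack in ("A", "B", "C"):
    cumulative += drives_per_rack[rack]
    totals[rack] = cumulative

print(drives_per_rack)  # {'A': 240, 'B': 336, 'C': 480}
print(totals)           # {'A': 240, 'B': 576, 'C': 1056}
```

This reproduces the 240, 576, and 1056 disk totals shown on the chart.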

© Copyright IBM Corporation 2011


DS8700 and DS8800 comparison summary (1 of 2)
IBM System Storage DS8000 models at a glance

Models                                DS8700 (941, 94E)             DS8800 (951, 95E)
Shared SMP processor configuration    POWER6 dual 2-way or 4-way    POWER6+ dual 2-way or 4-way
Other major processors                PowerPC, ASICs                PowerPC, ASICs
Processor memory for cache
and NVS (min/max)                     32 GB/384 GB                  32 GB/384 GB
Host adapter interfaces               Four-port 4 Gbps FC/FICON     Four- and eight-port 8 Gbps FC/FICON
Host adapters (min/max)               2/32                          2/16
Host ports (min/max)                  8/128                         8/128
Drive interface                       FC Arbitrated Loop (FC-AL)    6 Gbps SAS-2
Number of disk drives (min/max)       8/1024                        16/1056
Device adapters                       Up to 16 four-port FC-AL      Up to 16 four-port FC-AL
Maximum physical storage capacity*    1024 TB                       634 TB
Disk sizes**                          73 GB SSDs                    300 GB SSDs
                                      146 GB SSDs                   146 GB (15,000 rpm)
                                      600 GB SSDs                   450 GB (10,000 rpm)
                                      146 GB (15,000 rpm)           600 GB (10,000 rpm)
                                      300 GB (15,000 rpm)
                                      450 GB (15,000 rpm)
                                      600 GB (15,000 rpm)
                                      2 TB (7,200 rpm)
RAID levels                           5, 6, 10                      5, 6, 10
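As a check on the capacity row, the DS8800 maximum can be derived by assuming all drive slots hold the largest HDD listed in the table (a sketch under that assumption, not an IBM sizing rule):

```python
# Illustrative sketch: deriving the DS8800 "634 TB" maximum physical
# capacity figure from the table values above (1056 drives x 600 GB).
max_drives = 1056
largest_hdd_gb = 600          # largest DS8800 hard drive listed

capacity_tb = max_drives * largest_hdd_gb / 1000  # decimal TB
print(f"{capacity_tb:.1f} TB")  # 633.6 TB, quoted as 634 TB
```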

© Copyright IBM Corporation 2011


DS8700 and DS8800 comparison summary (2 of 2)
Models                                DS8700 (941, 94E)                    DS8800 (951, 95E)
Dimensions (height x width x depth)   193 x 84.7 x 118.3 cm per frame,     193.3 x 84.5 x 122.6 cm per frame,
                                      up to 5 frames total                 up to 3 frames total
Maximum weight                        1307 kg (2880 lb) base rack;         1324 kg (2920 lb) base rack;
                                      add 1089 kg (2400 lb) per            add 1307 kg (2880 lb) per
                                      expansion frame                      expansion frame
Dry bulb temperature                  16 - 32°C (60 - 90°F)                16 - 32°C (60 - 90°F)
Relative humidity                     20 - 80 percent                      20 - 80 percent
Power supply                          Single-phase (some configurations)   Single-phase (some configurations)
                                      or three-phase, 50/60 Hz             or three-phase, 50/60 Hz
Caloric value, BTU/hr (min/max)       26,600 (941 rack)                    26,000 (951 rack)
                                      22,200 (94E rack)                    21,000 (95E rack)
Electrical power, kVA (min/max)       7.8 (941 rack)                       7.6 (951 rack)
                                      6.5 (94E rack)                       6.2 (95E rack)
Warranty                              Four years on type 2424 models       Four years on type 2424 models
                                      Three years on type 2423 models      Three years on type 2423 models
                                      Two years on type 2422 models        Two years on type 2422 models
                                      One year on type 2421 models         One year on type 2421 models
Supported systems                     For more details on supported servers, visit ibm.com/systems/uk/storage/disk

© Copyright IBM Corporation 2011


Unit summary
Having completed this unit, you should be able to:
• Discuss the hardware and architecture of the DS8000
• Use virtualization terminology describing the configuration of
the DS8000 subsystem
• Describe the physical hardware components and resources
• Describe the models and features provided by each model
• Describe the types of disk arrays that can be configured for
a DS8000 subsystem
• Describe the differences between the DS8100/DS8300 and
the DS8700
• Describe the differences between the DS8700 and the
DS8800

© Copyright IBM Corporation 2011
