
Oracle ZFS Storage ZS9-2 Appliance

TOI for Services and Support

Chris Wells
Senior Principal Hardware Engineer
Storage, Virtualization, and Operating Systems (SVOS)
September 1, 2021

1 Copyright © 2021, Oracle and/or its affiliates | Confidential: Internal September 1, 2021
Safe harbor statement

The following is intended to outline our general product direction. It is intended for information
purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any
material, code, or functionality, and should not be relied upon in making purchasing decisions. The
development, release, timing, and pricing of any features or functionality described for Oracle’s
products may change and remains at the sole discretion of Oracle Corporation.

Oracle ZFS Storage ZS9-2 – TOI Topics

1. Overview
2. Architecture
3. Chassis Components
4. Network Adapters
5. Configuration Options
6. Service Considerations
7. ZS9-2 Racked Systems
8. Reference Materials

Oracle ZFS Storage ZS9-2 –
Overview

Oracle ZFS Storage ZS9-2 – Unified Controller Solution

ZS7-2 – 2U Controller ZS9-2 – 2U Controller

Continues the unified controller architecture strategy


• Single platform architecture to satisfy broad range of applications
• Mid-Range and High-End configurations
• Plus hybrid configurations tailored for Engineered Systems
• Memory and CPU configuration options for price vs performance

Oracle ZFS Storage ZS9-2 – Based on Oracle Server X9-2L

Compute
• Intel Whitley Platform
• 2x Ice Lake Xeon-SP CPUs
• Up to 32x DDR4-3200 MT/s RDIMMs
• Up to 2.0 TB with 64 GB DIMMs

Availability
• 2x hot-swappable 1200W PSUs
• 4x dual counter-rotating fan modules
• 20-second replacement time limit

I/O
• 10x PCIe Gen4 slots (4 x16, 6 x8)
• Two x16 slots electrically x8
• 1x 1000BASE-T Ethernet port (NET0)
• 12x 3.5” drive bays – NVMe
• Top four require re-timer card

Management
• Pilot-4 Service Processor
• ILOM 5.0
• Solaris 11.4-based Appliance Kit OS
• OS8.8.x
Platform Comparison – ZS7-2 to ZS9-2

                  ZS7-2 Mid-Range        ZS9-2 Mid-Range        ZS7-2 High-End         ZS9-2 High-End
CPU               36 cores (Sky Lake)    48 cores (Ice Lake)    48 cores (Sky Lake)    64 cores (Ice Lake)
                  18C, 2.3GHz, 140W      24C, 2.1GHz, 165W      24C, 2.1GHz, 150W      32C, 2.6GHz, 250W
Memory            512 GB / 1 TB          512 GB / 1 TB          1.5 TB                 2.0 TB
                  DDR4-2667 MT/s         DDR4-3200 MT/s         DDR4-2667 MT/s         DDR4-3200 MT/s
Aggregate Mem BW  170.7 GB/s             204.8 GB/s             256 GB/s               409.6 GB/s
System Disk       2x 14TB SAS3 HDD       2x 3.84TB NVMe SSD     2x 14TB SAS3 HDD       2x 3.84TB NVMe SSD
Form Factor       4RU HA / 2RU each      4RU HA / 2RU each      4RU HA / 2RU each      4RU HA / 2RU each
Clustering        ClustronV3 – CIO       10GBASE-T – LIO        ClustronV3 – CIO       10GBASE-T – LIO
Integrated Ports  1x 1000BASE-T          1x 1000BASE-T +        1x 1000BASE-T          1x 1000BASE-T +
                                         2x 10GBASE-T                                  2x 10GBASE-T
PCIe Expansion    5 slots – 4 x8, 1 x16  5 slots – 3 x8, 2 x16  5 slots – 4 x8, 1 x16  5 slots – 3 x8, 2 x16
                  Gen3                   Gen4                   Gen3                   Gen4
Available IO BW   56.9 GB/s              94.9 GB/s (109.1)      71.1 GB/s              109.1 GB/s (123.3)
SAS-3 HBAs        Two 4x4 port ext       Two 4x4 port ext       Four 4x4 port ext      Four 4x4 port ext
SAS Cable Media   Optical (DE3) /        Optical (DE3) /        Optical (DE3) /        Optical (DE3) /
                  Copper (DE2)           Copper (DE2)           Copper (DE2)           Copper (DE2)
Max Enclosures    24                     24                     48                     48
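The Aggregate Mem BW row follows directly from transfer rate and populated channel count (the DDR4 data bus is 8 bytes wide). A quick sanity check; the helper below is my own, and the populated-channel counts per CPU are inferred from the memory configurations:

```python
def mem_bw_gbps(mt_per_s, channels_per_cpu, cpus=2):
    # DDR4 data bus is 8 bytes wide: GB/s = MT/s * 8 / 1000 per channel
    return round(mt_per_s * 8 / 1000 * channels_per_cpu * cpus, 1)

print(mem_bw_gbps(3200, 8))  # ZS9-2 HE, all 8 channels per CPU -> 409.6
print(mem_bw_gbps(3200, 4))  # ZS9-2 MR, 4 populated channels per CPU -> 204.8
print(mem_bw_gbps(2667, 6))  # ZS7-2 HE, 6 Sky Lake channels per CPU -> 256.0
```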

IO adapter options – ZS7-2 to ZS9-2

                 ZS7-2                                ZS9-2
10GBASE-T        Intel Fort Pond – 4x10GBASE-T (v1)   Intel Fort Pond – 4x10GBASE-T (v2)
10GbE (SFP+)     Intel Fortville – 4x10 mode          nVidia CX5 Dual Port SFP28
                 Broadcom Whitney+ Dual Port SFP28    Broadcom Whitney+ Dual Port SFP28
                 (with SFP+ transceiver)              (with SFP+ transceiver)
25GbE (SFP28)    Broadcom Whitney+ Dual Port SFP28    nVidia CX5 Dual Port SFP28
                                                      Broadcom Whitney+ Dual Port SFP28
                                                      (with SFP28 transceiver)
32Gb FC (SFP28)  Marvell Dual Port 32Gb SFP28 FC      Marvell Dual Port 32Gb SFP28 FC
40GbE (QSFP+)    Intel Fortville – 2x40 mode (v1)     Intel Fortville – 2x40 mode (v2)
100GbE (QSFP28)  nVidia CX5 Dual Port QSFP28          nVidia CX5 Dual Port QSFP28
InfiniBand       nVidia CX3 Dual Port QDR IB          No InfiniBand support

New storage devices – DE3-24C and DE3-24P

                Current                    Releasing with ZS9-2
LFF SAS3 HDD    WDC Leo-B 14TB             WDC Paris-C 18TB
(DE3-24C)
SFF SAS3 HDD    Seagate SkyBolt 10k 1.2TB  Seagate SkyBolt 10k 1.2TB
(DE3-24P)                                  (minimum FW ORAA)
SAS3 SSDs       Samsung PM1643a            Samsung PM1643a
(DE3-24C/24P)   200GB, 7.68TB              200GB, 7.68TB

A new spare PN has been released for SkyBolt for the ZS9-2 application
• Enclosures with SkyBolt drives migrated from a legacy system must be updated to FW ORAA before connection to a ZS9-2 system
ZS9-2 continues to support all legacy devices supported in DE3 enclosures
• Support for DE2 enclosures is limited to those devices also supported in DE3

Supported storage devices – DE2 enclosures

Device Supported with ZS9-2


LFF HDD H7280A520SUN8.0T - Aries He
H7280B520SUN8.0T - Libra
H7280B524SUN8.0T - Libra SE (EAEU)
SFF HDD H101860SFSUN600G - Cobra F
H101812SFSUN1.2T - Cobra F
H1018124FSUN1.2T - Cobra F SE (EAEU)
Log Cache HSCAC2DA6SUN200G - SunsetCove+
HBCAC2DH6SUN200G - BearCove
HBSAC2DH6SUN200G - BearCove SE (EAEU)
HPCAC2DH6ORA200G - BearCove+
HPSAC2DH6ORA200G - BearCove+ SE (EAEU)
Read Cache HSCAC2DA2SUN1.6T - SunsetCove+
HBCAC2DH2SUN3.2T - BearCove
HBSAC2DH2SUN3.2T - BearCove SE (EAEU)

Oracle ZFS Storage ZS9-2 –
Architecture

ZS9-2 controller architecture – Oracle Server X9-2L Block Diagram

Eight memory channels per CPU

NVMe bays direct connect to CPUs
• PCIe Gen4

48 lanes PCIe Gen4 per CPU for rear slots
• Eight x8, two x16 total

Internal SAS HBA, PCIe re-timer, and M.2

SATA devices not applicable in ZS9-2 controller

PCIe slots 1 and 2 have x16 physical socket connectors to accommodate x16 adapters
• Wired electrically for x8 interface
ZS9-2 controller architecture – chassis front

Similar front configuration to ZS7-2 controllers


• System drives in drive bays 0 and 1
• System drives are now NVMe SSDs; previous generations used SAS
• Intel 3.84TB NVMe Gen4 in Coral-D adapter bracket

ZS9-2 controller architecture – chassis rear (HE shown; fillers in MR)

10 total PCIe slots compared to 11 in ZS7-2


• Central slot between 5 and 6 reserved for RoT in OCI applications
• Lose one slot but gain one back with NVMe system drives
Four slots reserved for 4x4 port external SAS-3 HBAs (Thebe3)
• HBAs installed in slots 4, 5, 8, and 9 for High-End (HE); slots 4 and 9 for Mid-Range (MR)
• Slots 5 and 8 empty / reserved in MR configuration
Slot 6 reserved for Quad Port 10GBASE-T (Fort Pond) NIC used for clustering
• Ports 0 and 1 (bottom ports) used for cluster connections
• Ports 2 and 3 (top ports) available for customer network IO

Oracle ZFS Storage ZS9-2 –
Chassis Components

ZS9-2 FRUs and CRUs

Hot Serviceable CRUs
• Power Supply Units
• Fan modules (remove top cover)
• 20-second time limit to replace a fan module

Cold Serviceable CRUs
• PCIe adapters
• Memory DIMMs
• Coin cell battery – CR2032

Cold Serviceable FRUs
• CPUs / heat sinks
• Motherboard
• Disk backplane
• Internal NVMe cable assemblies
• Front indicator module (FIM)
• Temperature sensor

Hot Serviceable FRU
• NVMe system drives
• Current appliance kit software does not support surprise removal of NVMe devices; service must be engaged to safely manage offlining and replacement of the device without inducing a system panic

ZS9-2 chassis mechanical features

Top cover release lever

PCIe bracket retainer
• Torx T15 to lock / unlock

ZS9-2 chassis internals
Call Out Description
1 Two system disks, 10 fillers
2 Disk backplane
3 System chassis
4 FIM and temp sensor
5 Fan modules
6 Fan tray
7 Motherboard assembly
8 Processors and heatsinks
9 Top cover
10 PCIe cards
11 System battery
12 Power supplies
13 Air baffle
14 DIMMs
ZS9-2 chassis component locations

ZS9-2 LEDs and switches – Front
Call Out Description
2 Locate Button/LED (white)
3 Fault / Service Required (amber)
4 System OK (green)
5 On/Standby Button
6 Top Fan Fault (amber)
7 Rear PS Fault (amber)
8 Over-temp Fault (amber)
9 SP OK (green)
10 “Do Not Service” (white) – not used in appliance application
11 Ready to Remove (blue)
12 Fault / Service Required (amber)
13 Link / Activity (green)
ZS9-2 LEDs and switches – Rear


Call Out Description


1 PS Status: Top: Fault=amber, Bottom: AC OK=green
2 NET MGT status: Left: Link/activity=green, Right: Speed: green=1000M, off=100M/10M
3 NET0 status: Left: Link/activity=green, Right: Speed: green=1000M, off=100M/10M
4 System Status LEDs: Locate Button/LED: white; Fault-Service Req’d: amber; Power OK: green
5 Pinhole switch: SP reset
6 Serial Management Port: ZS9-2 configured 9600/8N1; 115200/8N1 factory reset

Oracle ZFS Storage ZS9-2 –
Network Adapters

ZS9-2 network adapters – 10GBASE-T

Intel “Fort Pond” Quad Port 10GBASE-T Ethernet


• RJ45 ports for CAT6A twisted pair Ethernet cables
• PCIe Gen3 x8
• New part number versus the Fort Pond card used in ZS7-2
• v2 card includes minimum FW version compatible
with X9-based platforms
• 800090AF
• One card fixed in ZS9-2 base configuration
• Two ports used for clustering, two ports available for IO
• Max quantity six including the one in base

ZS9-2 network adapters – 40Gb Ethernet

Intel “Fortville” Dual Port 40Gb Ethernet


• Two QSFP+ ports for Twin-ax copper or transceivers
• PCIe Gen3 x8
• New part number versus the Fortville card used in ZS7-2
• Only 2x40 mode supported in ZS9-2
• Specific PNs for card preconfigured 2x40
• 4x10 mode not supported in ZS9-2
• Port mode changes not supported in the field
• v2 card includes minimum FW version compatible
with X9-based platforms
• 80009173
• Max quantity five

ZS9-2 network adapters – 10/25Gb Ethernet

nVidia Dual Port SFP28 – “CX5-25G”
• Two SFP28 ports for Twin-ax copper or transceivers
• PCIe Gen3 x8
• Supports mixed modes 10G / 25G
• Supports dual-rate 10/25 transceivers
• Max quantity five
• Current supply constraints / long lead times

Broadcom Dual Port SFP28 – “Whitney+”
• Two SFP28 ports for Twin-ax copper or transceivers
• PCIe Gen3 x8
• Does not support mixed modes – both ports either 10G or 25G
• Does not support dual-rate transceivers
• Max quantity five
• Temporary option for ZS9-2 to mitigate supply constraints for CX5-25G
ZS9-2 network adapters – 100Gb Ethernet

nVidia Dual Port QSFP28 – “CX5-100G”


• Two QSFP28 ports for Twin-Ax or transceivers
• PCIe Gen4 x16
• Same card as supported on ZS7-2
• Max quantity two – only two true x16 slots
• May qualify up to four as a sustaining activity
• Two more physical x16 slots with x8 bus for higher
100G port count with diminished throughput

ZS9-2 network adapters – Fibre Channel

Marvell (Qlogic) Dual Port 32Gb FC


• Gen 6 FC
• Two SFP28 ports supporting 32/16/8 Gb FC
• PCIe Gen3 x8
• Same card as supported on ZS7-2
• Max quantity five

Oracle ZFS Storage ZS9-2 –
Configuration Options

ZS9-2 processor and memory configurations
ZS9-2 Mid-Range
• CPU config: 2x G5318Y, 24C, 2.1GHz, 165W
• Memory 512 GB: 8x 64GB RDIMM – P0/P1: D2,D6,D9,D13 (blk)
• Memory 1024 GB: 16x 64GB RDIMM – P0/P1: D2,D6,D9,D13 (blk); D3,D7,D8,D12 (wht)

ZS9-2 High-End
• CPU config: 2x P8358, 32C, 2.6GHz, 250W
• Memory 2048 GB: 32x 64GB RDIMM (all sockets) – P0/P1: D0,D2,D4,D6,D9,D11,D13,D15 (blk); D1,D3,D5,D7,D8,D10,D12,D14 (wht)

ZS9-2 PCA 3.0
• CPU config: 2x G5318Y, 24C, 2.1GHz, 165W
• Memory 1024 GB: 16x 64GB RDIMM (all black sockets) – P0/P1: D0,D2,D4,D6,D9,D11,D13,D15 (blk)

MR: half of the channels unpopulated
• D0/D1, D4/D5, D10/D11, D14/D15
• 512 GB: black sockets only
• 1024 GB: black and white sockets

HE: 2.0 TB – all sockets populated

PCA: 1.0 TB – all black sockets populated

ZS9-2 PCIe configuration – Mid-Range

Slot usage (rear view):
• Slots 4 and 9: Thebe3 external SAS-3 HBA (base)
• Slot 6: Fort Pond cluster NIC (base); slots 5 and 8: unused (fillers)
• Slot 7: x8; slots 1 and 2: x16 physical / x8 electrical; slots 3 and 10: x16

Base: Thebe3 external HBAs in slots 4 and 9; Fort Pond in slot 6; slots 5 and 8 unused
Install x16 adapters: slot 10, then 3 (then 2, then 1 pending post-RR qualification)
Install x8 adapters: slot 7, then 1, then 2, then 10, then 3
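The install-priority rules above can be sketched as a small helper (my own illustration, not an Oracle tool): true x16 adapters take slots 10 then 3, and x8 adapters take 7, 1, 2, then any x16-capable slot still free.

```python
X16_ORDER = [10, 3]          # true x16 slots, in install priority
X8_ORDER = [7, 1, 2, 10, 3]  # x8 install priority

def assign_slots(n_x16, n_x8):
    """Return (x16_slots, x8_slots) for a ZS9-2 MR order, or raise."""
    if n_x16 > len(X16_ORDER):
        raise ValueError("only slots 10 and 3 are electrically x16")
    x16 = X16_ORDER[:n_x16]
    free = [s for s in X8_ORDER if s not in x16]  # skip slots taken by x16 cards
    if n_x8 > len(free):
        raise ValueError("not enough open slots for x8 adapters")
    return x16, free[:n_x8]

print(assign_slots(1, 3))  # ([10], [7, 1, 2])
```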

ZS9-2 PCIe configuration – High-End

Slot usage (rear view):
• Slots 4, 5, 8, and 9: Thebe3 external SAS-3 HBA (base)
• Slot 6: Fort Pond cluster NIC (base)
• Slot 7: x8; slots 1 and 2: x16 physical / x8 electrical; slots 3 and 10: x16

Base: Thebe3 external HBAs in slots 4, 5, 8, and 9; Fort Pond in slot 6
Install x16 adapters: slot 10, then 3 (then 2, then 1 pending post-RR qualification)
Install x8 adapters: slot 7, then 1, then 2, then 10, then 3

ZS9-2 PCIe configuration – PCA 3.0

Fixed configuration:
• Thebe3 external HBAs in slots 4, 5, 8, and 9; Fort Pond in slot 6
• CX5-100G in slots 3 and 10; CX5-25G in slot 7
• Slots 1 and 2 unpopulated
ZS9-2 PCIe option card installation priority
PCIe adapter options are installed according to their priority
• Assures consistent population from the factory for identically configured systems

• Required – Microchip Thebe-3 SAS-3 HBA (Gen3 x8, pmcs driver, max qty 2/4): external 16-lane SAS-3 HBA; 2x fixed in mid-range config, 4x fixed in high-end config
• Required – Intel Fort Pond 4x10GBASE-T (LIO) (Gen3 x8, i40e, max qty 1): LIO cluster interface adapter, 2x 10GBASE-T; additional ports available for client IO
• Code A, 1st – nVidia CX5 Dual Port 100Gb Ethernet (Gen4 x16, mlxne, max qty 2*): dual 100Gb Ethernet QSFP28
• Code B, 2nd – nVidia CX5 Dual Port 10/25GbE (mlxne) or Broadcom Whitney+ Dual Port 10/25GbE (bnxt) (Gen3 x8, max qty 5): dual 25Gb Ethernet SFP28, or 10GbE using SFP+; Whitney+ support at RR is temporary to mitigate CX5-25G supply constraints
• Code C, 3rd – Intel Spirit Falls 2x40Gb Ethernet, aka “Fortville” (Gen3 x8, i40e, max qty 5): dual 40Gb Ethernet QSFP+
• Code D, 4th – Marvell Narvi 32Gb FC HBA (Gen3 x8, qlt, max qty 5): dual 32Gb FC adapter, 32Gb FC SR optics
• Code E, 5th – Intel Fort Pond 4x10GBASE-T (Gen3 x8, i40e, max qty 5, +1 LIO): quad 10GBASE-T

* CX5-100G remains at max qty 2 pending future qualification of 4

ZS9-2 backend storage configuration
Backend storage continues to employ DE3-24P and DE3-24C as in previous generations
• The following storage device configurations are supported in new ZS9-2 orders

Enclosure Enclosure Configuration


DE3-24C (disk-only) 24 x 3.5” 7,200 RPM SAS-3 WDC Paris-C 18 TB
DE3-24C (LZ/RZ-capable) 20 x 3.5” 7,200 RPM SAS-3 WDC Paris-C 18 TB
DE3-24P (disk-only) 24 x 2.5” 10k RPM SAS-3 Seagate SkyBolt 1.2 TB
DE3-24P (LZ/RZ-capable) 20 x 2.5” 10k RPM SAS-3 Seagate SkyBolt 1.2 TB
DE3-24P (SSD-only) 24 x 2.5” SAS-3 SSD Samsung PM1643a 7.68 TB
DE3-24P (LZ-capable) 20 x 2.5” SAS-3 SSD Samsung PM1643a 7.68 TB

ZS9-2 backend storage cabling

Cabling between HBA and DE3 enclosures with SFF-8644 Active Optical Cables (AOC) only
• AOC cables solve cable management challenges within 2RU envelope
• Only DE3 enclosures support AOC cables, DE2 must use copper
Daisy chain cables between DE3 enclosures remain passive copper
• One run of 6m or 20m AOC allowed to span racks for DE3 to DE3 connection within a chain
Cabling to DE2 from the HBA uses the same mini-SAS to mini-SAS HD copper cables
• mini-SAS = SFF-8088, mini-SAS HD = SFF-8644

Cable types: AOC cable (HBA to DE3); copper cable (DE3 to DE3); copper cable (HBA to DE2)

ZS9-2 storage cabling configuration example

Same cabling rules and guidelines as introduced with DE3 enclosures on ZS5 series
• Only AOC cables between HBA and DE3 IOM ports for ZS7-2 and now ZS9-2
• Maximum chain depth six enclosures
• Mid-Range supports up to four chains for maximum 24 enclosures
• High-End supports up to eight chains for maximum 48 enclosures
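The enclosure maximums are simply chains times chain depth; a trivial check (constants from the rules above, helper name is my own):

```python
MAX_CHAIN_DEPTH = 6  # enclosures per chain

CHAINS = {"Mid-Range": 4, "High-End": 8}  # supported chains per model

def max_enclosures(model):
    return CHAINS[model] * MAX_CHAIN_DEPTH

print(max_enclosures("Mid-Range"), max_enclosures("High-End"))  # 24 48
```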

ZS9-2 cluster cabling

ZS9-2 introduces direct Ethernet-based clustering – LIO


• Departure from serial UART clustering using Clustron cards – CIO
• Direct point to point 10 Gb connections using lower two ports (0 and 1) of Fort Pond NIC in slot 6
• Remaining ports (2 and 3) available for client IO interfaces (10GBASE-T)

ZS9-2 cluster configuration

AK software interfaces around cluster configuration have changed with the new LIO clustering
scheme for ZS9-2
• Legacy systems with Clustron CIO clustering remain the same

Legacy (Clustron CIO):
hostname:configuration cluster> links
clustron3_ng3:0/clustron_uart:0 = AKCIOS_ACTIVE
clustron3_ng3:0/clustron_uart:1 = AKCIOS_ACTIVE
clustron3_ng3:0/dlpi:0 = AKCIOS_ACTIVE

ZS9-2 (LIO):
hostname:configuration cluster> links
lio_dev/i40e0 = AKCIOS_ACTIVE
lio_dev/i40e1 = AKCIOS_ACTIVE
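A minimal sketch (my own, not an AK tool) of checking the LIO `links` output shown above, flagging any link not reporting AKCIOS_ACTIVE:

```python
SAMPLE = """lio_dev/i40e0 = AKCIOS_ACTIVE
lio_dev/i40e1 = AKCIOS_ACTIVE"""

def inactive_links(text):
    # Each line has the form "<link> = <state>"; collect links not ACTIVE
    bad = []
    for line in text.strip().splitlines():
        name, _, state = line.partition(" = ")
        if state.strip() != "AKCIOS_ACTIVE":
            bad.append(name.strip())
    return bad

print(inactive_links(SAMPLE))  # []
```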

Oracle ZFS Storage ZS9-2 –
Service Considerations

ZS9-2 – tools required for service

Antistatic wrist strap


Antistatic mat
No. 2 Phillips screwdriver
• FIM / Temp sensor: module and cable access cover
Torx (6 lobe) T15, T25, and T30 drivers
• T15 – top cover lever latch, disk backplane
• T25 – fan tray, backplane bracket, motherboard mid-wall
• T30 – processor-heatsink module
Medium flat-blade screwdriver
• Separate processor carrier from heatsink
In-lb (inch-pounds) torque driver
• 8.0 in-lb torque for processor-heatsink modules to motherboard sockets
ESD gloves recommended for processor replacement – do not use latex or vinyl

ZS9-2 – NVMe system drive replacement

ZS9-2 utilizes NVMe SSDs for system / boot drive purposes


• At the time of the ZS9-2 product launch, the AK software is vulnerable to panic on surprise removal of an NVMe device; system drives are therefore treated as FRU-only until a future AK release provides robust handling of surprise NVMe device removal
• Customer documentation has always said not to remove a system disk if the blue Ready to Remove LED is not illuminated, although systems with SAS boot drives can often tolerate a surprise removal
• An additional caution has been added to the service manual reiterating not to remove an NVMe system device if the blue LED is not illuminated / the device is not ready to remove

ZS9-2 – NVMe system drive replacement

At the time of ZS9-2 launch, the AK software cannot automatically transition the NVMe devices into
the ready to remove state
• Support shall be engaged in any ZS9-2 NVMe SSD replacement activity until AK software can
provide more robust handling
• A hidden appliance kit workflow has been integrated into the software to facilitate safe removal of a ZS9-2 system drive. An internal KM article details the process
• Oracle ZFS Storage Appliance ZS9-2 Replacing the NVME System Boot Drives (Doc ID 2784034.1)
• Remove Drive operation safely offlines the device and performs hot-plug power off to achieve ready-to-remove
• Get Slot Status indicates fault and ready-to-remove status
• Insert Drive will explicitly re-enable power and online the device
• System will normally take care of this automatically
• Use only in case slot status does not indicate ENABLED
This procedure is intended only for FE on-site
• Not for direct customer use
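The insert-side decision above can be sketched as follows (the function and the example status strings are my own illustration, not the workflow's actual interface): the system normally re-enables a newly inserted drive automatically, so the Insert Drive workflow is run only when slot status does not indicate ENABLED.

```python
def needs_insert_drive(slot_status):
    # Run the Insert Drive workflow only if the slot did not auto-enable
    return slot_status.upper() != "ENABLED"

print(needs_insert_drive("ENABLED"))      # False: system handled it
print(needs_insert_drive("powered_off"))  # True: explicitly re-enable/online
```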

ZS9-2 – LIO alerts vs hardware faults

Migration of the clustering protocol from Clustron to LIO improves diagnosability of hardware faults related to clustering
• Standard FMA faults for standard NIC device
• PCIEX-8000-xx, SPINTEL-8000-xx
• Standard PCIe adapter faults diagnosed by AK (PCIEX) or ILOM (SPINTEL) aligned with hardware fault conditions
• Possibly faulty hardware or transient/physical events affecting bus integrity
• Faults indict specific adapter card or multiple suspects CPU / MB / card
• NIC-8000-xx
• Standard NIC adapter faults diagnosed by AK / driver
• Rarely the result of faulty hardware, but can occur during firmware update activities; acquit the fault and cycle power
• Familiar clustering alerts – related to cluster / link state
• AK-8000-xx events
• Often transient alerts coinciding with clustering events like takeover / failback or controller reset/reboot
• External Ethernet link integrity: physical cable or port conditions
• Clustering alerts do not indict PCIe components as suspect

ZS9-2 – Memory DIMM faults and map-out

X9-2/L platform introduces more stringent Intel memory configuration rules


• Intel supports a smaller set of memory configurations to preserve optimal performance while
reducing scope of qualified permutations
• As a result of these changes, a single DIMM failure can result in multiple DIMMs being mapped out, or “fenced,” by the system
Intel-supported DIMM quantities per CPU socket: 1, 2, 4, 6, 8, 12, 16
• ZS9-2 examples (quantities are per CPU socket; the configuration of the remaining socket is unaffected)
• In the ZS9-2 HE 16-DIMM configuration, one DIMM failure fences the suspect DIMM, its channel partner, and the partner channel, reducing active DIMMs to 12
• In the ZS9-2 MR 8-DIMM configuration, a DIMM failure reduces active DIMMs to 4
• In the ZS9-2 MR 4-DIMM configuration, a DIMM failure reduces active DIMMs to 2
Devices are not mapped out until a system reset subsequent to fault diagnosis
• For a correctable-error threshold fault, all DIMMs remain active until the next reset or a fatal / uncorrectable error
The actual suspect DIMM is identified in AK / ILOM fault telemetry and is also indicated correctly by the fault-remind indicator
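The capacity impact of these map-outs is simple arithmetic with 64 GB DIMMs; a sketch (the helper is my own, with per-socket post-failure quantities taken from the examples above):

```python
DIMM_GB = 64
MAPOUT = {16: 12, 8: 4, 4: 2}  # per-socket active DIMMs after one failure

def gb_after_failure(dimms_per_socket, sockets=2):
    healthy = dimms_per_socket * (sockets - 1)  # unaffected socket(s)
    return (healthy + MAPOUT[dimms_per_socket]) * DIMM_GB

print(gb_after_failure(16))  # HE 2048 GB system drops to 1792
```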

ZS9-2 – Memory DIMM faults and map-out

Initial diagnosis – single suspect DIMM faulted:

hostname01:> maintenance hardware select chassis-000 select memory show
Memorys:

LABEL STATE MANUFACTURER MODEL SERIAL
memory-000 DIMM 0/13 ok Samsung 65536MB DDR4 SDRAM DIMM 2102-23EC6C7B
memory-001 DIMM 0/12 ok Samsung 65536MB DDR4 SDRAM DIMM 2102-23EC6DB6
memory-002 DIMM 0/15 ok Samsung 65536MB DDR4 SDRAM DIMM 2102-23EC6E04
memory-003 DIMM 0/14 faulted Samsung 65536MB DDR4 SDRAM DIMM 2102-23EC6C33
memory-004 DIMM 0/9 ok Samsung 65536MB DDR4 SDRAM DIMM 2102-23EC6C81
memory-005 DIMM 0/8 ok Samsung 65536MB DDR4 SDRAM DIMM 2102-23EC6EED
memory-006 DIMM 0/11 ok Samsung 65536MB DDR4 SDRAM DIMM 2102-23EC6E05
memory-007 DIMM 0/10 ok Samsung 65536MB DDR4 SDRAM DIMM 2102-23EC6EEF
memory-008 DIMM 0/2 ok Samsung 65536MB DDR4 SDRAM DIMM 2102-23EC6D31
memory-009 DIMM 0/3 ok Samsung 65536MB DDR4 SDRAM DIMM 2102-23EC6E0B
memory-010 DIMM 0/0 ok Samsung 65536MB DDR4 SDRAM DIMM 2102-23EC6DCD
memory-011 DIMM 0/1 ok Samsung 65536MB DDR4 SDRAM DIMM 2102-23EC6E32
memory-012 DIMM 0/6 ok Samsung 65536MB DDR4 SDRAM DIMM 2102-23EC6E40
memory-013 DIMM 0/7 ok Samsung 65536MB DDR4 SDRAM DIMM 2102-23EC6DEB
memory-014 DIMM 0/4 ok Samsung 65536MB DDR4 SDRAM DIMM 2102-23EC6D6F
memory-015 DIMM 0/5 ok Samsung 65536MB DDR4 SDRAM DIMM 2102-23EC6C7E
……

After map-out – the suspect DIMM (0/14) plus fenced DIMMs (0/15, 0/0, 0/1) shown faulted:

hostname01:> maintenance hardware select chassis-000 select memory show
Memorys:

LABEL STATE MANUFACTURER MODEL SERIAL
memory-000 DIMM 0/13 ok Samsung 65536MB DDR4 SDRAM DIMM 2102-23EC6C7B
memory-001 DIMM 0/12 ok Samsung 65536MB DDR4 SDRAM DIMM 2102-23EC6DB6
memory-002 DIMM 0/15 faulted Samsung 65536MB DDR4 SDRAM DIMM 2102-23EC6E04
memory-003 DIMM 0/14 faulted Samsung 65536MB DDR4 SDRAM DIMM 2102-23EC6C33
memory-004 DIMM 0/9 ok Samsung 65536MB DDR4 SDRAM DIMM 2102-23EC6C81
memory-005 DIMM 0/8 ok Samsung 65536MB DDR4 SDRAM DIMM 2102-23EC6EED
memory-006 DIMM 0/11 ok Samsung 65536MB DDR4 SDRAM DIMM 2102-23EC6E05
memory-007 DIMM 0/10 ok Samsung 65536MB DDR4 SDRAM DIMM 2102-23EC6EEF
memory-008 DIMM 0/2 ok Samsung 65536MB DDR4 SDRAM DIMM 2102-23EC6D31
memory-009 DIMM 0/3 ok Samsung 65536MB DDR4 SDRAM DIMM 2102-23EC6E0B
memory-010 DIMM 0/0 faulted Samsung 65536MB DDR4 SDRAM DIMM 2102-23EC6DCD
memory-011 DIMM 0/1 faulted Samsung 65536MB DDR4 SDRAM DIMM 2102-23EC6E32
memory-012 DIMM 0/6 ok Samsung 65536MB DDR4 SDRAM DIMM 2102-23EC6E40
memory-013 DIMM 0/7 ok Samsung 65536MB DDR4 SDRAM DIMM 2102-23EC6DEB
memory-014 DIMM 0/4 ok Samsung 65536MB DDR4 SDRAM DIMM 2102-23EC6D6F
memory-015 DIMM 0/5 ok Samsung 65536MB DDR4 SDRAM DIMM 2102-23EC6C7E
……

ZS9-2 – ILOM configuration requirements

ZS9-2 relies on certain non-default ILOM settings to operate as designed


In the event of ILOM factory reset or motherboard replacement, the following must be configured
before returning the system to service
• Auto Power-On
• Automatically powers on host after service processor cold boot
• set /SP/policy HOST_AUTO_POWER_ON=enabled
• Password Policy
• ILOM enforces password policy for password changes over IPMI interfaces
• AK synchronization of root passwords between host and SP could fail if host password does not comply with
ILOM policy
• Set ILOM password policy to minimum:
• set /SP/preferences/password_policy policy=1.
• Note: the ‘.’ after ‘1’ is significant

ZS9-2 – ILOM configuration requirements

Unlike prior x86 server platforms, X9 series servers set default serial interface baud rate to
115200/8N1
To ensure compatibility with customer environments, as well as legible host console display, the
following serial interface settings are required for ZS9-2:
• HOST automatic baud rate
• Automatically negotiates baud rate for HOST console (/SP/console)
• set /SP/serial/host pendingautobaud=enabled
• set /SP/serial/host commitpending=true
• External serial port baud rate
• Sets baud rate for external serial port SER MGT
• set /SP/serial/external pendingspeed=9600
• set /SP/serial/external commitpending=true

Note: any service action following an ILOM factory reset or MB replacement initially requires 115200/8N1 serial capability, and 9600/8N1 after this change is committed
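Taken together with the previous slide, the post-reset checklist can be rendered as a short list of ILOM CLI commands; the Python helper below is my own, while the commands themselves come from these slides:

```python
REQUIRED = [
    ("/SP/policy", "HOST_AUTO_POWER_ON", "enabled"),
    ("/SP/preferences/password_policy", "policy", "1."),  # trailing '.' is significant
    ("/SP/serial/host", "pendingautobaud", "enabled"),
    ("/SP/serial/host", "commitpending", "true"),
    ("/SP/serial/external", "pendingspeed", "9600"),
    ("/SP/serial/external", "commitpending", "true"),
]

def render(required):
    # Produce the ILOM CLI 'set <target> <property>=<value>' lines
    return [f"set {target} {prop}={value}" for target, prop, value in required]

for cmd in render(REQUIRED):
    print(cmd)
```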

ZS9-2 – BIOS configuration requirements

In the event of BIOS factory reset or motherboard replacement, the following options must be
configured:
• Set Persistent Boot support to “Enabled”

• Disable UEFI Driver for all PCIe slots (IO/Add In Cards/Slot n)


• Except Slot 100 and Slot 101: NVMe boot devices must have UEFI driver enabled

ZS9-2 – other motherboard replacement considerations

Do not detach heatsinks from the processors


• Unless also replacing processors
Label PSUs before removing from chassis
• PS0 is a quorum device that includes system identity information
• FRUID records updated automatically if PS0 returned to the appropriate bay
• Otherwise, identity records must be reprogrammed using ILOM service mode
• Only a concern in the event of MB or DBP replacement
• Otherwise PSUs can be swapped at any time with no effect to system identity

Oracle ZFS Storage ZS9-2 –
Racked Systems

ZS9-2 – Racked Systems overview
1. Build ZFS Storage Appliance
2. Select System Type (High-End, Mid-Range)
3. Select Client Networking (100GbE, 40GbE, 10/25GbE, 32Gb FC, 10GBASE-T)
4. Select Disk Enclosure (type and quantity of DE3s for the Base Rack)
5. Base Rack Configuration Complete
6. Optional: Expansion Rack(s)

Base Rack DE3 options:
• All 24C, All 24P, or Mix 24C/24P
• Allow any combination of RZ/LZ, up to 4x SSDs
• 24P All Flash Pools supported
• ZS9-2RS HE max Base Rack config: 9x DE3-24Cs / 18x DE3-24Ps
• ZS9-2RS MR max Base Rack config: 9x DE3-24Cs / 18x DE3-24Ps

Expansion Rack options:
• DE3-24C only, with install options of 2, 4, 5, 6, 8, or 10 DE3-24Cs
ZS9-2 – Racked Systems overview

ZS9-2RS Overview
• Provides customer with pre-racked ZFS Storage Appliance configurations
• Addresses customer requirements for fully assembled/tested/integrated racked
storage appliance
• Modular building block capability allows for flexible configurations
• ZS9-2RS offers new & more flexible configuration options over previous Racked
Systems
• Utilizes the ZS9-2 (High-End) and ZS9-2 (Mid-Range) controllers
• Allows for optimum performance multi-Rack Storage Expansion
• Supports optional top-of-rack switches for 100GbE networking
• For Exadata X8M/X9M RMAN backup applications
• Hardware features based on ZS9-2 HE/MR definition and on-line
Configurator options
• Software features based on Appliance Kit OS8.8.36 + IDR
• ILOM: SW 1.2.0, 5.0.2.23-r141538

ZS9-2 – Racked Systems overview – backend

ZS9-2RS backend configurations


• Fixed number of SAS-3 (Thebe3) HBAs—minimizes complexity and targets
specific configurations
• 4x SAS-3 HBAs for ZS9-2RS HE
• 2x SAS-3 HBAs for ZS9-2RS MR
• SAS-3 Active Optical Cables (AOC)—allows for easier cabling and more flexible
configurations
• SAS-3 AOC Cable lengths supported: 3, 6, and 20 meters
• Required for all ZS9-2RS Controller to DE3 disk shelf attach
• Allows for easier cable routing through Cable Management Arm (CMA)
• Allows for attaching across two or more Racks
• Supports special case DE3 to DE3 attach
• 6m and 20m AOC cabling allowed for DE3 to DE3 attach across two or more Racks
• Only one pair of 6m or 20m AOC cables allowed within a chain

ZS9-2 – Racked Systems overview – enclosures

ZS9-2RS enclosure and chain configurations


• ZS9-2RS Base Rack install options—
• DE3-24C: 1, 2, 4, 6, 8, 9; DE3-24P: 1, 2, 4, 6, 8, 10, 12, 14, 16, 18
• Accommodates more chains in the base rack configurations
• ZS9-2RS HE allows for 2, 4, and 8 Chain DE3-24C Base Rack Configurations
• Meets highest performance needs for most popular customer configurations
• Build bottom to top Base Rack Configurations for ZS9-2RS HE (No Gaps in Base Rack)
• ZS9-2RS MR allows for 2, 3, and 4 Chain DE3-24C Base Rack Configurations
• Build bottom to top Base Rack Configurations for ZS9-2RS MR (Gaps allowed between DE3s for
easier Field Upgrades similar to Expansion Rack)
• Support any combination of flash accelerators in DE3 Disk Trays
• 0, 1, 2, 3, or 4x read cache or write log SSD options for 20x HDD Disk Trays
• No DE3-24P Expansion Rack support—AFP current & future needs are met
with DE3-24P Base Rack
• ZS9-2RS is not EMC compliant in Taiwan or the EAEU (Russia) and will not be shipped to those
regions

ZS9-2 – Racked Systems overview – PDUs
PDU Plug & Data Center Receptacle Selection for Racked Systems (Ref—PCD Section 3.11)
| MKTG P/N | TYPE | ATO P/N | PLUGS | RECEPTACLES | PLUG/REC SPEC (min) | CABLE SPEC (min) |
|---|---|---|---|---|---|---|
| 6442A | 15 kVA, Low Voltage, 1-Phase | 7056072 | Hubbell HBL2621; Marinco 306P | Hubbell HBL2623; Marinco 306C | NEMA L6-30; 30 A, 3-pin; 250 VAC, 1-phase | 10 AWG SOOW |
| 6440A | 15 kVA, Low Voltage, 3-Phase | 7056075 | Mennekes ME460P9W; Hubbell C460P9W; Walther Electric 269409; Leviton 460P9W | Mennekes ME460R9W; Hubbell C460R9W; Walther Electric 369409; Leviton 460R9W | IEC 309, IP67; 60 A, 4-pin; 250 VAC, 3-phase | 6 AWG SOOW |
| 6441A / 7600785 (Korea only) | 15 kVA, High Voltage, 3-Phase | 7056076 | Hubbell C530P6S; PC Electric GMBH 0259-6 | Hubbell C530C6S; PC Electric GMBH 2259-6 | IEC 309, IP44; 30/32 A, 5-pin; 200/346 @30A, 240/415 @30A, 220/380 @32A, 240/415 @32A; 3-phase | 10 AWG (9 AWG / 6 mm²) LAPP OLFLEX® POWER IX |
| 6443A / 7600788 (Korea only) | 22 kVA, High Voltage, 1-Phase | 7056073 | Hubbell C332P6S; Walther 231306; Mennekes 160 | Hubbell C332C6S; Walther 331306; Mennekes 122 | IEC 309, IP44; 32 A, 3-pin; 250 VAC, 1-phase | 10 AWG (9 AWG / 6 mm²) LAPP OLFLEX® POWER IX |

(Figure omitted: rack and PDU cable plugs exit the top of the rack; the PDU/receptacle cable extends to the customer data center main AC source.)
ZS9-2 – Racked Systems – HE base rack chain configurations
DE3-24C Base Rack Configurations (Multi-Chain DE3 Rack Options)—
[Rack elevation diagrams omitted; each color in the diagrams represents a different chain.]
• Base Rack: 1X DE3-24C EBOD; ZS9-2RS (HE) RACK, 4X THEBE3 (1 CHAIN); 4x Expansion Racks
• Base Rack: 9X DE3-24C EBODs; ZS9-2RS (HE) RACK, 4X THEBE3 (2 CHAIN); 3x Expansion Racks
• Base Rack: 9X DE3-24C EBODs; ZS9-2RS (HE) RACK, 4X THEBE3 (4 CHAIN); 2x Expansion Racks
• Base Rack: 9X DE3-24C EBODs; ZS9-2RS (HE) RACK, 4X THEBE3 (8 CHAIN); NO Expansion Rack
ZS9-2 – Racked Systems – HE base rack chain configurations
DE3-24P Base Rack Configurations (DE3-24P Only Options)—
[Rack elevation diagrams omitted; each color in the diagrams represents a different chain.]
• Base Rack: 1X DE3-24P EBOD; ZS9-2RS (HE) RACK, 4X THEBE3 (1 CHAIN); 4x Expansion Racks
• Base Rack: 8X DE3-24P EBODs; ZS9-2RS (HE) RACK, 4X THEBE3 (4 CHAIN); 2x Expansion Racks
• Base Rack: 8X DE3-24P EBODs; ZS9-2RS (HE) RACK, 4X THEBE3 (8 CHAIN); NO Expansion Rack
• Base Rack: 18X DE3-24P EBODs; ZS9-2RS (HE) RACK, 4X THEBE3 (8 CHAIN); NO Expansion Rack
ZS9-2 – Racked Systems – HE base rack chain configurations
DE3-24P / DE3-24C Base Rack Configurations (Mix DE3 & Full Rack Options)—
[Rack elevation diagrams omitted; each color in the diagrams represents a different chain.]
• Base Rack: 2X DE3-24C + 2X DE3-24P EBODs; ZS9-2RS (HE) RACK, 4X THEBE3 (2 CHAIN); 3x Expansion Racks
• Base Rack: 2X DE3-24C + 14X DE3-24P EBODs; ZS9-2RS (HE) RACK, 4X THEBE3 (8 CHAIN); NO Expansion Rack
• Base Rack: 4X DE3-24C + 10X DE3-24P EBODs; ZS9-2RS (HE) RACK, 4X THEBE3 (6 CHAIN); 1x Expansion Rack
• Base Rack: 6X DE3-24C + 6X DE3-24P EBODs; ZS9-2RS (HE) RACK, 4X THEBE3 (4 CHAIN); 2x Expansion Racks
ZS9-2 – Racked Systems – MR base rack chain configurations
DE3-24C Base Rack Configurations (Multi-Chain DE3 Rack Options)—
[Rack elevation diagrams omitted; each color in the diagrams represents a different chain.]
• Base Rack: 1X DE3-24C EBOD; ZS9-2RS (MR) RACK, 2X THEBE3 (1 CHAIN); 2x Expansion Racks
• Base Rack: 2X DE3-24C EBODs; ZS9-2RS (MR) RACK, 2X THEBE3 (2 CHAIN); 1x Expansion Rack
• Base Rack: 6X DE3-24C EBODs; ZS9-2RS (MR) RACK, 2X THEBE3 (3 CHAIN); 1x Expansion Rack
• Base Rack: 4X DE3-24C EBODs; ZS9-2RS (MR) RACK, 2X THEBE3 (4 CHAIN); NO Expansion Rack
ZS9-2 – Racked Systems – 100Gb switching option

ZS9-2 Racked Systems continue to support top-of-rack switching option


• Facilitates Exadata X8M/X9M RMAN backup applications
• Same Cisco 9336C-FX2 switches, installed in RU locations 41 and 42
• One or two CX5 100Gb NICs per controller
• One connection from each controller to each switch (4 total)
• In the 2x NIC configuration, the additional ports can be used as general-purpose ports on the customer network
• A field cabling upgrade for the two additional connections is supported, but no benefit is expected for RMAN workloads
• 25Gb connections to Exadata DB nodes via MPO to 4x LC optical splitter cables
• 5m copper splitter cables can be used if the systems are in close proximity
Configurator rules when adding top-of-rack switch option:
• Minimum one CX5-100Gb NIC per controller, maximum two
• Auto-add two 3m QSFP28 twin-ax copper cables per controller (whether 1 or 2 NICs)

100Gb connections to an Exadata leaf switch are supported only directly from the 100Gb NIC ports
• ZS9-2 can support connections to a maximum of two Exadata systems using this method
ZS9-2 – Racked Systems – 100Gb switching option – single NIC

ZS9-2 – Racked Systems – 100Gb switching option – dual NIC

ZS9-2 – Racked Systems – 100Gb switching option – dual NIC

Oracle ZFS Storage ZS9-2 –
Reference Materials

ZS9-2 – Reference Materials
Fishworks Engineering
• Includes all ZFSSA product PCDs including ZS9-2 and ZS9-2RS and other useful ZFSSA references
• https://confluence.oraclecorp.com/confluence/display/SVOS/ZFSSA+Fishworks+Engineering
ZFSSA Field Service Manuals
• Includes complete ZS product field service docs as well as MB replacement docs
• https://confluence.oraclecorp.com/confluence/display/SVOS/Field+Service+Manual
• ZS9-2 MB replacement procedure in Field Service Manual only. No separate document.
Oracle Power Calculators
• Power calculator tools for server and storage products – X9-2/L and ZS9-2 not yet posted
• https://www.oracle.com/it-infrastructure/power-calculators/
Oracle ZFS Storage Appliance documentation library OS8.8.x
• Doc updates with ZS9-2 coming soon. Stay tuned…
• https://docs.oracle.com/en/storage/zfs-storage/zfs-appliance/os8-8-x/
X9-2L Service Manual
• Not available at this writing; search the web for "Oracle X9-2L service manual"
Oracle Systems Handbook
• https://support.oracle.com/handbook_private/Systems/index.html
ZS9-2 – Reference Materials – KM Articles

Oracle ZFS Storage Appliance ZS9-2 - How to Replace the NVME System Boot Drives (Doc ID 2784034.1)
• Details how to use hidden NVMe drive replacement workflow
How To Replace An Oracle ZS9-2 Storage Appliance Motherboard Assembly (Doc ID 2784383.1)
• Boils down MB replacement procedure into a MOS note
Oracle ZFS Storage Appliance ZS9-2 Memory DIMM actionable FMA Events (Doc ID 2783967.1)
• No real content yet – will develop as needed
Oracle ZFS Storage Appliance ZS9-2 CPU actionable FMA Events (Doc ID 2783954.1)
• No real content yet – will develop as needed
Oracle ZFS Storage Appliance: How to Check the SP / ILOM / BIOS Revision Level (Doc ID 1174698.1)

New KM article coming for 100Gb ToR switch configuration
• Similar to: Set Up and Configure Exadata X8M Backup with ZFS Storage ZS7-2 (Doc ID 2635423.1)

ZS9-2 – Reference Materials

Datasheets and white papers have yet to be published… internet search engines are your friend

Q&A
christopher.wells@oracle.com

Thank you

