
EDSFF: It’s More

Than Just Storage


A Conversation with Very Opinionated Experts

SNIA SDC 2020

1 | ©2020 Storage Networking Industry Association. All Rights Reserved.


SNIA’s SFF TA TWG

▪ 70+ member companies


▪ 150+ publicly available active specifications, information documents, and
reference guides covering a wide range of topics:
▪ Cables & connectors
▪ Form factor sizes and housing dimensions
▪ Management interfaces
▪ Transceiver interfaces
▪ Electrical interfaces
▪ Enables technology vendors to procure compatible, multi-sourced products
and solutions
▪ Consider becoming a member today!



Overview:

▪ The industry needed a new form factor to meet projected workload demands on capacity, thermal loading, and electrical loading
▪ The SNIA Small Form Factor Technology Affiliate Technical Working Group
(SFF TA TWG) met this need with the EDSFF family of specifications
▪ The SFF TA TWG maintains this and many other families (including the specifications that define the 2.5” and 3.5” form factors we’re all used to)
▪ EDSFF contributors from System Makers, SSD Makers, and Connector Makers will share what they believe are the important “Things to Know”
▪ A co-chair from the SFF TA TWG will moderate



Very Opinionated Experts
Form Factor Guy: Anthony Constantine, Intel
SSD Guy: John Geldman, Kioxia
Connector & Cable Guy: David Herring, TE Connectivity
Server Guy: Bill Lynn, Dell Technologies
Moderator: Alex Haser, Molex



Anthony Constantine
Principal Engineer, Intel

Anthony Constantine is a Principal Engineer at Intel, where he focuses primarily on driving innovation in memory and storage from mobile to datacenter. He is active in the standards arena, contributing to Open NAND Flash Interface, EDSFF, SFF, PCI-SIG, NVMe, and JEDEC. Anthony has over 20 years of experience in the technology industry, with expertise in memory, physical interfaces, low power technologies, and form factors. He earned a BS in Electrical Engineering from UC Davis.



John Geldman
Director, SSD Industry Standards at KIOXIA

• Member, Board of Directors, NVM Express
• Currently an active contributor to the following standards organizations: NVM Express, INCITS T10, INCITS T13, JEDEC, OCP, PCI-SIG, SATA-IO, SNIA, IEEE SISWG
• In addition, John’s team members are also active in CXL, DMTF, and TCG
• Corporate leadership responsibility for standards for multi-billion dollar storage vendors since 2011
• Involved in storage standards since 1992, with an early introduction to standards including the transition from X3T9 to ATA, SCSI, PCMCIA, and CardBus
• An active FMS CAB member for at least 10 years



David Herring
Technologist - System Architecture, TE Connectivity

David is a Technologist on the System Architecture Team at TE Connectivity within the Data and Devices business unit. His responsibilities include serving customers within the hyper-scale and enterprise spaces, developing storage and compute solutions, and aligning product roadmaps for developing technologies. David is involved in many industry-related activities, including OCP, Gen-Z, PCI-SIG, CXL, EDSFF, JEDEC, and SNIA’s SFF TA TWG.



Bill Lynn
Distinguished Engineer – Server Architecture, Dell Technologies

Bill Lynn is a Distinguished Engineer in Dell’s Server Architecture Pathfinding group. Bill has more than 30 years’ experience architecting and developing storage subsystems. He was the original author of the SFF-8639 (U.2) connector specification and is one of the current editors of the SFF-TA-1008 EDSFF E3 device specification. Bill is also Dell’s representative to the NVMe Board of Directors.




From a Form Factor Guy’s Perspective


Presented by: Anthony Constantine, Intel



Data Center SSDs: Previous and Current Options

▪ AIC / CEM - Generic
▪ Good: High performance, general compatibility
▪ Bad: Needs PCIe® AIC slots for other devices, limited hot-plug
▪ Ugly: Consumes lots of space
▪ M.2 – Consumer
▪ Good: Small and modular
▪ Bad: Low capacity, no hot-plug
▪ Ugly: Limited power and thermal scaling for data center use
▪ 2.5in Form Factor
▪ Good: Hot-plug, storage features
▪ Bad: Mechanical design descended from HDD
▪ Ugly: Blocks airflow to the hottest components in the server


What is EDSFF?

• Enterprise and Data Center SSD Form Factor
• Improved thermals, power, and scalability
• High-speed common connector and pinout, scalable to faster PCIe speeds
• Integrated serviceability, hot-plug support
• Built-in LEDs, carrier-less design
• Customizable latch for toolless serviceability



EDSFF: Always Evolving

▪ Q1 2017: EDSFF group formed
▪ Q3 2017: Intel launches “ruler” SSD at FMS and announces its intention to contribute to EDSFF
▪ Q4 2017: EDSFF hands off specs to SNIA SFF TA
▪ Q1 2018: SFF publishes 1.0 specs for SFF-TA-1006 (E1.S), SFF-TA-1007 (E1.L), and SFF-TA-1008 (E3)
▪ Q2 2018: SFF-TA-1009 1.0 published (pin/signal spec); E1.S 1.1 errata
▪ Q2 2019: E1.S Rev 1.2 adds support for 9.5mm and 25mm thick enclosures
▪ Q3 2019: E1.S Rev 1.3a adds x8 support
▪ Q1 2020: E1.S 1.4 adds 15mm; new project SFF-TA-1023 (Device Form Factor Thermal Requirements)
▪ In progress: SFF-TA-1009 Rev 2.0 spec changes; E3 spec changes

All specifications: https://www.snia.org/technology-communities/sff/specifications


Commonality

▪ The form factors are very different mechanically (E1.S at 5.9mm, 9.5mm, 15mm, and 25mm thicknesses; E1.L at 9.5mm and 18mm; E3 v2)
▪ But they all maintain similar commonality:
▪ Card edge: Same
▪ Pinout: Same
▪ Pin functionality: Same
▪ Electrical specs: Same
▪ This makes it easier to achieve design reuse moving across form factors



From an SSD Guy’s Perspective


Presented by: John Geldman, Kioxia



EDSFF is the upcoming form factor family of choice for SSDs

▪ The SFF-TA-1002 connector was designed from the ground up for signal integrity challenges beyond PCI Express® 32 GT/s NRZ (Gen 5) and PAM4 (Gen 6); support of 112G PAM4 was the design space
▪ The EDSFF form factors were designed from the ground up for higher thermal loads (with defined air flows and methodologies)
▪ Both of these issues are not unique to the PCIe transport or to the function of SSDs
▪ The duo of more stressful PCI Express interfaces and NAND technology improvements shifts the functional requirements for SSDs (we designed for up to 80 Watts)
▪ While SSD functionality is a driver for early adoption of EDSFF, SSDs will compete for EDSFF server slots with SCM, HDD, accelerator, and networking functions



EDSFF: It’s a form factor family affair

▪ News Flash: There continues to be demand for all of the EDSFF form factors for SSDs. There is no single winner; all the form factors have significant use cases.
▪ M.2 storage and compute server use cases migrate well to E1.S to realize better performance and higher capacity per drive
▪ 2.5” SSD use cases can migrate to E3.S in servers for better performance
▪ E1.L and E3.L are creating new use cases (higher capacities, SCM, higher power accelerators)



EDSFF: Additional standardization work is in progress

▪ There is an E3 reboot in progress that is a non-compatible change to the 2018 SFF-TA-1008
▪ The E3 reboot allows slot compatibility with OCP NIC functions
▪ The E3 reboot supports use of targeted E1.S PCBs in E3.S slots
▪ There are standardization efforts underway to migrate HDDs into systems that support PCIe-based EDSFF form factors (OCP, NVMe)
▪ E3 form factors are an obvious spin on 2.5” HDDs
▪ PCI Express compatible storage transports are moving to support HDDs in multiple standards organizations




From a Connector & Cable Guy’s Perspective


Presented by: David Herring, TE Connectivity



Interoperable Form Factors Fuel the Industry

Connector & Cable Ecosystem
• SFF-TA-1002: Connectors
• REF-TA-1012: Pinout Reference
• SFF-TA-1017: Vertical Connector Test Board
• SFF-TA-1018: Right Angle Connector Test Board
• SFF-TA-1019: Straddle Mount Connector Test Board
• SFF-TA-1020: Cable Receptacles & Plugs

EDSFF
• SFF-TA-1006: E1.S
• SFF-TA-1007: E1.L
• SFF-TA-1008: E3
• SFF-TA-1009: PCIe Pin & Signaling Definition

PECFF
• SFF-TA-1021: Form Factor Definition
• SFF-TA-1022: Thermal Reporting

OCP NIC 3.0 also builds on this connector ecosystem.



Ready to Connect the Future

• Supports future 112G-enabled technologies
• Careful design considerations balance electrical performance for PAM-4 signaling:
• Insertion loss
• Return loss
• Cross-talk
• Repeatable performance measurements (SFF-TA-1002 connectors on the SFF-TA-1017 test board)
Systems & Devices Meet Through Interconnect

Systems and devices (E1.L, E1.S, E3) meet through a common connector and cable interface.

Common Interfaces Enabling a Fruitful Ecosystem and Simplifying New Technology Intercepts




From a Server Guy’s Perspective


Presented by: Bill Lynn, Dell



EDSFF E3 for Dummies

▪ E3 is a family of four form factors with a common 76mm height and support for a x4, x8, or x16 PCIe connection
▪ E3.S
▪ 76mm x 112.75mm x 7.5mm
▪ Target to support from 20W to 25W
▪ Optimized for primary NAND storage in servers
▪ E3.S 2x
▪ 76mm x 112.75mm x 16.8mm
▪ Target to support from 35W to 40W
▪ Support for higher power devices like CXL-based SCM
▪ E3.L
▪ 76mm x 142.2mm x 7.5mm
▪ Target to support up to 40W
▪ Support for higher capacity NAND storage
▪ E3.L 2x
▪ 76mm x 142.2mm x 16.8mm
▪ Target to support up to 70W
▪ Support for higher power devices like FPGAs and accelerators
▪ Note: a thick (2x) device will fit into two thin slots, and a short device will fit into a long slot
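As a rough mental model, the E3 variants differ only in length and thickness, and the compatibility note follows directly from those dimensions. The sketch below is illustrative Python, not anything from the specification: the table values come from this slide, while the `E3_VARIANTS` and `fits` names are our own.

```python
# Illustrative lookup of the E3 family: dimensions and the top of each
# power target range, as listed on the slide (values are assumptions
# drawn from the slide, not from SFF-TA-1008 itself).
E3_VARIANTS = {
    # name: (height_mm, length_mm, thickness_mm, target_power_w)
    "E3.S":    (76.0, 112.75,  7.5, 25),
    "E3.S 2x": (76.0, 112.75, 16.8, 40),
    "E3.L":    (76.0, 142.2,   7.5, 40),
    "E3.L 2x": (76.0, 142.2,  16.8, 70),
}

def fits(device: str, slot: str) -> bool:
    """Return True if `device` can occupy a bay sized for `slot`.

    Encodes the slide's note: a thin (1x) device fits a thick (2x) bay
    of the same or greater length, and a short device fits a long bay.
    """
    _, d_len, d_thk, _ = E3_VARIANTS[device]
    _, s_len, s_thk, _ = E3_VARIANTS[slot]
    return d_len <= s_len and d_thk <= s_thk
```

For example, `fits("E3.S", "E3.L 2x")` is true (short, thin device in a long, thick bay), while `fits("E3.L", "E3.S")` is false.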



Form Factor Design Advantages
E3 family of devices offers many system design advantages
• Support for a wide range of power profiles
– Current SFF-TA-1002 supports up to 70W maximum power delivery
– Higher power profiles allow for richer accelerator device types
– Future versions of E3 will support much higher power profiles
– Power and thermal limits will be defined by SFF-TA-1023 Thermal Specification for EDSFF Devices

• Support for a wide range of device types


– Supports host link widths up to x16 PCIe 5 and beyond
– 2x device thickness allows for the use of standard networking connectors (front facing I/O)
– Larger PCB surface area allows for larger NAND capacity points and richer device types

• Interoperable device form factors


– 1x and 2x devices are interchangeable
– Good balance of device size to infrastructure requirements
– Right sizing of power profiles to bandwidth capabilities
– Enable common devices (spares) between 1U and 2U chassis



Potential E3 Chassis Configurations (1U)

▪ Storage config: 20 E3.L 1x storage devices
▪ High power server config: 15 E3.S 1x storage devices with an air channel above
▪ Alternate device config: 9 E3.S 1x storage devices and 4 E3.S 2x SCMs or accelerators
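Using the per-device power targets from the “EDSFF E3 for Dummies” slide, a quick worst-case tally for the alternate 1U configuration (9 E3.S 1x plus 4 E3.S 2x) looks like the following. This is an illustrative sketch: the `config_power` helper and the wattage table are our own, with each form factor pinned at the top of its stated target range.

```python
# Top of each power target range from the E3 slide (assumed values).
POWER_TARGET_W = {"E3.S": 25, "E3.S 2x": 40, "E3.L": 40, "E3.L 2x": 70}

def config_power(counts: dict) -> int:
    """Sum the worst-case power draw for a given bay population."""
    return sum(n * POWER_TARGET_W[ff] for ff, n in counts.items())

alt_1u = {"E3.S": 9, "E3.S 2x": 4}
print(config_power(alt_1u))  # 9*25 + 4*40 = 385 W worst case
```

A budget like this is one reason the 1x/2x split matters: swapping four 1x storage bays for 2x accelerator bays adds a well-bounded amount to the chassis power envelope.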



Potential E3 Chassis Configurations (2U)

▪ Storage config: 44 E3.L devices
▪ Alternate device config: 24 E3.S devices and 8 short or long SCMs or accelerators



E1 in E3 Case

▪ Allows leverage of high volume E1.S PCB designs in an E3.S case
▪ Raise the E3 connector to 19.54mm so that the E1 PCB clears mechanical interferences and allows centering of the alternate set of LEDs
Future Device Types

• Moving the E3 connector to 19.54mm allows for the use of the 4C+ connector used by the OCP 3 NIC
• Should allow leverage of an OCP 3 NIC into an E3 case
• Also allows for higher power devices (3x or 4x thickness)
• Additional connector space could be used for a 4C+ or a future higher power connector tab



Thank you

