
Intel Optane PMem and SSDs

Introduction
This presentation introduces Intel® Optane™ PMem, describes what it is, and discusses why it is important to
Lenovo. This course has been updated to reflect new workloads that have been qualified by Intel and software
vendors.

Intel® Optane™ SSDs - solid-state drives built on Intel® Optane™ memory media technology - are also
introduced here.

Objectives
This course introduces Intel® Optane™ Persistent Memory, abbreviated PMem. This technology is supported
in Lenovo servers with certain 2nd or 3rd generation Intel® Xeon® Scalable Family processors.

Upon completion of this course, you will be able to:

· Describe Intel® Optane™ PMem technology

· List the modes of operation of Intel® Optane™ PMem

· Describe Intel® Optane™ SSDs

· Explain the business value of Intel® Optane™ PMem and SSDs

The Storage-Memory Gap


The server environment has a number of gaps in the capacity/performance pyramid. Intel is innovating in the
memory and storage space to help reduce these gaps.

In the uppermost gap there is a real need to extend memory capacity. Intel® Optane™ Persistent Memory
increases the size of available memory and adds persistence. Persistence is important because if there is a
power loss, the data is protected and the memory reload time on restart is eliminated. There is also a
performance gain from persistence.

To fill the storage gap, Intel has introduced Intel® Optane™ SSDs. Optane-based SSDs are different from
NAND-based SSDs, and provide increased IOPS and high, consistent write performance. Workloads that demand
speed and frequent writing and rewriting of data - such as caching, logging, and journaling solutions - are
the ideal scenarios for Intel® Optane™ SSDs.

The Intel® Solutions


The Intel strategy is to offer more storage at a faster speed and a reasonable price. In past decades, architects
and developers have been limited to two distinct places to keep data - in memory, or in nonvolatile storage.
There is a huge capacity/performance gap between these two. In order to have truly workload-optimized
systems, this gap needs to be filled so that data becomes less of a burden and more of an asset.

Intel® Optane™ Persistent Memory fills the memory capacity gap in the data center, and in the future will do the
same for client and commercial systems.
Intel® Optane™ SSDs fill the storage performance gap to bring much lower latencies and faster access to
data sets at the top of the storage layer.

Finally, and not discussed in this course, Intel® 3D NAND SSDs fill the cost performance gap in the capacity
tier, bringing massive density at low cost. These SSDs are deployed in the warm data tier, and will move into
the cold tier of the storage hierarchy currently serviced by HDDs.

Intel® Optane™ PMem


Let's start by looking at Intel® Optane™ PMem.

• OS Operations: Paging
Paging and context switching refer to operations used by operating systems. Paging swaps memory
pages to and from disk to make disk look like part of main memory. Disk is slow, of course, and the
paging process adds latency as well.

• OS Operations: Context Switching and Interrupts


Context switching causes the CPU to save the state of a process or task, then continue with another.
While this is a regular part of multi-tasking operating systems, increased levels of context switching will
slow down the system. Interrupts, as the name suggests, interrupt a process or task and cause the CPU
to handle the interrupt. Interrupts are a regular part of OS operation - peripherals generally use interrupts
to get CPU attention - but excessive interrupts will slow down the system.

• OS Operations: I/O, DMA I/O, and RDMA I/O


Regular I/O operations move data from I/O devices to or from memory through the CPU. This process
can be slow and inefficient.

DMA - Direct Memory Access - enables faster I/O operations by allowing peripheral devices to access
memory directly, rather than going through the CPU.

RDMA - Remote Direct Memory Access - allows peripheral devices to access memory on other hosts
through a high-speed network connection.

• Redefining the Memory and Storage Hierarchy


Intel® Optane™ PMem is a memory technology jointly developed by Intel and Micron, and is deployed
using standard form factor memory modules. <1> The memory is addressable at the byte level, and
delivers performance close to that of DRAM. Intel® Optane™ PMem behaves like memory, even though
it has some storage-like characteristics.

Intel® Optane™ PMem technology expands server memory capacity beyond what has been possible
with traditional memory DIMMs, and costs less per gigabyte than standard memory. DRAM prices have
fallen in recent years, but Intel has committed to keeping Optane™ PMem pricing competitive with
DRAM. These modules can also add persistence capability to the memory tier, resulting in reduced
downtime due to faster reboots and application restoration times.

• Optane™ PMem Properties


Intel® Optane™ PMem is DDR4 slot compatible, so installing these modules is as simple as installing
standard server memory.
Data on these modules is encrypted using AES-256 encryption. Intel offers modules in 128, 256, and 512
GB capacities, and they operate at 2666 megatransfers per second (MT/s) for 100 series modules, and
at 3200 MT/s for 200 series modules. Using Intel® Optane™ PMem enables memory deployments up to
6 TB per CPU, with up to 4.5 TB being Optane PMem. It is important to note that servers will need to be
populated with a combination of both traditional DRAM and Intel® Optane™ PMem.

• Persistent Memory Operating Modes


PMem supports two operating modes, Memory Mode and App Direct Mode. It is also possible to use a
third mixed mode which is a combination of Memory Mode and App Direct Mode. The mode selected
determines which capabilities are active and available to applications and the OS. The operating modes
are discussed later.

• Memory Mode
The first mode we are going to discuss is Memory Mode. This mode provides affordable high capacity
memory, which is volatile like traditional DIMMs.

The advantage of memory mode is that it can provide large amounts of system memory at a lower total
cost than can be achieved by a DRAM-only solution. This allows increased density in virtualized
environments, where memory capacity is often a limitation.

PMem makes higher capacities affordable, because PMem is less costly than DRAM.

Applications and the operating system perceive a pool of volatile memory, as is the case in DRAM- only
systems. In this mode, no specific PMem programming is required in the applications, and data will
not be saved in the event of a power loss.

• Memory Mode Operation


In Memory Mode, the system DRAM acts as cache for the PMem, and is not counted toward the total
memory available to the CPU. CPU-accessible memory <1> consists only of the PMem. Cache hits -
<2> accessing data that is already in the DRAM cache - are as fast as DRAM accesses. Cache misses,
when the system needs to retrieve data from PMem <3>, are slower, both because PMem is slower than
DRAM and because the system first needs to check cache and then access the PMem. Applications with
consistent data retrieval patterns that the memory controller can predict <4> will have a higher cache hit-
rate, and should see performance close to all-DRAM configurations. Workloads with highly-random small
data access patterns over a wide address range may see some performance difference versus DRAM
alone.
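
To make the cache-hit arithmetic concrete, the short sketch below computes an average load latency as a weighted blend of DRAM-cache hits and PMem misses. The latency figures are illustrative assumptions, not Intel specifications; only the hit/miss weighting reflects the behavior described above.

```c
/* Illustrative only: estimates average load latency in Memory Mode as a
 * weighted average of DRAM-cache hits and PMem misses. The latency values
 * below are assumed round numbers, not Intel specifications. */
#include <stdio.h>

int main(void)
{
    const double dram_ns = 80.0;    /* assumed DRAM access latency (ns) */
    const double pmem_ns = 300.0;   /* assumed PMem access latency (ns) */
    const double hit_rates[] = { 0.99, 0.90, 0.70 };

    for (int i = 0; i < 3; i++) {
        double h = hit_rates[i];
        /* On a miss the DRAM cache is checked first, then PMem is accessed. */
        double avg = h * dram_ns + (1.0 - h) * (dram_ns + pmem_ns);
        printf("hit rate %.0f%% -> average latency %.0f ns\n", h * 100.0, avg);
    }
    return 0;
}
```
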
• App Direct Mode
The second mode of Intel® Optane™ PMem is called App Direct Mode. This mode provides large
capacity and affordable memory, and adds memory persistence which is a key feature of this
technology. Applications experience higher availability because of faster reboots, which allows increased
compliance with SLAs. In effect, this mode turns PMem into extremely fast storage, accessible over a
very fast memory bus. App Direct mode requires the application to be modified to make it PMem aware.
Some system memory (DRAM) is volatile, and some is non-volatile (PMem), and the application has to
know which locations are persistent and which are volatile.
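
As a rough illustration of what "PMem aware" code looks like, the sketch below uses the PMDK libpmem library to map a file on a DAX-mounted filesystem and flush writes to persistence explicitly. The mount point /mnt/pmem and the file name are assumptions for this example; pmem_map_file, pmem_persist, and pmem_msync are real libpmem functions (link with -lpmem).

```c
/* A minimal sketch of PMem-aware code using PMDK libpmem.
 * Assumes a DAX-mounted filesystem at /mnt/pmem (hypothetical path). */
#include <stdio.h>
#include <string.h>
#include <libpmem.h>

#define PMEM_LEN 4096  /* map a 4 KiB region for this illustration */

int main(void)
{
    char *pmemaddr;
    size_t mapped_len;
    int is_pmem;

    /* Create (if needed) and memory-map a file on the DAX filesystem. */
    pmemaddr = pmem_map_file("/mnt/pmem/example", PMEM_LEN,
                             PMEM_FILE_CREATE, 0666,
                             &mapped_len, &is_pmem);
    if (pmemaddr == NULL) {
        perror("pmem_map_file");
        return 1;
    }

    /* Store data with ordinary byte-addressable writes... */
    strcpy(pmemaddr, "hello, persistent memory");

    /* ...then explicitly flush CPU caches so the data becomes persistent. */
    if (is_pmem)
        pmem_persist(pmemaddr, mapped_len);
    else
        pmem_msync(pmemaddr, mapped_len);  /* fallback for non-PMem mappings */

    pmem_unmap(pmemaddr, mapped_len);
    return 0;
}
```
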

• Benefits of App Direct Mode


There are three significant benefits enabled by App Direct mode.

First, application-aligned data management gives applications direct access to memory, bypassing the
operating system and kernel. This consistently reduces latency, which results in faster insights from
data. This greatly increases capacity for in-memory databases like SAP HANA.

Second, PMem also provides rapid recovery of applications because application data remains in memory
even after a power failure, and does not need to be loaded from storage. This reduces reboot and
recovery times from hours or minutes to just seconds.

Finally, storing data in Intel® Optane™ PMem lowers latency by allowing the CPU access to stored data
via the memory bus instead of across the I/O bus. This enables super fast storage solutions because the
response time on the memory bus is measured in nanoseconds while the response time over the I/O bus
is typically measured in milliseconds. Loading a block device driver into the OS effectively turns the
PMem into a very fast storage device. This is sometimes referred to as "Storage over App Direct".

In App Direct Mode, the DRAM and the Intel® Optane™ PMem both count toward the total platform
memory seen by the OS.

• Storage Over App Direct Mode


Adding a block driver - which makes PMem accessible in blocks or "sectors" rather than at the byte level
- turns Intel® Optane™ PMem into a storage device. Since this storage device is on the memory bus,
rather than on an I/O bus, it is extremely fast.
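
Once the block driver is loaded, applications see an ordinary block device and need no PMem-specific code at all. The sketch below reads one 4 kB sector from such a device; /dev/pmem0 is the conventional Linux name for a PMem-backed block device and is used here as an assumption for illustration.

```c
/* A minimal sketch of "Storage over App Direct": with the block driver in
 * place, ordinary block I/O code works unchanged against the PMem device. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

#define SECTOR_SIZE 4096   /* the 4 kB sector size mentioned above */

int main(void)
{
    unsigned char buf[SECTOR_SIZE];

    int fd = open("/dev/pmem0", O_RDONLY);   /* assumed device name */
    if (fd < 0) {
        perror("open /dev/pmem0");
        return 1;
    }

    /* Read the first sector exactly as you would from any other disk. */
    ssize_t n = read(fd, buf, SECTOR_SIZE);
    if (n != SECTOR_SIZE) {
        perror("read");
        close(fd);
        return 1;
    }

    printf("read %zd bytes from the PMem-backed block device\n", n);
    close(fd);
    return 0;
}
```
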

Supported Widely on Lenovo ThinkSystem Servers


Intel® Optane™ PMem is supported by second-generation and third-generation Intel® Xeon® Scalable
Processors. Platinum and Gold CPU SKUs are required to enable this technology. (One Silver SKU also
supports PMem).

Lenovo supports Intel® Optane™ PMem on ThinkSystem servers configured with supporting processors.
This includes the V2 versions of several servers, which support the 3rd generation Intel Xeon Scalable
Family processors.

Rack Servers
SR570 SR590 SR630 SR630 V2 SR650 SR650 V2 SR850 SR860 SR950
Dense Platforms
SD530 SD650
Flex Nodes
SN550 SN550 V2 SN850
• PMem Use Cases - General
A number of use cases have been identified for Intel® Optane™ PMem. The areas where Intel®
Optane™ PMem will show the most benefit are highlighted in the categories above. Many use cases
deal with access over high-speed links, such as RDMA access, discussed earlier, and PMoF
(Persistent Memory over Fabrics).

Specific use cases are shown on the next slide.

• Optane™ PMem Use Cases - Specifics


The previous slide mentioned several areas where Intel® Optane™ PMem may be an advantage.
Several specific use cases have been identified for Intel® Optane™ PMem: these are SAP HANA,
systems that need more than 1.5 TB of memory, and WSSD (Windows Server Software Defined).

<1> Intel® Optane™ PMem in App Direct mode allows large SAP HANA databases to be memory-resident,
and also improves startup time after an outage. <2> Server memory capacity is often limited by the number
of permitted memory slots. Optane™ PMem modules are up to 512 GB in size, and can expand server memory
beyond the DRAM limit. <3> WSSD is a software-based storage solution. Intel® Optane™ PMem allows the
storage visible to a server to be accessed through the memory bus, vastly improving performance.

• Supported Applications
The applications shown here support the use of Intel® Optane™ PMem, or, where they’re open source,
have been tested by Intel.

Note that AI, analytics, and databases feature heavily. These are all areas where large amounts of data
kept in memory can improve performance significantly. Infrastructure and storage workloads and
applications benefit from large quantities of memory running at close to DRAM speeds, which can also
be used as fast storage. Note also that operating systems are well represented.
• App Direct Mode Support
In this mode, as noted previously, the application must be PMem aware. That awareness, though it
involves programming changes, provides many benefits to the application environment. Note that the
applications are those that benefit from large amounts of low-latency memory, typically databases and AI
environments.

NetApp MAX Data uses PMem as a data tier, and requires no changes on the part of applications that
use the storage.

• Storage over App Direct Mode


Storage over App Direct Mode is a subset of App Direct Mode which uses a block driver. That makes
PMem look like very fast storage, accessible in ‘sectors’ of 512 bytes or 4 kB rather than bytes. Once
again, note that the applications that have been qualified are those that benefit from large quantities of
high-speed storage - databases, artificial intelligence, and analytics. This use case illustrates the
flexibility of PMem.

• Memory Mode
Memory Mode requires no changes to the OS or the application. The slide shows applications which
have been evaluated by Intel, in most cases with ISV partners, for use in this mode. Even in this mode,
which turns PMem into non-persistent memory, the workloads are very similar to those shown in the
other modes. Note that the large quantity of memory in infrastructure and storage environments will help
relieve pressure on the (slower) disk drives by caching data. These environments are converged or
hyperconverged; external storage systems will implement their own cache.

• Key Takeaway
Here are the Intel® Optane™ PMem key takeaways to remember from this presentation.

Intel® Optane™ PMem provides high capacity, affordable memory, with persistence.

Module performance is comparable to DRAM, yet the modules cost less per gigabyte than standard memory DIMMs.

Modules are available in 128, 256, and 512 GB capacities and are DDR4 pin compatible. Servers must
be populated with both Intel® Optane™ PMem and traditional DRAM. This technology enables up to 6
terabytes per CPU memory capacity.

There are multiple operational modes available to provide flexibility in feature deployment:

Memory Mode implements high capacity, non-persistent memory with DRAM caching. No software or
application changes are necessary.

App Direct Mode implements byte addressable, low latency memory with persistence similar to
storage. This mode requires persistent memory aware OS and application code.

Storage over App Direct Mode implements a block driver to make the PMem look like storage accessible in
sectors [blocks].

Intel® Optane™ SSDs

• Intel® Optane™ SSDs


This section covers Intel® Optane™ SSDs. Note that these are the commercial-grade SSDs typically
found in servers, in which all of the storage media is Intel® Optane™ media rather than NAND.
• Garbage Collection in NAND SSDs
The performance of every NAND SSD suffers from the overhead process known as garbage collection.
Available data space is divided into blocks which are often around 256 kB in size, and blocks are further
divided into pages, typically around 4 kB in size.

NAND media reads and writes in pages, and writes only to blank (erased) pages. The granularity for
erase operations is a block, and erasing a block is a slow process. For performance reasons, page
updates are typically written to a new unused block. As new data is written, old pages become stale.
Stale pages soon occupy significant space on an SSD, and blocks must be freed up to allow new writes
to proceed. Both sequential write and more typical random read/write scenarios follow the same rules.
When a block needs to be freed up, the garbage collection process is launched. Garbage collection is a
cumbersome 3-step process.

In the first step, current pages are copied to an empty block. The second step consists of cleaning up the
entire block by erasing both stale pages and the pages copied to the new block.

Finally, the newly emptied block is available for new writes, and the NAND program/erase cycle starts all
over again. This cumbersome process is why you'll see reputable companies (like Intel) reporting drive
performance numbers using "pre-conditioned" SSDs - meaning there is data already on the drive before
the test starts, versus a fresh-out-of-the-box drive where data can be written anywhere with no garbage
collection.
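
The toy model below walks through the same three steps in code: copy the still-valid pages out of a victim block, erase the whole block, and return it to the free pool. The block and page counts are simplified assumptions purely for illustration; a real flash translation layer is far more involved.

```c
/* A toy model of the three-step NAND garbage-collection process described
 * above. Sizes and layout are simplified assumptions for illustration. */
#include <stdio.h>
#include <string.h>

#define PAGES_PER_BLOCK 8

typedef enum { FREE = 0, VALID, STALE } page_state;

typedef struct { page_state pages[PAGES_PER_BLOCK]; } nand_block;

/* Step 1: copy the still-valid pages of the victim block into an empty block. */
static void copy_valid_pages(const nand_block *victim, nand_block *spare)
{
    int dst = 0;
    for (int i = 0; i < PAGES_PER_BLOCK; i++)
        if (victim->pages[i] == VALID)
            spare->pages[dst++] = VALID;
}

/* Step 2: erase the entire victim block, stale and copied pages alike. */
static void erase_block(nand_block *b)
{
    for (int i = 0; i < PAGES_PER_BLOCK; i++)
        b->pages[i] = FREE;
}

int main(void)
{
    nand_block victim, spare;
    memset(&spare, 0, sizeof spare);            /* all pages start FREE */

    /* Simulate a block holding a mix of valid and stale pages. */
    for (int i = 0; i < PAGES_PER_BLOCK; i++)
        victim.pages[i] = (i % 2) ? STALE : VALID;

    copy_valid_pages(&victim, &spare);          /* step 1 */
    erase_block(&victim);                       /* step 2 */
    /* Step 3: the victim block is now empty and available for new writes. */

    printf("victim block erased; %d valid pages preserved in the spare block\n",
           PAGES_PER_BLOCK / 2);
    return 0;
}
```
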

• Why Intel® Optane™ SSDs?


Intel® Optane™ technology is bit addressable, unlike transistor-based NAND technology, and is therefore not
divided into blocks and pages as NAND SSDs are. The design eliminates the need for the
costly and inefficient garbage collection process, which improves overall performance. Optane™ SSDs
are capable of much greater performance than regular NAND SSDs - they are at least one order of
magnitude faster.

• Intel® Optane™ SSD Performance


This slide compares the performance of an Intel® Optane™ SSD with that of a premium Intel® NAND
SSD. An increasing amount of write pressure is applied to the SSD, and the read response time is
logged. Note that for the Optane™ SSD, the read response time is unaffected by write pressure up to
the 700 MB/s mark, and stays well below 20 µs. The regular NAND SSD starts to suffer from the effects
of garbage collection fairly early in the test, as can be seen by the increasing read response times.
Though not shown here, the write response times for the NAND SSD would also have increased.

This consistent level of performance makes the Optane™ SSDs well suited for caching in
high-performance storage environments such as SDS and HCI.

• Caching with Optane™ SSDs


Lenovo has broadly qualified Intel® Optane™ SSDs to be the go-to cache option in software-defined
storage environments. Intel® Optane™ SSDs provide the low latency, high endurance, and high
efficiency needed from a cache drive.

Latency of Intel® Optane™ SSDs was discussed on the previous slide.

Drive writes per day (DWPD) is the number of times the drive's full capacity can be written per day and read
back with no data loss, sustained over the drive's rated lifetime; it is an endurance rating, not a 24-hour
limitation. Optane™ SSDs achieve 60 drive writes per day, or 20x better than most NAND SSDs on the market.
When the CPU and memory are not limiting the workload, lower storage latency, higher IOPS, and better
endurance allow each server to support more virtual machines. Additionally, you no longer need 10% of
your total capacity provisioned as cache. In many cases, one or two 375 GB Optane™ SSDs will be
sufficient. The bottom line is, you don’t need to purchase many gigabytes of cache, and you can support
more virtual machines per server with Intel® Optane™ SSDs.
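
As a back-of-the-envelope illustration of what 60 DWPD means for a 375 GB drive, the sketch below multiplies the rating out over an assumed five-year service life; the warranty period is an assumption for the example, not a figure from this course.

```c
/* Back-of-the-envelope endurance math from the DWPD rating above.
 * The five-year service life is an assumed figure for illustration. */
#include <stdio.h>

int main(void)
{
    const double capacity_gb = 375.0;   /* 375 GB Optane SSD from the text */
    const double dwpd        = 60.0;    /* drive writes per day */
    const double years       = 5.0;     /* assumed service life */

    double gb_per_day = capacity_gb * dwpd;
    double total_pb   = gb_per_day * 365.0 * years / 1e6;  /* petabytes written */

    printf("%.0f GB/day sustained, roughly %.1f PB over %g years\n",
           gb_per_day, total_pb, years);
    return 0;
}
```
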

• Business Value of Intel® Optane™ PMem and SSDs


The business value of Intel® Optane™ technology is improved support for in-memory databases, higher
VM counts in memory-bound environments, and efficient caching of data for HCI environments.

Course Summary
Having completed this course, you should be able to:

· Describe Intel® Optane™ PMem technology

· List the modes of operation of Intel® Optane™ PMem

· Describe Intel® Optane™ SSDs


• Explain the business value of Intel® Optane™ PMem and SSDs
Which statement describes Intel® Optane™ Persistent Memory?

Data survives a power cycle in all operating modes


It is faster and cheaper than DRAM
It is supported by all Intel Scalable Family CPUs
It is DDR4 slot compatible

Which statement describes system configurations using Intel® Optane™ Persistent Memory?

It is faster and cheaper than DRAM


Both DRAM and PMem must be present in the system
It is DDR2 slot compatible
It is supported by all Intel Scalable Family CPUs
Which statement describes Intel® Optane™ PMem in App Direct Mode?

DRAM DIMMs act as cache for the persistent memory


Data will be preserved even if power fails
It does not need DRAM DIMMS to be installed in the system
It is supported by all Intel Scalable Family CPUs

Which statement describes Intel® Optane™ PMem in Memory Mode?

Data will be preserved even if power fails


It is supported by all Intel Scalable Family CPUs
DRAM DIMMs act as cache for the persistent memory
It does not need DRAM DIMMS to be installed in the system

What characterizes Intel® Optane™ PMem in App Direct Mode?

On-DIMM DRAM caches the PMem data


No OS drivers are required
Data will be lost if power fails
Applications and the OS require code changes to use PMem

What characterizes Intel® Optane™ SSDs?

Uses transistorless technology to store data


Internal DRAM speeds up garbage collection
Dense 3D NAND technology speeds up data access
Data will be lost if power fails
Which number specifies the endurance of Intel® Optane™ SSDs in DWPD?

60
10
100
30

How does Intel® Optane™ SSD break the storage bottleneck? Choose two [2].

Immediately moves data to the bulk storage


Temporarily holds coldest data
Moves data to the bulk storage when needed
Accelerates the data store
Temporarily holds hottest data

How can Intel® Optane™ SSDs help the performance of a VMware vSAN
system?

Intel® Optane™ SSDs increase the size of the memory pool


Intel® Optane™ SSDs can be used as a cache tier
Optane™ SSDs connect directly to multiple servers
Optane™ SSDs increase the bulk storage capacity

Which statement describes Intel® Optane™ SSD technology?

The latest in 3D NAND technology


A type of SAS/SATA technology
A transistor-based technology like NAND technology
A transistor-less design unlike 3D NAND technology
