FlashMemorySummit2012 v2
[Diagram: the storage stack: Application, kernel space (Filesystem), hardware (Device: flash devices)]
Flash device characteristics:
- Mapping: remapped at every write
- Read/write performance: heavily asymmetrical
- Random vs. sequential access: minimal difference
- Garbage collection: a regular occurrence; if unmanaged, it can impact foreground I/O
- Endurance: limited writes (100Ks to millions of cycles)
- Access latency: 10s to 100s of microseconds
- Parallelism: improves performance
- Cut-through architecture avoids traditional storage protocols
- Scales with multi-core
- Provides a large virtual address space
- HW/SW functional boundary defined as optimal for flash
- Traditional block access methods for compatibility
- New access methods, functionality, and primitives natively supported by flash
Flash Memory Summit 2012 Santa Clara, CA
DATA TRANSFERS

[Diagram: ioDrive data path: PCIe to the ioMemory data-path controller, with flash organized in banks, many channels wide]

Fast Forward
- Power of the FTL no longer restricted by traditional block interfaces
- Opportunity for performance, simplicity, and reliability improvements
Native Access

Flash with direct I/O semantics, and flash with memory semantics.

[Diagram: three access paths from applications to the flash layer, all running over the VSL in the host:
- OS block I/O through the file system and block layer (read/write)
- Direct-access I/O API family through open source extensions and directFS (read/write), including remote access
- Memory access API family (load/store)]
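The right-hand paths in the diagram differ in access semantics: explicit read/write calls versus memory-style load/store. A toy illustration of that difference, using plain Python on a regular temp file (a stand-in only; the real paths go through VSL APIs, which are not shown in this deck):

```python
import mmap
import os
import tempfile

# A regular temp file stands in for a flash-backed region.
fd, path = tempfile.mkstemp()
os.write(fd, b"\x00" * 4096)

# Direct I/O semantics: explicit read/write calls at byte offsets.
os.pwrite(fd, b"hello", 100)
data_rw = os.pread(fd, 5, 100)

# Memory semantics: map the region and access it with loads/stores.
with mmap.mmap(fd, 4096) as mem:
    mem[200:205] = b"world"            # store
    data_mem = bytes(mem[200:205])     # load
    data_shared = bytes(mem[100:105])  # the mapping sees the earlier write

os.close(fd)
os.unlink(path)
print(data_rw, data_mem, data_shared)  # b'hello' b'world' b'hello'
```

Both styles reach the same bytes; the difference is whether the application issues I/O system calls or ordinary memory accesses.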
Sparse Addressing
[Diagram: flash as a cache over a sparse address space: a lookup is either a cache hit, served from flash, or a cache miss, fetched from the backend store]
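One use of sparse addressing is caching: a backend block's number can be used directly as its flash address, so a miss is simply an unpopulated address. A minimal sketch, with the sparse flash address space modeled by a Python dict (the actual VSL lookup primitive is not shown in this deck):

```python
# Flash-as-cache over a sparse address space (dict-modeled).
# A missing key stands in for an unpopulated sparse address.

backend = {n: ("block-%d" % n).encode() for n in range(1000)}  # backing store
flash = {}                                                     # sparse cache
stats = {"hit": 0, "miss": 0}

def cache_read(block_no):
    if block_no in flash:        # cache hit: the sparse address is populated
        stats["hit"] += 1
        return flash[block_no]
    stats["miss"] += 1           # cache miss: fetch from backend, populate
    data = backend[block_no]
    flash[block_no] = data
    return data

assert cache_read(42) == b"block-42"   # miss, fills the cache
assert cache_read(42) == b"block-42"   # hit, served from flash
print(stats)  # {'hit': 1, 'miss': 1}
```

The point of the sparse space is that no separate hit-test index is needed: the address itself answers hit or miss.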
[Diagram: three stacks compared:
- Disk: Applications → block I/O layer → disk drive
- Solid state disk: Applications → block I/O layer → sector read/write → flash translation layer (re-mapping, wear-leveling, block erase, page write, page read) → NAND flash memory
- ioMemory: Applications → generalized ioMemory layer (re-mapping, atomic operation, page read/write, wear-leveling) → block erase → ioMemory controller → NAND flash memory]
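The re-mapping that both the FTL and the ioMemory layer perform can be sketched in a few lines: every logical write lands on a fresh physical page and a logical-to-physical map is updated, which is also what makes wear-leveling possible. A simplified model, not vendor code:

```python
# Toy log-structured remapping layer: logical writes always land on a
# fresh physical page, and a map tracks logical -> physical locations.

class RemapLayer:
    def __init__(self, num_pages):
        self.pages = [None] * num_pages   # physical pages
        self.l2p = {}                     # logical -> physical map
        self.next_free = 0                # naive append point (spreads wear)

    def write(self, logical, data):
        phys = self.next_free             # remapped at every write
        self.next_free += 1
        self.pages[phys] = data
        self.l2p[logical] = phys          # the old page becomes garbage

    def read(self, logical):
        return self.pages[self.l2p[logical]]

ftl = RemapLayer(num_pages=8)
ftl.write(0, b"v1")
ftl.write(0, b"v2")                       # same logical block, new physical page
assert ftl.read(0) == b"v2"
assert ftl.l2p[0] == 1                    # moved: physical page 0 is now garbage
```

A real layer adds garbage collection of the stale pages and a wear-aware allocator in place of the naive append point.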
September 9, 2012
[Diagram: atomic vectored write: one I/O vector (iov[0], iov[3], iov[4], ...) covers discontiguous ranges (Range[0], Range[1], ... Range[m], Range[n]), with marked blocks (X) written as a single atomic operation]
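The semantics of the atomic vectored write shown above are all-or-nothing: every range in the vector becomes visible at once, or none do. A sketch of those semantics using a staged copy with a single commit point (the actual Fusion-io API names are not given in this deck):

```python
# All-or-nothing vectored write: every (offset, data) range in the iovec
# is applied, or none are -- sketched with a staged copy of the store.

def atomic_writev(store, iov):
    staged = dict(store)                   # stage against a private copy
    for offset, data in iov:
        if offset < 0:
            raise ValueError("bad range")  # any failure aborts the whole vector
        staged[offset] = data
    store.clear()
    store.update(staged)                   # single visible commit point

store = {0: b"old0", 4096: b"old1"}
atomic_writev(store, [(0, b"new0"), (4096, b"new1"), (8192, b"new2")])
assert store[8192] == b"new2"

try:
    atomic_writev(store, [(12288, b"x"), (-1, b"bad")])
except ValueError:
    pass
assert 12288 not in store                  # failed vector left no partial write
```

On ioMemory the log-structured layer gives this for free: the ranges are appended together and the remap update is the commit point, so no separate journal is required.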
- 43% increase in transactions/sec
- 2x increase in endurance
- Processor: Xeon X5472 @ 3.00GHz
- DRAM: 16GB DDR3 (4x4GB DIMMs)
- OS: Fedora 14, Linux kernel 2.6.35
- Sysbench config: 1 million inserts into 8 two-million-entry tables, using 16 threads
1U HP blade server with 16 GB RAM and 8 CPU cores (Intel Xeon X5472 @ 3.00GHz), with a single 1.2 TB ioDrive2 Mono
KV store layered on block I/O:
- KV store: key -> block mapping (overhead per key), block allocation
- Block read/write
- VSL: dynamic provisioning, block allocation, persistence mechanisms, logging, recovery, etc.
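Since the VSL already provides allocation, persistence, and recovery over a large sparse address space, a KV store can hash keys straight to sparse addresses instead of maintaining its own per-key block map. A toy sketch of that idea, with the sparse space again modeled by a dict (the 48-bit space size is an assumption for illustration):

```python
import hashlib

# Toy KV store over a sparse address space: each key hashes directly to
# a sparse "flash" address, removing the per-key block-mapping table.

SPARSE_BITS = 48  # assumed size of the virtual address space

def key_to_addr(key):
    h = hashlib.sha256(key.encode()).digest()
    addr = int.from_bytes(h[:6], "big")   # 6 bytes -> fits in 2**48
    assert addr < 2 ** SPARSE_BITS
    return addr

flash = {}  # sparse address space; unpopulated addresses are misses

def put(key, value):
    flash[key_to_addr(key)] = (key, value)  # keep key to detect collisions

def get(key):
    entry = flash.get(key_to_addr(key))
    if entry is None or entry[0] != key:
        return None
    return entry[1]

put("user:1", b"alice")
assert get("user:1") == b"alice"
assert get("user:2") is None
```

A production store would still need a collision-resolution policy; the sketch only shows the mapping overhead that moves out of the KV store and into the VSL.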
[Chart: throughput vs. number of threads (0 to 140) for 512B, 4KB, 16KB, and 64KB transfer sizes; y-axis 0 to 60000]
Thank you!
http://www.tomshardware.com/news/dram-memory-flash-nand-fusion-io,16254.html?utm_source=dlvr.it&utm_medium=twitter#xtor=RSS-181
http://www.fusionio.com/blog/auto-commit-memory-cutting-latency-by-eliminating-block-io/