Server Masters: Workload Profiles: Technical Sales 1 Hour
Technical Sales
1 hour
Learning objectives
• Traditional and emerging workloads
• Comprehensive and enduring workloads
• Modern IT infrastructure security
Server
© Copyright 2018 Dell Inc
How to build a modern IT infrastructure:
1. ADAPT AND SCALE to dynamic business needs
2. AUTOMATE to sustain and grow
Virtualization
Hardware requirements: CPU, memory, storage, network, rack density
Workload characteristics:
• Consolidation of workloads onto fewer physical machines
• High availability and recovery
• Improved CPU utilization
• Resource sharing
Unified communications
Hardware requirements: CPU, memory, storage, network, rack density
Workload characteristics:
• Backbone of communications infrastructure
• VoIP, presence, email, and file collaboration
• Peaks at login and lunch
• Random activity spikes
Data analytics (big data)
Hardware requirements: CPU, memory, storage, network, rack density
Workload characteristics:
• The four Vs: volume, variety, velocity, veracity
• Resource ratio is important
• Requires fine tuning and customization for the environment
VDI
Hardware requirements: CPU (with GPU), memory, storage, network, rack density
Workload characteristics:
• Consolidates and centralizes client virtualization
• Sized for task workers, knowledge workers, and power users
• Boot storms
High performance computing
Hardware requirements: CPU, memory, storage, network, rack density
Workload characteristics:
• Intensive calculations
• Genomic sequencing, oil and gas, modeling, high frequency trading
• Processing spread across multiple nodes
• GPU computational acceleration
Towers | Racks | Modular infrastructure | Extreme scale
*Based on units sold (tie). IDC Worldwide Quarterly Server Tracker, Q1–Q3, 2016.
The bedrock of the modern data center
Server solutions for every workload
Platform categories: ultra performance in-memory DB; IO optimized; super 2S, compute optimized; deep storage
• Virtualization: N/A; Dell EMC unique — 2 x processors, 3 TB of memory, 24 x 2.5" NVMe SSDs
• VDI: 2 x DW GPUs with 16 x SSDs; ultra performance in-memory DB with DW GPUs and 24 x SSDs
• SQL: compute and fast SSDs; ultra performance in-memory DB with NVDIMM-N persistent memory
• vSAN: scale-out storage tiering, 1.8" to 3.5"; tiering with NVDIMM-N persistent memory, all-flash NVMe, 25GbE ScaleIO
• Cloud providers: N/A; chipset SATA, capacity optimized, 4 x 3.5"
• HCI: 24 x 1.8" SATA SSDs for cost-effective performance; (10 + 2) x 2.5" cost-effective SSDs with up to 8 x NVMe SSDs, 25GbE ScaleIO
• HFT, HPC, DB: 4 x NVMe, 165 W processors, CPU direct; performance optimized, 8 x NVMe, 205 W processors
• WebTech: compute and fast SSDs; ultra performance, 14 x HDDs
• Data analytics: scale-out storage tiering, 1.8" to 3.5"; 16 x DIMMs, 14 x HDDs, 25GbE
• MSFT Exchange: maximize cost-effective storage, 16 x 3.5" HDDs; maximize cost-effective storage, 14 x 3.5" HDDs
• Cloud providers: N/A; chipset SATA, capacity optimized, 4 x 3.5"
• HCI: 1.8" SATA SSDs for cost-effective performance; 10 x 2.5" cost-effective SSDs with up to 4 x NVMe SSDs, 25GbE ScaleIO
• HFT, HPC, DB: 125 W processors, CPU direct; performance optimized, 4 x NVMe, 135 W processors
• Deep storage
• Web serving: ultra performance with 2 x Intel Scalable processors; 2 x Intel Broadwell processors, 16 SSDs, 1 GPU
• Print: up to 512 GB memory with 16 DIMM slots; 384 GB memory, 12 DIMM slots, and 4/8/16 HDDs/SSDs
• Deep storage / MSFT Exchange: 2 x M.2 for boot and 8 x 3.5" HDDs/SSDs; maximize cost-effective storage with 16 x 2.5" HDDs/SSDs
• SQL Server, databases and BI, analytics: M630 / M1000e — 2 x Intel Broadwell, 3.2 TB SSD, 900 GB sled storage; M640 — universal backplane, *Intel Apache Pass
High Performance Computing:
• No HDDs with backplane; EDR, Omni-Path, 10/25/40 GbE
• HPC cost optimized: M.2 boot, no backplane; Direct Liquid Cooling
Private Cloud / HCI / SDS:
• 12 x 3.5"; 2 x NVMe per sled (in 2.5" chassis only) for cache performance
• All-flash storage + PERC portfolio, M.2 boot, 10/25/40 GbE
Configuration B (4 GPU, switch, dual CPU)
Typical configuration:
• CPU: 2 x 6130 (16 core)
• Memory: 192 GB minimum (or more*)
• GPU: 4
• IO: InfiniBand, Omni-Path, or Ethernet
• PCI topology: 4 x16 to GPUs; 2 x16 to rear I/O
*Total system memory should be greater than the capacity of the (typically) 4 GPUs.
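The footnote's sizing rule can be sketched as a quick check. This is a minimal illustration, not a C4140 specification: the 32 GB per-GPU figure below is a hypothetical accelerator size chosen for the example.

```python
# Sketch of the sizing rule from the slide: total system (host) memory
# should exceed the combined memory capacity of the GPUs.
# The per-GPU memory size used below is an illustrative assumption.

def meets_memory_rule(system_gb: int, gpu_count: int, gpu_mem_gb: int) -> bool:
    """True if host memory exceeds total GPU memory capacity."""
    return system_gb > gpu_count * gpu_mem_gb

# The 192 GB minimum against four hypothetical 32 GB GPUs (192 > 128):
print(meets_memory_rule(192, 4, 32))  # True
# A 96 GB host would fail the rule against the same four GPUs:
print(meets_memory_rule(96, 4, 32))   # False
```

The rule exists because staging data for all GPUs simultaneously from host memory stalls if the host cannot hold at least one full copy of every GPU's working set.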
Molecular Dynamics (typical C4140 Configuration K)
Example application: HOOMD-blue
Characteristics:
• This customer is typically interested in peer-to-peer GPU-to-GPU performance
• Configuration K is the optimum for this customer: it delivers high performance
• NVLink™ is a proprietary NVIDIA interconnect that allows direct GPU-to-GPU communication
• NVLink™ signals at 25 Gbps per lane versus PCIe Gen3 at 8 Gbps per lane
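A back-of-the-envelope comparison makes the per-lane figures above concrete. The lane counts below (16 for a PCIe slot, 8 for an NVLink brick) and the 16 GB payload are illustrative assumptions, not C4140 measurements, and protocol overhead is ignored.

```python
# Raw signaling-rate comparison using the per-lane figures from the slide.
# Lane counts and payload size are assumed for illustration only.

PCIE_GBPS_PER_LANE = 8     # PCIe Gen3, per the slide
NVLINK_GBPS_PER_LANE = 25  # NVLink, per the slide

def aggregate_gbps(gbps_per_lane: float, lanes: int) -> float:
    """Raw aggregate signaling rate for a link with the given lane count."""
    return gbps_per_lane * lanes

def transfer_seconds(payload_gbytes: float, gbps: float) -> float:
    """Idealized time to move a payload at the raw rate (no overhead)."""
    return payload_gbytes * 8 / gbps

pcie_x16 = aggregate_gbps(PCIE_GBPS_PER_LANE, 16)    # 128 Gbps
nvlink_x8 = aggregate_gbps(NVLINK_GBPS_PER_LANE, 8)  # 200 Gbps (assumed 8 lanes)

print(f"PCIe x16:  {pcie_x16} Gbps -> 16 GB in {transfer_seconds(16, pcie_x16):.2f} s")
print(f"NVLink x8: {nvlink_x8} Gbps -> 16 GB in {transfer_seconds(16, nvlink_x8):.2f} s")
```

For peer-to-peer workloads such as molecular dynamics, this raw-rate advantage compounds because GPU-to-GPU traffic over NVLink also bypasses the host-side PCIe hop entirely.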
Configuration K (NVLink™ topology)
Typical configuration:
• CPU: 2 x 6130 (16 core)
• Memory: 192 GB minimum (or more)
• GPU: 4
• IO: InfiniBand, Omni-Path, or Ethernet
• PCI topology: 4 x16 to GPUs; 2 x16 to rear I/O
Configuration C
Typical configuration:
• CPU: 2 x 6130
• Memory: 192 GB minimum
• GPU: 4
• IO: Ethernet, InfiniBand, or Omni-Path
• PCI topology: 4 x16 to GPUs (post-RTS); 2 x16 to rear I/O
Configuration G (4 GPU, 2 virtual switches, dual CPU)
Typical configuration:
• CPU: 2 x 6130 (16 core)
• Memory: 192 GB minimum
• GPU: 4 (post-RTS)
• IO: Omni-Path, InfiniBand, or Ethernet
• PCI topology: 4 x16 to GPUs; 2 x16 to rear I/O