Lenovo DE Student Guide
[Positioning chart: all-flash models (DE4000F, DE6000F) target maximum performance and latency-sensitive applications; hybrid models (DE2000H, DE6000H) are the most cost-effective choice for streaming data and mixed workloads. Performance axis: 100K, 300K, 1M IOPS.]
The Lenovo ThinkSystem DE Series includes all-flash models and hybrid models. All-flash models offer maximum performance, and hybrid models offer the most cost-effective solution for a variety of workloads.
DE Target Workloads
For customers who need a high-performance, low-cost, reliable solution that is easy to use:
• Backup & Recovery: DE2000H, DE4000H
• Video Surveillance: DE2000H, DE4000H
• High Performance Computing: DE4000H, DE4000F, DE6000H, DE6000F
• Big Data, Analytics: DE4000H, DE4000F, DE6000F
As you continue your conversation with your customers, think about customer
needs that are ideal for Lenovo DE Series solutions. DE Series systems can
help customers who need higher performance, who are concerned with cost and
reliability, and who need a solution that is easy to use.
The four areas where you can position DE Series are data protection; physical and cyber security, including video; technical computing; and big data analytics applications.
DE-Series Controllers
[Rear-view port diagrams of the DE2000H, DE4000H, and DE6000H controllers: FC host ports (Ch 1 & 2, Ch 3 & 4), iSCSI host ports, drive expansion ports (EXP1, EXP2), link (Lnk) LEDs, and ID/Diag indicators.]
DE-Series expansions
• DE600S: 4U, 60 disks
• DE240S: 2U, 24 disks
• DE120S: 2U, 12 disks
All expansions use 12-Gbps SAS.
Architecture
[Drive-layout diagram: enclosure slots 0-23 populated with 1200 GB drives, plus four 3.0 TB drives.]
DE-SERIES CONTROLLERS
The DE-Series product portfolio consists of three families of systems, which are defined by their controllers. The controllers determine the number of disks that the storage system can support. The DE2000H controllers support up to 96 disks. The DE4000H controllers support up to 192 disks. The DE6000H controllers support up to 480 disks.
[1] The DE2000H systems represent the entry-point family of storage systems for customers who want to maximize the price/performance ratio and capacity mix of a storage system.
[2] The DE4000H systems optimize performance for mixed workloads, with outstandingly low latency.
[3] The DE6000H systems offer excellent performance. They support raw data throughput rates of up to 12 gigabytes per second. These systems are targeted at high-performance computing markets, big data, and virtual desktop infrastructures, although they work equally well in general computing environments.
The DE4000H and DE6000H offer all-flash and hybrid configuration options. These options are highly reliable and cost-effective.
Expansions
▪ 60 x 3.5" or 2.5" drives ▪ 24 x 2.5" drives ▪ 12 x 3.5" or 2.5" drives
▪ Highest throughput ▪ Highest throughput ▪ Lowest entry point
▪ Largest capacity ▪ Largest capacity ▪ NL-SAS, SSD
▪ NL-SAS, SAS, SSD ▪ SAS, SSD
Rear panel: Dual 1-GbE Management Ports (RJ45), Serial Port (Mini USB), USB Port (Factory Use Only), Dual 12-Gbps SAS Drive Expansion Ports
The DE2000 only supports a HIC that is either 2-port SAS or 2-port iSCSI (1/10 Gb, RJ45).
On the left side, you see the base host interface ports. These include either two optical
FC/iSCSI ports or two RJ-45 iSCSI baseboard ports, for host connection.
The dual Ethernet management ports provide out-of-band system management access.
To the right of the base host ports are two serial ports. The mini-USB port is used for
direct connections to the internal shell operating system of the controller and enables
advanced troubleshooting and configuration. The USB port is for factory use only.
On the far right, two 12-gigabit-per-second expansion SAS ports support the addition of
expansions. Because these are 12-gigabit-per-second ports, they require a SAS-3 to
SAS-2 converter cable to connect to the 6-gigabit-per-second expansions.
You can create space for more host ports by using an add-on HIC. You can select from
12-gigabit-per-second SAS ports, 16-gigabit-per-second FC ports, a 10-gigabit-per-
second optical iSCSI card, or 10-gigabit-per-second copper iSCSI ports.
The controller status LEDs on the controller canister define different controller base
features, such as cache active, attention, and heartbeat.
DE6000H/F
Controller memory: 16 GB DDR4 per controller (Hybrid), 64 GB DDR4 per controller (All Flash)
DE6000H controllers use an 8-core CPU. You can order the DE6000H controller with 16 gigabytes of native cache.
The DE6000H system supports in-band management access. It has two 12-
gigabit-per-second wide-port SAS drive expansion ports for redundant drive
expansion paths.
The DE6000H does not come with dual 10 Gb optical iSCSI or dual 16 Gb FC ports by default. Instead, you order the appropriate HIC when you order the controllers.
Dynamic Disk Pools (DDP) technology is an innovative approach that: (1) greatly simplifies storage management; (2) makes the addition or loss of drives a nonevent; and (3) significantly reduces the time to recover from a drive loss compared with traditional RAID (minutes versus days).
DE-Series supports hybrid systems with mixed flash and rotating disk. SSD cache is a
feature that is designed to accelerate HDD data access by caching highly read data on
SSD automatically.
Snapshot copies and views allow multiple recovery points or the use of production data for
testing and development.
Thin provisioning is used for capacity-optimized configurations and eliminates guessing about how much capacity a volume will really need.
Mirroring and replication support both synchronous and asynchronous mirroring and
replication.
Encryption is AES-256 with a local key manager, which saves the (often significant) cost of an external key manager and is now an included feature of the Lenovo SAN Unified Manager.
FOD - SW Features
Premium Features
Lenovo DE Series Hybrid models require an additional license for more than 128 snapshots and for Asynchronous or Synchronous Mirroring.
All Flash models come by default with the maximum number of snapshots included (DE4000F: 512 snapshots, DE6000F: 2048 snapshots). Asynchronous Mirroring is also included in All Flash models, while Synchronous Mirroring is always an additional fee-based license.
DE Series Key
[Price vs. performance/scalability positioning chart comparing Lenovo DE models (DE2000H, DE4000H, DE4000F, DE600S) against NetApp equivalents. *Note: a function/performance-limited version of 2800.]
Lenovo Accredited Learning Provider
Management Interfaces
Management Methods
Out-of-band Management
In-band Management
The access volume exists within the SANtricity storage operating system. The volume is mapped to a default LUN number, typically LUN 7. The access volume does not consume any disks or storage capacity. However, the volume exists as a logical entity for processing of in-band management commands and for host discovery. The volume is not displayed in the logical pane of the SANtricity Storage & Copy Services tab but does appear on the Host Mappings tab.
NOTE: The access volume can cause issues with preconfigured scripts. You might need to include syntax at the start of preconfigured scripts that removes the access volume, but the volume should exist in some form for full functionality.
• Select Typical to install all available packages on the host from which the installer was started. Use this option when using the host for storage system management and to send I/O.
• Select Management Station to install only the SMclient package. Use this option when using the host only for storage system management (not to send I/O).
• Select Host to install the packages that data hosts use. Use this option when using the host only to send I/O to the storage system and not for management commands.
• Select Custom to display a page on which you can select the packages that you want to install.
SMCli
• Part of the Utilities installation is also SMCli, a command-line interface for DE Storage management.
• Some features are not available in the GUI (for example, Thin Provisioning), so SMCli is required.
Example:
smcli 10.0.5.21 -u admin -p Passw0rd -k
show storageArray healthStatus;
SMCli usage
- Interactive mode
  - First, log in to the storage controller:
    smcli IP_Address -k -u Username -p Password
  - After login, enter commands one per line, each ended with a semicolon (;):
    show allvolumes summary;
- Command-line storage script
  - Login and commands are on the same line:
    smcli IP_Address -k -u Username -p Password -c "show allvolumes summary;"
- Script file
  - Create a text file and store the commands in the script file (commands only):
    smcli IP_Address -k -u Username -p Password -f Filename.txt
https://thinksystem.lenovofiles.com/help/index.jsp?topic=%2Fthinksystem_storage_command_line_interface_11.50%2F350BCB3A-B34B-43DB-882F-DB18BFC1B44C_.html
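As a rough illustration of the three invocation modes above, a small helper can assemble the smcli argument list (Python sketch; the flags -k, -u, -p, -c, and -f are taken from the examples in this guide, and build_smcli_command is an illustrative name, not part of any Lenovo tooling):

```python
# Sketch: assemble an SMcli invocation from the flags shown above.
# Verify flag names against the ThinkSystem CLI reference for your release.

def build_smcli_command(ip, user, password, command=None, script_file=None):
    """Return the argv list for an SMcli call.
    Interactive mode if neither command nor script_file is given."""
    argv = ["smcli", ip, "-k", "-u", user, "-p", password]
    if command is not None:
        argv += ["-c", command]          # one-line script mode
    elif script_file is not None:
        argv += ["-f", script_file]      # script-file mode
    return argv

print(build_smcli_command("10.0.5.21", "admin", "Passw0rd",
                          command="show allvolumes summary;"))
```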
RAID 0 rotates logical blocks for a given volume space across a set of disks with
no space allocated for data protection or redundancy.
RAID 1 is similar to RAID 0, but maintains a copy of all data on a mirror set of
disks. RAID 1 is sometimes called RAID 10 if it uses more than two disks at a
time.
RAID 5 allocates space for parity information. This parity information can be
used to recover data if hardware fails.
RAID 6 is similar to RAID 5, but RAID 6 allocates two spaces for parity. This
additional space is called the “Q” value on DE-Series storage systems.
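The parity idea behind RAID 5 can be sketched with XOR arithmetic (a simplified illustration only; real controllers operate on large strips, and RAID 6's second "Q" parity uses Galois-field math rather than plain XOR):

```python
# Simplified illustration of RAID 5 parity: parity is the XOR of the
# data strips, so any single lost strip can be rebuilt from the rest.
from functools import reduce

def xor_parity(strips):
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*strips))

data = [b"AAAA", b"BBBB", b"CCCC"]      # three data strips
parity = xor_parity(data)

# Simulate losing strip 1 and rebuilding it from parity + survivors:
rebuilt = xor_parity([data[0], data[2], parity])
print(rebuilt == data[1])               # parity recovers the lost strip
```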
Volume-Group Limitations
• You must select individual disks based on required capacity and speed levels.
• You cannot mix disk technologies (SAS, Near-Line SAS, SSD).
• RAID 5, and RAID 6 groups are limited to 30 disks.
• Manual or automatic selection of the drives after selecting RAID level
Some limitations apply to the creation of volume groups. [1] Customers cannot
mix different disk technologies, even when they manually choose individual
disks for the group. [2] Also, RAID 3, RAID 5, and RAID 6 groups can have no more than 30 disks.
When a drive fails, one of the hot spares is picked, and the failed drive's data is reconstructed onto this hot spare drive. This causes an I/O bottleneck while the rebuild sequentially re-creates the data. Access to the logical drive with the failure is significantly diminished during this time.
DDP Example
• 24-disk pool
• D-piece: 512 MB
• D-stripe: 4 GB
• Dynamic disk pool configuration: RAID 6 (8+2)
• 10 d-pieces: 1 d-stripe
• 1 d-stripe: 8 data d-pieces, 1 parity d-piece, 1 Q parity d-piece
Each d-piece is a contiguous 512-MB section of a disk. Within the disk pool, 10
d-pieces are written to 10 different disks, as selected by the controllers.
Together, [1]10 associated d-pieces make up a 4-GB d-stripe.
In this illustration, each color represents a d-piece written to a single disk. If you
look for orange pieces, you see that an orange d-piece has been written to each
of the 10 disks called out with orange arrows. [3]The 10 d-pieces make up the d-
stripe. The 10 disks are chosen pseudo-randomly by an algorithm that the
controllers run, and this “randomness” gives more protection if a disk fails in the
disk pool. Sometimes you may also hear a d-stripe referred to as a “mini RAID
group.”
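The arithmetic of the example can be checked in a few lines (values taken from the slide above):

```python
# Arithmetic behind the DDP example: a d-stripe is 10 d-pieces
# (8 data + 1 parity + 1 Q parity), each a 512 MB slice of one disk.
D_PIECE_MB = 512
DATA_PIECES, PARITY_PIECES = 8, 2

stripe_pieces = DATA_PIECES + PARITY_PIECES       # d-pieces per d-stripe
stripe_data_gb = DATA_PIECES * D_PIECE_MB / 1024  # usable data per d-stripe

print(stripe_pieces)    # 10
print(stripe_data_gb)   # 4.0 (the 4-GB d-stripe from the example)
```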
• If a disk fails, each d-stripe on the failed disk that contains data must be rebuilt:
Segments on other disks are read to re-create the data.
Data is written to a set of 10 disks in the pool.
• Rebuild operations run in parallel across all disks.
If one of the disks [1]fails, the other d-pieces of the d-stripes are used to
recreate each d-piece from the failed disk on another disk. In effect, multiple
RAID 6 pieces are affected by the failed disk, but each of those RAID 6 pieces is
affected independently, which makes simultaneous rebuilding possible.
[2]Preservation capacity on other disks of the disk pool is used to write the
reconstructed data back into the pool. Because multiple d-pieces can be
reconstructed simultaneously to the preservation capacity on multiple disks, the
reconstruction process is much faster in disk pools than it is in volume groups.
Remember that some capacity from the disk pool is automatically reserved as
preservation capacity. Although this capacity appears in the manager as a
quantity of disks, it is actually spread across all disks in the pool.
When disk pools are created, preservation capacity is reserved for emergency
use—much like the way a hot spare is used in a volume group. This capacity is
expressed as a number of disks. [1]The default amount of capacity that is used
as preservation capacity depends on the number of disks that are in the pool.
[2]After the pool has been created, you can increase or decrease the number of
preservation capacity disks, or you can set the number of disks to 0 (for no
preservation capacity). The maximum amount of capacity that can be preserved
is 10 disks.
Flexible: Add any* number of disks for more capacity. The system automatically rebalances data for optimal performance.
Collaborative: All disks in the pool sustain the workload, which is perfect for virtual mixed workloads or fast reconstruction.
With Dynamic Disk Pools, you can add or lose disks without
impact, reconfiguration, or headaches.
DDP Flexibility
Flexible disk pool sizing optimizes enclosure use. There are two basic ways to
implement dynamic disk pools.
You can implement one pool for all volumes. This configuration maximizes
simplicity, protection, and usage.
You can implement multiple smaller pools, with one volume per pool. This
configuration maximizes performance for bandwidth-intensive applications and
clustered file systems.
Data protection: disk pools use preservation capacity (no idle disks), while volume groups use hot spares. Snapshot images are supported by both.
Traditional volume groups can still be valuable in DE-Series storage systems. Volume
groups offer superior I/O performance if they are properly configured. In contrast, disk
pools offer extra data reliability and flexibility. The cost of the advantages of DDP
technology is the performance impact of the overhead required to determine which 10
disks to use and then divide data into d-pieces. If a particular application needs the
absolute highest level of I/O performance, a volume group is probably the most
appropriate choice.
In contrast, disk pools are much more useful for general-purpose data applications, for
which data availability and flexibility are more important.
As storage systems grow and use more disks, the recovery advantage of disk pools
becomes even more valuable. With more disks, there is a greater chance of multiple-
disk failures. Because DDP distributes preservation capacity, it can rebuild data from
failed disks much faster than volume groups, while exposing the system to much lower
risk of multiple failures that lead to a catastrophic loss of data.
Also, some users might not want to “waste” capacity on hot spares, but cannot afford
the risk of multiple disk failures. Use of disk pools eliminates this difficulty by spreading
preservation capacity across all disks in the disk pool.
If you need to use thin provisioning, then you must build your volumes in disk pools;
thin provisioning is not available for volume groups.
DE Series features
Thin Provisioning
Description
– Decouple physical storage allocation from provisioning
  o Allocate a small, initial amount of storage
  o Pull needed storage from the pool at the time it is needed
– Manage the storage pool as a resource independent of logical drives
– Share free space across all applications
– The thin provisioning feature requires use of DDP, not standard RAID options
Benefits
– Simplify storage and data management
  o Automate storage provisioning
  o Eliminate the impact of administrator uncertainty when sizing logical drives
– Improve storage cost efficiency
  o Gain a 35%-40% improvement in storage utilization and efficiency
  o Defer new storage capacity acquisition
  o Reduce physical footprint and power and cooling requirements
[Diagram: volumes A and B showing allocated/unused space and written data, with available storage remaining in the pool; thin provisioning yields capacity savings.]
Thin Provisioning is a capability some competitors have had for a while, and it is now being introduced on our products. The thin provisioning feature is provided as part of the base feature set with this release. There is no charge to use thin provisioning.
As most of you are aware, thin provisioning allows the decoupling of physical storage allocation from the actual provisioning of storage. A smaller amount of physical storage can be provided than what the application logically sees. This allows storage administrators to more easily manage their customers' storage needs. Used with DDP, it allows logical drive growth to be managed much more smoothly and easily.
Benefits found in the industry with thin provisioning include increased storage efficiency, with around a 35-40% improvement in storage utilization; reduced total capacity and power of online storage; and the ability to automate the provisioning of storage.
Thin Provisioning
▪ Virtual capacity: Reported to hosts as a result of READ CAPACITY
commands
▪ Provisioned capacity: The amount of physical space that is allocated to the
volume
▪ Provisioned Capacity Quota: Limit to the automatic expansion of the repository
▪ Consumed capacity: The amount of physical space that is currently written to the volume, including user data and volume metadata
▪ Warning Threshold: Percentage of the capacity quota consumed in the repository at which an administrator is alerted
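The capacity terms above can be modeled in a short sketch (class and attribute names are illustrative, not the product's API; the 75% warning threshold is an arbitrary example value):

```python
# Minimal model of the thin-provisioning terms defined above.
# Names are illustrative only, not the DE Series API.

class ThinVolume:
    def __init__(self, virtual_gb, quota_gb, warn_pct=75):
        self.virtual_gb = virtual_gb    # capacity reported to hosts
        self.quota_gb = quota_gb        # cap on repository auto-expansion
        self.warn_pct = warn_pct        # alert threshold (% of quota)
        self.consumed_gb = 0            # physically written so far

    def write(self, gb):
        if self.consumed_gb + gb > self.quota_gb:
            raise RuntimeError("provisioned capacity quota exceeded")
        self.consumed_gb += gb
        return self.warning_active()

    def warning_active(self):
        return 100 * self.consumed_gb / self.quota_gb >= self.warn_pct

vol = ThinVolume(virtual_gb=1000, quota_gb=100)
print(vol.write(50))    # False: 50% of quota consumed, below threshold
print(vol.write(30))    # True: 80% crosses the 75% warning threshold
```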
SSD Cache
The SSD Cache feature uses dedicated SSDs that are logically grouped to
hold frequently accessed data from user disk volumes. An SSD cache
functions as a secondary cache to the primary cache on the controller. An SSD cache provides the greatest benefit when:
▪ Read performance is limited by disk I/O per second (IOPS).
▪ A high percentage of read operations (greater than 80%) exists relative to write operations.
[Diagram: SSD Cache real-time caching compared with automated tiering across FC/SAS and Near-Line SAS/SATA media tiers.]
Here you can see how the SSD Cache feature differs from automated tiering
approaches that rely on physical data migration.
The SSD Cache feature is data-driven, real-time, and self-managing. It provides real-
time assessment of workload priorities. SSD Cache also optimizes I/O requests for cost
and performance without the need for complex data classification or excessive data
movement.
Several competitors use the approach in the right pane. This approach requires a type
of automated data migration in which data blocks are physically moved between media
tiers. Because resources must be used to move the data, this tiering method requires
additional I/O and CPU overhead, and results in delays. SSD Cache promotes data
dynamically and in real time.
Traditional automatic tiering systems have the advantage of supporting write-intensive
applications. If you know in advance which data needs to be promoted and moved, you
can expect some write-performance benefit. However, if you don’t know what data is
critical and high-use, you may accidentally place it on media to be tiered.
SSD Cache is data-dependent, so ensure that you understand the problem that you are
trying to solve before you select which volumes to use with SSD Cache.
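The read-caching idea can be sketched as a toy promotion policy (illustrative only; the actual SSD Cache algorithm and its promotion criteria are not described here):

```python
# Toy read cache that promotes a block to "SSD" after repeated reads,
# illustrating the caching idea only - not the product's algorithm.

class ReadCache:
    def __init__(self, promote_after=2):
        self.hits = {}            # read counts per block
        self.ssd = set()          # blocks promoted to the SSD cache
        self.promote_after = promote_after

    def read(self, block):
        """Return 'ssd' on a cache hit, 'hdd' otherwise."""
        if block in self.ssd:
            return "ssd"
        self.hits[block] = self.hits.get(block, 0) + 1
        if self.hits[block] >= self.promote_after:
            self.ssd.add(block)   # hot block: copy onto SSD
        return "hdd"

cache = ReadCache()
print(cache.read(7))   # hdd (first read, from rotating disk)
print(cache.read(7))   # hdd (second read promotes the block)
print(cache.read(7))   # ssd (now served from the SSD cache)
```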
Snapshot Features
• Provides fast-copy services
• Creates storage-based logical images with many advantages:
  – Point-in-time volume references
  – Images of volumes in disk pools, volume groups, or thin volumes
  – Use of Snapshot group repositories
  – Efficient use of space
• Does not provide full, physical copies of volumes, but can provide read/write accessibility to hosts via Snapshot volumes
• Is compatible with Microsoft Volume Shadow Copy Service (VSS) and Virtual Disk Service (VDS)
The Snapshot feature enables you to create storage-based logical images. You
can use these local point-in-time virtual images of a volume for testing, backup,
and recovery operations. For example, Snapshot images enable you to quickly
roll back to a known-good dataset to reverse the effects of viruses, data
corruption, and accidental deletions.
The Snapshot feature uses one data repository for all of the Snapshot images
that are associated with a base volume. Therefore, when a base volume is
written to, the Snapshot image feature requires only one write operation instead
of multiple, sequential writes. To use storage capacity more efficiently, the
feature combines Snapshot images into Snapshot groups, each of which uses a
single repository.
Because a Snapshot image only saves the changed data for a base volume, it is
not directly accessible to hosts for read/write operations. However, you can
convert a Snapshot image into a Snapshot volume to give hosts read/write
access to it.
Microsoft Volume Shadow Copy Service and Virtual Disk Service provide
external storage management, data protection, and compatibility with backup
applications.
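The copy-on-write behavior described above, where the repository stores only the blocks that change after the snapshot is taken, can be sketched as follows (an illustrative model, not the DE-Series on-disk format):

```python
# Copy-on-write sketch: the snapshot repository stores only the blocks
# of the base volume that change after the snapshot is taken.

class Snapshot:
    def __init__(self, base):
        self.base = base
        self.repository = {}                  # block index -> original data

    def write_base(self, idx, data):
        # Preserve the original block before the first overwrite.
        self.repository.setdefault(idx, self.base[idx])
        self.base[idx] = data

    def read(self, idx):
        # Snapshot view: repository copy if the block changed, else base.
        return self.repository.get(idx, self.base[idx])

base = ["a", "b", "c"]
snap = Snapshot(base)
snap.write_base(1, "B!")
print(base[1], snap.read(1))   # B! b - host sees new data, snapshot keeps old
```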
The Volume Copy feature enables you to create a point-in-time, full clone of a
volume by performing a byte-by-byte copy from a source volume to a target
volume. When the copy is complete, the target volume will be identical to the
source volume. Because the target volume is a real volume, rather than a logical
one, it can be used as part of a disaster-recovery solution. Both volumes must
be on the same storage system. The copy pair is an association between the
source volume and the target volume for a single Volume Copy operation.
If a volume is re-copied from the source to the same target volume, the data in
the target volume will be overwritten. Therefore, if you need a second clone of
the source volume, use a different target volume for the copy function.
Mirroring Features
The mirroring features provide [1]storage-based data replication, which enables you to
replicate data volumes from one storage system to another either synchronously or
asynchronously using [3]either Fibre Channel or IP.
Mirroring features maintain a copy of data that is physically distant from the site where the
data is used. If a disaster occurs at the primary site, such as a massive power outage or a
flood, the data can be quickly accessed from the remote location. Accessing the data from
a remote storage system is much faster than uploading off-site tape backups. Also, the
data that was in use at the time of the disaster does not differ at the remote site as much
as it might from a tape backup that is several days old.
DE-Series block-replication technology provides many benefits:
• Block-level updates, which reduce bandwidth and time requirements by replicating only the blocks that have changed
• Crash-consistent data that is maintained at a disaster recovery site
• The ability to test disaster recovery plans without affecting production or replication
• Replication between dissimilar DE-Series storage systems
• The use of a standard IP or Fibre Channel network for replication
The DE-Series storage systems offer two types of mirroring, which mirror data in
different ways to support different needs.
The synchronous mirroring feature is used for online, real-time data replication
between remote storage arrays. Any new data that is written to the local (or
primary) system is immediately transferred to the remote (or secondary) system.
The connection that links the local and remote storage systems must be fast, so
that network latency does not reduce local I/O performance.
Metro Mirroring is a synchronous mirroring mode. It means that the controller does
not send the I/O completion to the host until the data has been copied to both the
primary and secondary logical drives.
When a primary controller (the controller owner of the primary logical drive) receives
a write request from a host, the controller first logs information about the write
request on the mirror repository logical drive (the information is actually placed in a
queue). In parallel, it writes the data to the primary logical drive. The controller then
initiates a remote write operation to copy the affected data blocks to the secondary
logical drive at the remote site. When the remote write operation is complete, the
primary controller removes the log record from the mirror repository logical drive
(deletes it from the queue). Finally, the controller sends an I/O completion indication back to the host system.
When write caching is enabled on either the primary or secondary logical drive, the
I/O completion is sent when data is in the cache on the site (primary or secondary)
where write caching is enabled. When write caching is disabled on either the primary
or secondary logical drive, then the I/O completion is not sent until the data has been
stored to physical media on that site.
When a controller receives a read request from a host system, the read request is
handled on the primary storage system and no communication takes place between
the primary and secondary storage systems.
Important: The owning primary controller only writes status and control information
to the repository logical drive. The repository is not used to store actual host data.
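The write sequence described above can be summarized as an illustrative sketch (not product code): log the request, write both sides, remove the log record, and only then complete the I/O to the host.

```python
# The Metro Mirroring write path described above, as an illustrative
# sequence: log, write primary, remote write, dequeue, complete.

def mirrored_write(log_queue, primary, secondary, block, data):
    log_queue.append(block)        # 1. record request in mirror repository
    primary[block] = data          # 2. write the primary logical drive
    secondary[block] = data        # 3. remote write to the secondary drive
    log_queue.remove(block)        # 4. remote write done: delete log record
    return "io_complete"           # 5. only now signal the host

log, prim, sec = [], {}, {}
print(mirrored_write(log, prim, sec, 42, b"payload"))  # io_complete
print(prim == sec, log)            # both copies match, log queue empty
```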
Asynchronous mirroring
– Write operations to the secondary subsystem match the I/O completion order on the local subsystem for all logical drives in the consistency group
– Effective only when multiple logical drives are placed in the consistency group
• Primary benefits
  – Reduces the impact of latency when replicating over longer distances
  – Provides a performance improvement, compared to synchronous mirroring, for primary-site I/O (subsystem and application)
  – Enables effective replication over longer distances (WAN)
Recovery Guru
Event Logs
The DE-Series product lines continue to grow and expand, offering your customers more storage choices.
Modularity is a key design criterion for the DE-Series product lines. The DE-Series Hybrid line offers three
controllers and three sizes of expansions (12, 24, or 60 disks), multiple disk types, and a variety of host
connections—from iSCSI to FC. The DE-Series All Flash line offers two controller models in a 24-drive controller enclosure. Additional expansions can be added.
Both DE-Series storage systems can be customized: You can adjust cache-block sizes, RAID levels, and
segment sizes, and choose between traditional RAID volume groups or self-monitoring and self-repairing
dynamic disk pools. By configuring individual volume settings or volume group settings, you can optimize
performance for streaming sequential, high-bandwidth workloads or for random-transaction
performance.
DE-Series storage systems all have six nines (99.9999%) in their reliability ratings.
Lenovo DE Storage – Technical Workshop – Student Lesson Guide
Course Summary
In the course you have learned about the following topics:
• Lenovo DE Series Storage positioning and overview
• Management Interfaces
• Configuration and Architecture
• DE Storage features
– SSD Cache, FlashCopy, VolumeCopy
• Disaster Recovery Scenarios
• Monitoring & Troubleshooting