NimbelStorage NTS-2001-I Student Guide R5 Feb2017
Course Revision 5
Introductions
Name
Employer
Job function
Data storage experience
Hands-on experience with Nimble products?
What do you hope to get out of this course?
Raw: 48TB
Usable: 33TB
Effective: 49.5TB (assuming 50% compression)
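The capacity figures can be sanity-checked: the slide's own numbers imply a data-reduction ratio of 1.5x (49.5 / 33). A minimal Python sketch of that arithmetic, not a Nimble tool:

```python
# Effective capacity = usable capacity x assumed data-reduction ratio.
# The 1.5x ratio is implied by the slide's own figures (49.5 / 33).
def effective_capacity(usable_tb: float, reduction_ratio: float) -> float:
    return usable_tb * reduction_ratio

print(effective_capacity(33.0, 1.5))  # 49.5
```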
Nimble collateral utilizes both TB and TiB; NimbleOS uses TiB.
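The TB/TiB distinction matters when comparing datasheet figures to what NimbleOS reports; a quick Python illustration of the conversion:

```python
TB = 10**12   # decimal terabyte, used in marketing collateral
TiB = 2**40   # binary tebibyte, used by NimbleOS

def tb_to_tib(tb: float) -> float:
    """Convert a decimal-TB figure to the TiB value NimbleOS would report."""
    return tb * TB / TiB

print(round(tb_to_tib(48), 2))  # 48 TB raw reads as about 43.66 TiB
```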
AF9000
300,000 IOPS
AF7000
230,000 IOPS
AF5000
120,000 IOPS
AF3000
www.nimblestorage.com/technology-products/all-flash-array-specifications/
4U Chassis
48 SSD drive capacity
2 banks of 24 x 3.5” SSD drives
Bank A – SSD drives 1-24
Bank B – SSD drives 25-48
[Front-view figure: slots 1-24 per bank; LEDs for Power On, Heartbeat, NIC1/2, Power Fault, and Over Temperature; DFC latch; base carrier and SSD carrier release latches]
[Rear-view figure callouts:]
1. Power Supply
2. Power Intel
3. Fans
4. Management Network
5. Data Networks
6. KVM/Serial Port
7. SAS Ports
8. Controller
© 2016 NIMBLE STORAGE | CONFIDENTIAL: DO NOT DISTRIBUTE Course Revision 5 1-20
Controller Detail View (AF-Series and Newer CS-Series)
Spares Kit
Controller (NICs/HBAs not included)
Power Supply
Additional Spares:
AF-Series SSD
NIC / HBA
Cables
CASL Write Path (diagram shows 8K/4K blocks landing in the mirrored NVDIMMs of the active and standby controllers, then moving to DRAM)
1) Writes are sent by a variety of applications in variable block sizes
2) CASL places incoming writes into the active controller’s NVRAM
3) CASL mirrors the active controller’s NVRAM to the standby controller’s NVRAM
4) CASL acknowledges the write
5) Blocks are copied into DRAM
   a) What happens next depends on the type of array
   b) All-Flash Array:
      1. Variable block deduplication is applied
      2. Variable block compression is applied
      3. Blocks are formed into a 10MB sequential stripe
      4. Blocks are written to SSD and indexed in DRAM
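The numbered steps can be modeled with a toy Python sketch; hashlib and zlib stand in for CASL's fingerprinting and LZ4 compression, and the class and attribute names are illustrative only, not Nimble internals:

```python
import hashlib
import zlib

class CaslWriteSketch:
    """Toy model of the AF-Series write path: NVRAM ack, then dedupe,
    compress, and stripe on flush. Illustrative only."""
    def __init__(self):
        self.nvram = []            # stands in for the mirrored NVDIMM log
        self.index = {}            # fingerprint -> stripe offset (DRAM index)
        self.stripe = bytearray()  # the sequential stripe being formed

    def write(self, block: bytes) -> str:
        self.nvram.append(block)   # step 2: land in active NVRAM
        # step 3 (mirroring to the standby controller) is elided here
        return "ack"               # step 4: acknowledge the write

    def flush(self):
        """Steps 5b.1-5b.4: dedupe, compress, form the stripe, index."""
        for block in self.nvram:
            fp = hashlib.sha256(block).hexdigest()  # variable-block dedupe key
            if fp in self.index:
                continue           # duplicate block: store a reference only
            compressed = zlib.compress(block)       # zlib stands in for LZ4
            self.index[fp] = len(self.stripe)
            self.stripe += compressed
        self.nvram.clear()

w = CaslWriteSketch()
w.write(b"A" * 8192)
w.write(b"A" * 8192)   # duplicate of the first block
w.write(b"B" * 4096)
w.flush()
print(len(w.index))    # 2 unique blocks survive deduplication
```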
AF-Series: Deduplication
Deduplication Primer
Inline Deduplication vs. Source/Server Post-Process Deduplication
Inline Deduplication:
• Dedupe applied as the writes are processed, before the batch flush
Application-Aware Deduplication (diagram: DB, VDI, and Exchange volumes grouped into dedupe domains):
• Performance better than block-by-block
• Secured by strong SHA2-256 hashing
• Can be turned On/Off globally or per domain/application
Global Deduplication
Like-colored volumes denote block sharing; volumes with Dedupe = OFF do not participate
Deduplication Savings:
Savings   Ratio   Participating Volumes
3.79 TB   2.55X   18 of 18 vols
0 B       1.0X    1 of 1 vols
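The savings table's arithmetic: the ratio is logical data divided by physical data stored, so the footprints implied by a savings figure can be recovered algebraically. A small Python sketch of that relationship, not a Nimble tool:

```python
# saved = logical - physical and ratio = logical / physical, so
# logical = saved * ratio / (ratio - 1).
def implied_footprint(saved_tb: float, ratio: float):
    logical = saved_tb * ratio / (ratio - 1)
    physical = logical / ratio
    return round(logical, 2), round(physical, 2)

# The slide's 3.79 TB saved at 2.55X implies the underlying footprints:
logical, physical = implied_footprint(3.79, 2.55)
print(logical, physical)
```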
AF/CS-Series: Compression
What you need to know about Lempel-Ziv 4 (LZ4)
Compression Savings:
Savings   Ratio
1.4 TB    2.33X
0 B       1.0X
*March 2016
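A compression ratio like the table's 2.33X is simply raw bytes over stored bytes. A Python illustration; zlib stands in for LZ4, which is not in the standard library:

```python
import zlib

def compression_ratio(raw: bytes) -> float:
    """Raw size over compressed size, the same shape as the table's Ratio."""
    return len(raw) / len(zlib.compress(raw))

sample = b"the quick brown fox jumps over the lazy dog " * 256
print(compression_ratio(sample) > 2.0)  # repetitive data compresses well
```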
Pros
• Simple to implement, long history
Cons
• Poor random write performance
Cons
• Performance degrades over time
Rotated triple-parity layout (D = data; P, Q, R = parity, rotating across drives):
D P Q R S D D D D D D
P Q R D D D D D D D D
Q R D D D D D D D D P
R D D D D D D D D P Q
D D D D D D D D P Q R
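Only the simple XOR (P) parity is easy to show inline; Nimble's triple parity also maintains Q and R syndromes (Reed-Solomon style), which this hedged Python sketch omits:

```python
from functools import reduce

def xor_parity(chunks):
    """P parity: byte-wise XOR across equal-length data chunks."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), chunks)

def rebuild_missing(surviving_chunks, parity):
    """Recover one lost chunk: XOR the survivors with the P parity."""
    return xor_parity(surviving_chunks + [parity])

data = [b"\x01\x02", b"\x04\x08", b"\x10\x20"]
p = xor_parity(data)
print(rebuild_missing([data[0], data[2]], p) == data[1])  # True
```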
Goal:
Enable the NOS to take action to help reset or recover an unhealthy drive, and determine whether the RAID system can bypass a painful long rebuild or reduce its impact to a minimum.
Notes:
Not applicable when replacing a failed drive
“Incremental” rebuild, subject to certain conditions
SmartSecure Encryption
SmartSecure Software-Based Encryption
What is SmartSecure?
Encryption that:
Ensures the secrecy of data “at rest”
» Uses the AES-256-XTS cipher for cryptographic protection of data
» FIPS 140-2 Level 1 certified
CASL Read Path (AF-Series):
1) Read from NVDIMM
2) If not found, check DRAM
3) If not found, read from SSD using the index for a quick lookup
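The three-step lookup order can be sketched as a tiered read in Python (names and structures are illustrative, not NimbleOS internals):

```python
def casl_read(block_id, nvram, dram_cache, ssd_index, ssd):
    """Read path sketch: NVDIMM first, then DRAM, then SSD via the index."""
    if block_id in nvram:            # 1) freshest data, still in the write log
        return nvram[block_id]
    if block_id in dram_cache:       # 2) recently accessed data
        return dram_cache[block_id]
    offset = ssd_index[block_id]     # 3) index gives a quick lookup on flash
    data = ssd[offset]
    dram_cache[block_id] = data      # warm the cache for the next read
    return data

ssd = {0: b"cold-data"}
dram = {}
print(casl_read("blk1", {}, dram, {"blk1": 0}, ssd))  # b'cold-data'
```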
4-Node AF9000 Cluster: up to 1.4M IOPS, 8180TB capacity
SCALE UP:
AF9000 – 300K IOPS
AF7000 – 230K IOPS
AF5000 – 120K IOPS
AF3000 – 50K IOPS
SCALE DEEP
(IOPS based on a 70% read and 30% write workload)
Power Button
Path A / Path B
SAS IN port
Expander FW Status LED: Green = OK; Red = FW failed to load
SAS Link Status LED: Green = 12Gbps; Orange = 6Gbps; Red = Fault; Off = No link
Supports up to 2 AFS2 shelves:
First AFS2 – use P1
Second AFS2 – use P2
SAS: do not daisy-chain from the AFS2 expansion port (P2)
Power On: power the expansion shelves first, then the controller shelves
Power Off: power off the controller shelf and then the expansion shelves
(Diagram: Controller Shelf cabled to Expansion Shelf)
AF-Series Scale Up
AF7000 – 230K IOPS
AF5000 – 120K IOPS
AF3000 – 50K IOPS
(IOPS based on a 70% read and 30% write workload)
SCALE UP
Model Upgrades
To ensure the most accurate information regarding controller upgrades for the AF-Series array, download the Nimble All Flash Array Configuration Support Matrix from InfoSight.
4 Remove the onboard USB stick from the old controller and install it into the new controller.
During controller upgrade, you must remove the USB stick from the existing controller and
install it into the new controller. Failure to perform this step will prevent the new controller
from coming online.
7 Perform a failover to the new controller and ensure that the new controller is in active mode.
• In WebUI – Manage Arrays >> Select individual array >> click “Make Active”
• In CLI – use command failover
8 Confirm the non-upgraded controller is in standby mode and repeat steps 2 – 7 for the opposite
controller:
2. Label and disconnect all cables running to the Standby controller.
3. Remove the Standby controller
4. Remove the onboard USB stick from the old controller and install it into the new controller.
5. Install new controller and reconnect all cables.
6. Verify the controller powers up and is in standby mode.
7. Perform a failover to the new controller and ensure that the new controller is in active mode.
9 Verify that the model number in the WebUI or CLI reflects the new model number.
SCALE DEEP
AF3000 – 60K IOPS
Automatic data migrations
AF3000 and AF5000 + 1 AFS2; AF7000 + 2 AFS2; AF9000 + 2 AFS2
Pools
Nimble Scale-Out Group
» Simplify management
» Easy to grow or shrink
• An array is used in only 1 pool at a time
» Can migrate HW (live)
(Diagram: Arrays A1*-A4 grouped into pools; NIC 1 / NIC 2 cabled to Switch 1 / Switch 2)
Course Revision 5a
Module 02 Objectives
CS7000 • 4U24 • Up to 230K IOPS • 21TB to 882TB raw • 6 Expansion shelves • 1 AFS
CS5000 • 4U24 • Up to 120K IOPS • 21TB to 882TB raw • 6 Expansion shelves • 1 AFS
CS3000 • 4U24 • Up to 50K IOPS • 21TB to 882TB raw • 6 Expansion shelves • 1 AFS
CS1000/H • 4U24 • Up to 35K IOPS • 11TB to 882TB raw • 6 Expansion shelves • 1 AFS
CS700 • 3U16 • 125K IOPS • 12TB to 612TB raw • 6 Expansion shelves • 1 AFS (25.6TB max)
CS500 • 3U16 • 100K IOPS • 12TB to 612TB raw • 6 Expansion shelves • 1 AFS (25.6TB max)
CS300 • 3U16 • 30K IOPS • 12TB to 612TB raw • 6 Expansion shelves • 1 AFS (25.6TB max)
CS235 • 3U16 • 15K IOPS • 24TB to 612TB raw • 3x 6TB Expansion shelves • No AFS
CS215 • 3U16 • 15K IOPS • 12TB to 282GB raw • 3x 6TB Expansion shelves • No AFS, no FC
CS210 • 3U16 • 15K IOPS • 8TB to 98TB raw • 1x 6TB Expansion shelves • No AFS, no FC
Portfolio Consolidation
Adaptive Flash Array: CSx000
CS7000 – Up to 230,000 IOPS
CS5000 – Up to 120,000 IOPS
CS3000 – Up to 50,000 IOPS
CS1000 / CS1000H – Up to 35,000 IOPS
(IOPS based on a 70% read and 30% write workload)
www.nimblestorage.com/technology-products/all-flash-array-specifications/
CSx000 Hardware Tour
What’s in a CSx000 Array? Similar components as an AF-Series
Back: Dual Power Supplies (AC and DC available)
[Front-view figure: slots 1-24 in rows of four; DFC latch]
Drive Layout – CS1000, CS3000, CS5000, CS7000
[Front-view figure: slots 1-24; Power, Fault, and Over Temperature LEDs; DFC latch; Bank A / Bank B latches]
Cache: 3x Dual Flash Carriers
CS1000H: only the first 11 HDDs are populated; Cache: 2x Dual Flash Carriers
Once upgraded to fully populated, a CS1000H is referred to as CS1000FP
The WebUI will only display CS1000. To identify a CS1000H or CS1000FP, look at the controller shelf capacity or navigate to Manage >> Array >> [Select array] and view the visual representation.
Encryption
Sweeping
CASL Write Path (CS-Series):
5) Blocks are copied into DRAM
   a) What happens next depends on the type of array
   b) All Flash Array: …
   c) Hybrid Flash Array:
      1. Variable block compression is applied
      2. Blocks are formed into a 4.5MB sequential stripe
      3. The sequential stripe is written to hard disk drives
      4. Cache-worthy data and any data destined for pinned volumes is also sent to SSD
      5. Blocks are indexed
Rotated parity layout per stripe (D = data, P/Q/R = parity):
Stripe 0: D1 D2 D3 P Q R
Stripe 1: D2 D3 P Q D1 R
Stripe 2: D3 P Q D1 D2 R
Stripe 3: P Q D1 D2 D3 R
Stripe 4: Q D1 D2 D3 P R
Stripe 5: D1 D2 D3 P Q R
Note: the system will shut down if there are three disk failures prior to any one of those failed disks being rebuilt.
CS-Series Read Operations
CASL Architecture
1) Read from NVDIMM
2) If not found, check DRAM
3) If not found, read from SSD; if found, validate the checksum
The CS-Series arrays have the same changed-block benefit demonstrated in the AF-Series arrays.
Blocks are copied into DRAM; what happens next depends on the type of array (diagram contrasts the CS-Series and AF-Series paths, with the AF-Series applying variable block deduplication and compression to form a 10MB sequential stripe).
4-Node CS7000 Cluster: up to 920K IOPS, multi-PB capacity
SCALE UP:
CS7000 – 230K IOPS (AFS: up to 25.6TB of flash)
CS5000 – 120K IOPS
CS3000 – 50K IOPS
CS1000 / CS1000H – 35K IOPS
SCALE DEEP
(IOPS based on a 70% read and 30% write workload)
4U24 Chassis
24 x 3.5” Slots carry 21x HDDs + 3x DFCs
HDDs: 18 + 3 RAID
» New Nimble-branded HDD carriers
DFCs : Bank A pre-configured with 3 SSDs
» Bank B available for cache upgrades
SAS Out port / SAS In port
Expander FW Status LED: Green = OK; Red = FW failed to load
SAS Link Status LED: Green = 12Gbps; Orange = 6Gbps; Red = Fault; Off = No link
CSx000 ES2
Controller Shelf Expansion Shelf
ES2 ES2
Expansion Shelf Expansion Shelf
ES2 ES2
Expansion Shelf Expansion Shelf
ES2
Expansion Shelf
Activated
Once an expansion shelf is activated, it cannot be removed from the solution.
CSx000 Scale Up
CS5000 – 120K IOPS
CS3000 – 50K IOPS
CS1000 / CS1000H – 35K IOPS
(IOPS based on a 50% read and 50% write workload)
SCALE UP
Bank B cache upgrades: when an SSD is removed, the Cache Pool shrinks by the size of the removed SSD; when an SSD is added, the Cache Pool grows by the size of the added SSD.
All-Flash Shelf
ES2 ES2
Expansion Shelf Expansion Shelf
ES2 ES2
Expansion Shelf Expansion Shelf
ES2 ES2
Expansion Shelf Expansion Shelf
Capacity upgrade to full population; controller upgrade to CS3000
The WebUI will only display CS1000 or CS3000. To identify a CS1000H-11T, CS1000-22T, or CS3000H-22T, look at the controller shelf capacity or navigate to Manage >> Array >> [Select array] and view the visual representation.
Nimble Storage CS-Series Scale-to-Fit
CSx000 Scale-Out: 4-Node CS7000 Cluster – up to 1.4M IOPS, multi-PB capacity
Same capabilities as the AF-Series
Course Revision 5
Module 03 Objectives
https://10.206.9.110
Capacity
Events
Performance
See Appendix A of the Nimble Storage User Guide for a complete listing of capabilities by command
Click Join
Breakdown of usage and savings by application category
Monitor >> Performance
View performance metrics for selected timeframe on selected volume(s)
CS-Series – monitor cache hit/miss; if the miss rate is high, upgrade cache
Launch InfoSight to view a breakdown of latency factors
Pause data stream
Select timeframe to display
View performance metrics for each interface
View the initiator and number of connections for each volume
Mouse over the connection number to view connection addresses
Mouse over the volume to view details
AF-Series Failover
Controller A: Active → Standby
Controller B: Standby → Active
Module 04:
Course Revision 5
Module 04 Objectives
Our Mission
Maintain a maniacal focus on providing the
industry’s most enviable Customer Support
Comprehensive Telemetry
• Log files
• Sensors (>30 million per day!)
• Health status
5-minute Heartbeats
• Array <------> Nimble Support
InfoSight Engine
• Proactive Monitoring
(Telemetry examples: network statistics, CPU utilization, snapshot status, write latency statistics, write IOPS, compression)
InfoSight VMVision pinpoints VM-related issues:
• Determine VM latency factors: storage, host, or network
• Take corrective action on noisy-neighbor VMs
• Reclaim space from underutilized VMs
Predict Future Needs and Simplify Planning
Leverage predictive analytics to identify future needs and potential hot-spots specific to
your environment, with prescriptive guidance to ensure optimal long-term performance
Complete visibility through the cloud to all information you need to maintain a resilient
environment and ensure smooth operations
Blacklisting and Dynamic Upgrade Paths
Support Automation
Aggregate Studies
Wait on hold or wait for a call back
Caller motives questioned
Welcome to level 3 support
Level 3 support as easy as 1-2-3
nimblestorage support centers:
AMERICAS HQ – San Jose, CA • RTP – Raleigh, NC • EMEA – Reading, UK • Japan – Tokyo • APAC – Singapore
Local toll-free numbers land in any of the 5 centers around the globe seamlessly, 7 days a week, 365 days a year.
Next Business Day – “NBD”:
• United States, Canada, Europe, Australia, New Zealand, Bermuda, China, Hong Kong, Indonesia, Israel, Kenya, Thailand, Nigeria, United Arab Emirates, Singapore, Malaysia, Philippines, South Korea, India, South Africa, Taiwan, Vietnam
4 Hour Parts Delivery:
• United States, Canada, Europe, Australia, New Zealand, Bermuda, China (Major Cities), Hong Kong, Indonesia, Israel, Thailand, United Arab Emirates, Singapore, Malaysia, Philippines, South Korea, India (Major Cities), South Africa (Johannesburg), Taiwan, Vietnam (Major Cities)
4 Hour Parts Replacement – Onsite Engineer:
• United States, Canada, Europe, Australia, New Zealand, China (Major Metros), Hong Kong, Indonesia, Israel (Tel Aviv), Thailand, United Arab Emirates, Singapore, Malaysia, Philippines, South Korea, India (Major Cities), South Africa (Johannesburg), Taiwan, Vietnam (Major Cities)
How do we decide on new locations?
• We look at sales/partner coverage in the area
• Anticipate a minimum of 5 installed systems in the area within next 12 months
• 10 to 14 days to open a new Depot in most locations around the world
Module 05:
Course Revision 5
Module 05 Objectives
AF = 4U CSx000 = 4U
AFS2 = 4U AFS = 4U
ES2 = 4U
Precaution
Ensure there is at least 24” of cable slack
at the back of the array or shelf.
» Allow enough room to slide the array or
expansion shelf out 12” from the front of the
rack in order to replace a center chassis fan.
Connect one to
commercial power and
one to backup power.
Controller A / Controller B
Interface Pairs
» Controller A eth1 & Controller B eth1
» IP addresses float between the paired interfaces
Ethernet Ports
eth5 eth6
Management Network
» The Array Management IP address
» The 2 Controller Diagnostic IP addresses
Data Network
» The Data IP addresses
Discovery Addresses
» iSCSI Discovery Address
Targets
Volume1
Volume2
Volume3
Note: in 2.x, every array in the group must have access to each subnet
iSCSI Switch Selection Guidelines
Network Design
Best if dedicated, redundant iSCSI network
Otherwise, use VLANs to keep iSCSI traffic separate
Switch Attributes
Good-quality Layer 2 or Layer 3 managed switches
» Stacked preferred, but be aware of stacking issues, such as what happens when the master switch in the stack fails; upgrading switch firmware may also require an outage
» Size ISLs carefully; the concern is under- or over-specifying the total bandwidth required
» ISLs – may want to use IP Address Zones to prevent iSCSI traffic from crossing the ISL
Support for Jumbo Frames (with Flow Control) is Desirable
Non-Blocking Backplane
» Bandwidth of the backplane >= (# of ports) * (bi-directional port speed)
Challenge
» How to minimize Inter-Switch Links for traffic, especially if connections are automatic?
(Diagram: hosts 172.16.0.81 and 172.16.0.82 on eth1/eth2; the discovery address 172.16.0.20 redirects to data IPs 172.16.0.21 and 172.16.0.22 on a single data subnet)
Answer: IP Address Zones
» Discovers Data Address via Discovery IP
» Uses Host IP address to select Data Address
» Examples:
• Even/Odd: odd vs. even numbers kept together
• Bisect: low vs. high numbers kept together
• Default: single zone (no division)
(Diagram: using the Odd/Even zone, host .81 is directed to data IP .21 on the same switch, avoiding ISL traffic)
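The Even/Odd zone behavior can be illustrated in Python: the parity of the host's last octet picks the matching data IP so traffic stays on one switch. The addresses come from the slide; the logic is a simplified sketch, not the real zoning code:

```python
def pick_data_ip(host_ip: str, data_ips: list) -> str:
    """Even/Odd zone sketch: match the parity of the host's last octet."""
    host_parity = int(host_ip.rsplit(".", 1)[1]) % 2
    for ip in data_ips:
        if int(ip.rsplit(".", 1)[1]) % 2 == host_parity:
            return ip
    return data_ips[0]  # fall back to the first data IP

# Host .81 (odd) is steered to data IP .21 (odd), avoiding the ISL.
print(pick_data_ip("172.16.0.81", ["172.16.0.21", "172.16.0.22"]))
```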
Practice: Do not use Spanning Tree Protocol (STP), or use PortFast
Notes: Do not use STP on switch ports that connect to iSCSI initiators or the Nimble storage array network interfaces.
Practice: Configure flow control on each switch port
Notes: Configure Flow Control on each switch port that handles iSCSI connections. If your application server is using a software iSCSI initiator and NIC combination to handle iSCSI traffic, you must also enable Flow Control on the NICs to obtain the performance benefit.
Practice: Disable unicast storm control
Notes: Disable unicast storm control on each switch that handles iSCSI traffic. However, the use of broadcast and multicast storm control is encouraged.
Practice: Use jumbo frames when applicable
Notes: You must have jumbo frames enabled from end to end for them to work correctly.
Practice: Test network connectivity
Notes: Use the ping command to test network connectivity and to help determine whether jumbo frames are enabled across the network, for example: vmkping -d -s 8972 x.x.x.x
[FC topology figure: Cisco UCS 6296 fabric interconnects, FC Switch A / FC Switch B, server PCIe ports and CIMC, dual 650W AC PSUs, single-initiator/single-target zones]
Server (Initiator)
1. Verify Fabric/Zoning
» Single Initiator
» Single Target
2. Configure Initiator Group
» Server port
» Volume
3. Create volume and link to the initiator group
4. Verify connection from volume to assigned initiators
Course Revision 5
Module 06 Objectives
Important: The computer used to initially configure the array must be on the same
physical subnet as the Nimble array, or have direct (non-routed) access to it.
• Ensure Adobe Flash Player is installed
» Set a static IP: Set your IP address to the same subnet that your array management IP address
will be on.
» Have your array controllers A & B correctly cabled to your switch fabric per the previous
drawings.
» Complete all your switch configurations for Flow Control, Jumbo Frames, Spanning tree,
Unicast, etc.
» Install the Nimble Windows Toolkit (NWT) on the Laptop or Server you are using for the
installation.
Shows
• Array(s)
• Model
• NOS Version Number
Steps
• Select the array to set up (AF-000917 All Flash Array)
• Click the Next button
• The next popup screen informs you that Setup Manager is going to AF-000917
Click “Finish”
Email Alerts
Click “Test”
Setup Email Alerts
3 After testing Email Alerts, click the “Save” button to save the settings
Manage >> Arrays
5 Verify the Standby Controller is now set to “Active” and that all connections are good.
See Appendix B in Nimble Storage Installation and Configuration Guide 2.2 for complete listing
Outgoing Server Ports
*An array sends alerts through HTTPS POST back to Nimble Support if AutoSupport is enabled
**Default (configurable)
SNMP
Setup SNMP
Configure through the NimbleOS GUI (Administration > SNMP) or CLI snmp command
Arrays use the alert level setting for email alerts to determine the
events that are sent as SNMP traps
SYSLOG
Support for Red Hat Enterprise Server and Splunk implementations of SYSLOG
Tasks:
Launch Nimble Setup Manager (NSM)
Recall key configuration steps
Launch array GUI and configure basic array parameters including autosupport
Tasks:
Utilize the CLI to complete initial setup
Course Revision 5
Module 07 Objectives
Physical storage resource: the pool, protected by Triple Parity RAID or Triple Parity+
(Diagram: pool → volume → consumed space, with the volume reserve carved from the pool)
A reservation reserves a guaranteed minimum amount of physical space from the pool for a volume
Volume Quotas
(Diagram: volume within a pool, showing volume reserve and volume quota)
A quota sets the amount of a volume that can be consumed before an alert is sent and writes are disallowed.
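The quota accounting reduces to simple comparisons; a minimal Python sketch (the 80% warning threshold and field names are illustrative assumptions, not NimbleOS defaults):

```python
def can_write(requested_mb: float, used_mb: float, quota_mb: float) -> bool:
    """A write is disallowed once it would push usage past the volume quota."""
    return used_mb + requested_mb <= quota_mb

def alert_needed(used_mb: float, quota_mb: float, warn_pct: float = 80.0) -> bool:
    """An alert fires as usage approaches the quota (threshold is illustrative)."""
    return used_mb / quota_mb * 100 >= warn_pct

print(can_write(10, 95, 100))   # False: would exceed the quota
print(alert_needed(85, 100))    # True: past the 80% warning threshold
```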
Snapshots
Snapped Volume
Snapshot Reserve – an accounting for a set amount of space that will be guaranteed available for the snapshot.
Snapshot Quota – an accounting for the total amount of space a snapshot can consume.
AF-Series Performance Policy list
The following parameters are set when building a custom performance policy:
» Application Category
» Storage Block Size
» Compression On/Off
» Caching On/Off
» Space Management Parameters
» Deduplication Enabled/Disabled (AFA only)
A set of host initiators that can be assigned access to a specified volume via Access
Control.
Can be created at volume creation or as a separate task
» Manage >> Initiator Groups
The parameters within an Initiator Group will depend on the protocol - iSCSI or Fibre Channel
Limiting Subnets
» Only targets on specified subnets can be
accessed by an Initiator Group
• Exchange-consistent snapshots
• SQL/Exchange uses the MS VSS framework and requires NPM on the Application Host – more later
Add Schedules
Volume Name
» Helps to include host and app in
volume name
Performance Policy
» Use existing policy (based upon
app type)
Size
» Set the volume size seen by the
application
Thresholds (click to expand)
» Best to use the defaults
» Setting Reserve > 0 reduces thin provisioning
Protection
» Best to use a volume collection for all
of your volumes
Protection Schedules
» Displays Protection Schedule
associated with selected volume
collection
Caching
» Normal – cache all hot data
» Pinned – cache entire volume
Volume Filter
Volume Summary
Volume Usage Key
NCM for vSphere 6.0 or higher required to support VVols on iSCSI arrays
What is installed:
» Nimble Connection Service (NCS)
» Nimble Path Selection Plugin (PSP)
Linux NCM
» RHEL OS versions 6.5, 6.7, 7.0, 7.1
» Ensure that connection redundancy is always maintained to the Nimble array.
» Manage multipath connections (at the I/O region level) to volumes striped across multiple
arrays.
» Configure block device level settings for optimal performance.
» Automatically manage iSCSI and multipath configuration.
Prerequisites
» sg3_utils and sg3_utils-libs
» device-mapper-multipath
» iscsi-initiator-utils (for iSCSI deployments)
Tasks:
Build a volume
» Configure an initiator and an initiator group
» Create a volume collection
» Configure protection schedules
Tasks:
Connect to a host
» Launch NCM
» Configure the Windows host via NCM
» Connect and examine the new volume
» Prepare and mount the volume on a Windows host
Course Revision 5
Module 08 Objectives
What is a Snapshot?
(Diagram: snapped data, changed blocks, and new data; the snapshot reserve accounts for snapped data and changed blocks)
10:00 snap
If block B is changed, the original state can be recovered by rolling back to the snap taken at 10:00.
The next snap taken (11:00) captures the change made to block B.
Any snapshot can be used to recover from, without the loss of snapshots taken before or after the snapshot being recovered from.
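A toy redirect-style model makes the rollback property concrete: a snapshot freezes the block map (pointers, not data), so rolling back to 10:00 does not disturb the 11:00 snapshot. A Python sketch, not CASL internals:

```python
class VolumeSketch:
    """Toy snapshot model: a snap freezes the block map as pointers."""
    def __init__(self):
        self.blocks = {}   # block id -> data
        self.snaps = {}    # snap name -> frozen copy of the block map

    def write(self, blk, data):
        self.blocks[blk] = data

    def snap(self, name):
        self.snaps[name] = dict(self.blocks)   # pointers only, no data copy

    def rollback(self, name):
        self.blocks = dict(self.snaps[name])

v = VolumeSketch()
v.write("B", b"v1"); v.snap("10:00")
v.write("B", b"v2"); v.snap("11:00")
v.rollback("10:00")
print(v.blocks["B"])  # b'v1' -- and the 11:00 snap still holds b'v2'
```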
Synchronous Replication –
• The process of copying data over a storage area network (SAN), local area network (LAN), or wide area network (WAN) so there are multiple up-to-date copies of the data. RPO = zero.
Asynchronous Replication –
• The write is considered complete as soon as local storage acknowledges it. Remote storage is updated, but with some lag. Asynchronous replication is a schedule-based event. Lost data ≠ 0.
Change Rate –
• The amount of data that is changed/modified in a given period of time.
• Note: the higher the change rate, the more bandwidth you may need to ensure RPO/RTO can be met.
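The change-rate note turns into simple arithmetic for capacity planning; a rough lower bound in Python (real transfers send compressed snapshot deltas, so actual need is usually lower, and the 45 GB/hr figure is purely illustrative):

```python
def min_replication_bandwidth(change_rate_gb_per_hr: float) -> float:
    """Mbit/s needed just to keep pace with the change rate (decimal units)."""
    gb_per_sec = change_rate_gb_per_hr / 3600
    return gb_per_sec * 8 * 1000  # GB/s -> Gbit/s -> Mbit/s

print(round(min_replication_bandwidth(45), 1))  # 45 GB/hr needs ~100 Mbit/s
```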
Changing Data
A A’ A’’ A’’’
Time
Snapshot Schedules
» Determined by Volume Collections which
include the following parameters:
• Schedule name
• How often the snapshot should be taken
• Timing of the snapshot
• Which days to run the snapshot on
• Number of snapshots to retain
3. Enter a name for the snapshot and select desired Status and Writability settings.
4. Click OK to immediately take a snapshot
Manage >> Protection >> Volume Collections >> [Select a volume collection]
Pointers
Tasks:
Recover from a Snapshot
» Simulate a data loss event
» Create a zero-copy clone
» Connect to the clone and recover data
» Disconnect and delete the clone
Course Revision 5
Module 09 Objectives
SmartReplicate:
• Efficient (thin, block diffs + data reduction)
• WAN optimized
• Secure (AES-256bit encryption)
• No license required
(Diagram: AF9000 Production replicating to a CS700 at the DR site)
Primary: snapshots at 9:00, 9:15, 9:30, 9:45 – no backup window, rapid local recovery
Backup (Tier 3 + Dedupe): cost-effective, simple DR
Disaster Recovery (Tier 3 + Dedupe): replicas at 9:00, 10:00, 11:00, 12:00
Space-efficient clones: instantaneous zero-copy clones at 9:00, 10:00, 11:00, 12:00 (e.g., for test and dev instances)
Partner:
» Identifies a Nimble array that will replicate to and/or from
Snapshot Schedule:
» Attribute of a volume collection
» Details when to snapshot and replicate and to which partner (one or more of
these per volume collection)
Throttle:
» Provides the ability to limit replication transmit bandwidth
Identifies a Nimble array that can replicate to and/or from this array
Must be created on upstream and downstream arrays
Attributes:
» Name: must match group name
» Hostname: must match array’s management IP address
» Secret: shared secret between partners
Connected: successfully established communications
» Management process re-affirms 1/minute
» Test function performs this on demand
Synchronized: successfully replicated configuration, updated as needed and every
4 hours
Pause/Resume:
» Terminates all in-progress replications inbound or outbound to/from this partner and does not allow new ones to start until Resume
» Persists across restarts
Test (button in GUI):
» Perform basic connectivity test
• Management process Controller A to B and B to A
• Data transfer process Controller A to B and B to A
Throttles:
» Limit transmit bandwidth to this partner
» Scheduling parameters include days, at time, until time
» Existence is mutually exclusive with array throttles (a system can contain
array-wide throttles or partner-wide throttles, but not both)
Replication Partner Notes
• Replication happens on the Management IP (default)
• Custom option available: specify an IP/subnet, e.g. a 10G interface; applies to the partner
Transfers:
» Asynchronous / triggered by snapshots
» Transfers compressed snapshot deltas
1. Navigate to: Manage >> Protection >> Replication Partners >> New Replication Partner
Pools
» Destination replicas will be created by
default in the specified pool
Groups related volumes into a set that is snapshotted and replicated as a unit
Contains one or more Snapshot Schedules that specify:
» When to take snapshots
» To/from replication partner
» Which snapshots to replicate
» How many snapshots to retain locally
» How many snapshots to retain on the replica
» Alert threshold
Created on upstream array, automatically replicated to downstream
Replicated as configuration data along with all snapshot schedules that define a
downstream partner
» Sent to downstream partner as changes are made (transformed on the downstream, i.e. “Replicate To” becomes “Replicate From”)
» Volumes created in offline state downstream as needed
» Clones created downstream only if parent snapshot exists
Replication status:
» Completed: Replication to partner is completed.
» Pending: Replication to partner not yet started (pending completion of prior snapcoll)
» In-progress: Replication in progress and status shows amount of progress
» N/A: Upstream: non-replicable, Downstream: always shows this status
Replication
No Encryption
No Dedupe
Replicating between 3.x Arrays using Dedupe and Encryption
No Encryption
Yes Dedupe
Replicating between 3.x Arrays using Dedupe and Encryption
Yes Encryption
No Dedupe
Replicating caveats
(Diagram: with Yes Encryption / No Dedupe upstream, replicated data stays encrypted, keeping the upstream domain key; with Yes Encryption / Yes Dedupe, data is re-encrypted using the downstream domain key)
If the destination is not trusted (no dedupe, not using encryption but supports encryption):
» The data is decrypted using the upstream domain key, un-deduped, and re-encrypted using the upstream volume key; the data is then stored downstream as-is
(Diagram: a Yes Encryption / Yes Dedupe upstream will decrypt and un-dedupe before sending; the No Encryption / No Dedupe downstream stores the data as-is, encrypted with the upstream volume encryption key)
SmartReplicate Disaster Recovery
General DR Operations
Handover
» Graceful transition between two sites (no data loss)
» Examples:
• Handover to DR site for non-DR situations
• Handover from DR site when recovering from a DR situation
Promote
» Making the DR site primary with the data it has (possible data loss)
» Example:
• Promote a DR site during a disaster
Demote
» Clearing ownership status from a former source
» Example:
• Production system comes back up after promotion to DR site.
(Diagram: after the 12PM snap and handover, the original upstream array becomes the temp downstream array, and the original downstream array becomes the temp upstream array)
Replication Concepts - Handover
(Diagram: roles reverse – the original upstream array becomes the downstream array, and the original downstream array becomes the upstream array)
Demote:
» Offlines volumes
» Relinquishes ownership of volcoll objects
» Stops taking local snapshots
Example:
» After an outage and a promote away from an upstream system, when the upstream system comes back online, demote:
• Prepares the system to become the new downstream partner (to manually re-establish reverse replication)
• or prepares the system for fail-back (handover to the original upstream or production system)
• Looks for a common snapshot as a starting point to replicate from
Recovery Scenarios – Testing at DR site (still replicating)
1. Go to downstream replica
2. Clone the snapshot (create a first class volume)
3. Add/adjust ACLs on the volume
4. Mount the volume
5. Interrogate/Test the data and applications (via Windows,
ESX, etc.)
6. Unmount the volume
7. Delete the cloned volume
Failover to DR site
1. Promote downstream volume collections at DR site
2. Add/adjust ACLs on the volumes
3. Mount volumes to application servers (Windows/ESX)
4. Start production environment at DR site
Tasks:
Setup partner replication
» Configure the upstream array
» Configure the downstream array
» Test the connection stats