SVC V6 Update (6.1, 6.2, 6.3, 6.4)


IBM System Storage™

System Storage SAN Volume Controller (SVC)


From 6.1 to 6.4 release code

Clarence POUTHIER
Field Technical Sales Specialist
IBM France
clarence_pouthier@fr.ibm.com

© 2011 IBM Corporation



SVC V6.1 Highlights


 Console/GUI usability enhancements
► New design incorporating SVC user feedback
► GUI embedded in the SVC cluster software – no separate SVC GUI Console code requiring upgrades
► Browser-based GUI accessible from other operating systems, including Linux
► SVC objects can be named with up to 63 characters
► Allows TPC and Director users to seamlessly launch SVC task panes

 Volume management improvements
► Efficient SSD usage via dynamic migration of busy extents – Easy Tier

 Greater scalability by increasing the controller WWNN limit from 64 to 256
► Accommodates more storage systems behind a single SVC cluster – some storage subsystems use more than one WWNN, so this allows more systems behind SVC

 Cluster enhancements, including 1PB MDisks and 8GB extents
► Allows virtualizing LUNs larger than 2TB
► Allows virtualizing up to 32PB of storage on a single SVC cluster (see the arithmetic sketch below)
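The 32PB figure follows directly from the larger extent size. A back-of-the-envelope check in Python (a sketch; the 4-million-extents-per-cluster limit is an assumption, not stated on this slide):

```python
# Rough check of the 6.1 scaling claim: extents per cluster x extent size.
EXTENTS_PER_CLUSTER = 4 * 1024 * 1024   # assumed cluster-wide extent limit (4M)
MAX_EXTENT_SIZE_GB = 8                  # new 8GB maximum extent size in 6.1

total_gb = EXTENTS_PER_CLUSTER * MAX_EXTENT_SIZE_GB
print(f"{total_gb / 1024**2:.0f} PB addressable")   # -> 32 PB
```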


Data Migration – By Volume and Sub-Volume

 Volume-based data migration (between storage pools, e.g. an SSD pool and an HDD pool)
► Change disk class
► Change RAID type
► Change RPM
► Change striping

 Easy Tier automated (sub-volume) data migration
► Easy Tier enabled in a hybrid pool

Easy Tier – Architecture

[Diagram: applications (Exchange, Domino, DB2) drive I/O to a volume on an SVC node; the Hotspot Management Code chains IOM → DPA → DMP → DM to relocate extents from magnetic disk to SSD.]

 A "hot spot" is formed on a magnetic disk when an application makes frequent use of the same area or extent of a volume

 The I/O Monitor (IOM) captures access patterns, generates usage statistics, and sends them to the Data Placement Advisor

 The Data Placement Advisor (DPA) identifies hot spots and outputs potential data migrations to the Data Migration Planner

 The Data Migration Planner (DMP) performs analysis, based on the physical storage characteristics, to deliver a recommended data migration plan to the Data Migrator

 The Data Migrator (DM) confirms and schedules migration activity based on that plan, using SVC function to seamlessly relocate the data to higher-performing storage without any application interruption
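As an illustration of the IOM → DPA → DMP → DM flow, here is a minimal Python sketch. All names, thresholds, and the Counter-based statistics are illustrative stand-ins; the real Easy Tier heuristics are internal to SVC.

```python
from collections import Counter

HOT_THRESHOLD = 100   # assumed: extents accessed more often than this are "hot"

def io_monitor(io_trace):
    """IOM: capture access patterns as per-extent usage statistics."""
    return Counter(io_trace)

def data_placement_advisor(stats):
    """DPA: identify hot spots and propose candidate migrations."""
    return [extent for extent, hits in stats.items() if hits > HOT_THRESHOLD]

def data_migration_planner(candidates, free_ssd_extents):
    """DMP: fit candidates to the physical storage characteristics."""
    return candidates[:free_ssd_extents]

def data_migrator(plan):
    """DM: schedule the relocations (printed here; seamless in real SVC)."""
    for extent in plan:
        print(f"migrate extent {extent}: HDD -> SSD")

stats = io_monitor([7, 7, 7, 42, 9] * 50)   # extent 7 is hammered (150 hits)
data_migrator(data_migration_planner(data_placement_advisor(stats), 4))
```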


SVC 6.2 Enhancements


 New hardware engine, 2145-CG8, with 10GbE iSCSI
 Software enhancements
► VMware vStorage API for Array Integration (VAAI) and vCenter management support
► Built-in real-time performance monitoring
► Ability to use the FlashCopy function with Remote Mirror volumes
► 4x larger maximum virtualized environment (32PB)
► Restored support for internal SVC SSDs
► Enterprise licensing

 Interoperability
► HP StorageWorks P9500 Disk Array
► Hitachi Data Systems Virtual Storage Platform (VSP)
► Texas Memory Systems RamSan-620
► EMC VNX models


10 Gigabit iSCSI
[Diagram: host servers with 10 Gigabit iSCSI HBAs attach over the LAN (new); Fibre-attached hosts attach through SAN zones; behind the Storwize V7000, SAN storage zones connect optional external Fibre-attached storage.]

 Server attachment to SVC using 10 Gigabit iSCSI HBAs
 Not used for external storage attachment or system-to-system communication, which use Fibre Channel
 Enables customers to connect servers to SVC using high-performance, lower-cost IP networks
 Improves iSCSI throughput up to 7x compared with 1 Gigabit
 Enables support for more iSCSI-attached servers, further helping reduce costs


New SVC 2145-CG8 Engine Supports 10 Gigabit iSCSI


 New SVC engine based on the IBM System x3550 M3 server (1U)
► Intel® Xeon® 5600 (Westmere) 2.53 GHz quad-core processor
► 24GB of cache
► Four 8Gbps FC ports
► Minor improvements over Model CF8 in MB/s and IOPS
 Optional features – for each node you can choose:
► Solid State Devices (SSDs): up to four internal 146GB SSDs – OR –
► 10Gb Ethernet support for 10Gb iSCSI host attachment and future FCoE support
– Two 10Gb ports per storage engine
– A storage engine supports 10Gb iSCSI or internal SSDs, but not both
– Factory install or nondisruptive field upgrade (field upgrades ship starting August 1)
 New engines may be intermixed in pairs with other engines in SVC clusters
► Mixing engine types in a cluster results in the volume throughput characteristics of the engine type in that I/O group
► The cluster non-disruptive upgrade capability may be used to replace older engines with new CG8 engines
 Replaces the SVC 2145-CF8 engine

[Diagram: rear view of the CG8 engine showing the 10Gbps iSCSI, 8Gbps FC, and 1Gbps iSCSI ports.]

VMware vStorage APIs for Array Integration (VAAI)


What this is
 Integration with vStorage APIs to enable VMware control over common storage operations in a virtual server environment
 Includes the following operations:
► Optimized VM provisioning (Write Same) – uses the storage system to zero out new volumes
► Block locking (Atomic Test & Set) – improved VMFS performance with fewer reservation conflicts
► Fast copy (optimized cloning, XCOPY) – storage-side volume-to-volume cloning

Why it matters
 Improves performance and efficiency in deployment and management of virtual volumes
 Offloads I/O from production virtual servers to improve application performance
 Dramatic improvements for some data-intensive functions
 Makes Storwize V7000 significantly more attractive for VMware environments

[Diagram: the VMware storage stack – provisioning/cloning and the data mover sit above VMFS, the VMware LVM, and the NFS client, calling down through the vStorage APIs (including the vStorage API for Multi-Pathing) to the multipathing stack, HBA drivers, and NIC.]

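To see why the XCOPY offload matters, consider this toy comparison (illustrative Python, not a real VMware or SVC API): the software data mover drags every block through the host, while the offloaded path is a single command handed to the array.

```python
# Toy model of cloning with and without hardware-assisted fast copy (XCOPY).
def clone(src: str, dst: str, blocks: int, xcopy: bool) -> list[str]:
    if xcopy:
        # One EXTENDED COPY command; data moves entirely inside the array.
        return [f"XCOPY {src} -> {dst} ({blocks} blocks)"]
    # Software data mover: every block crosses the SAN twice (read, then write).
    return [f"READ {src}[{i}]; WRITE {dst}[{i}]" for i in range(blocks)]

print(len(clone("vm-a.vmdk", "vm-b.vmdk", 1_000_000, xcopy=True)))   # 1
print(len(clone("vm-a.vmdk", "vm-b.vmdk", 1_000_000, xcopy=False)))  # 1000000
```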

Real-time Performance Statistics

 Gathers system-level performance statistics (CPU utilization; port utilization and I/O rates; volume and MDisk I/O rate, bandwidth, and latency) in real time, with sampling rates down to 5 seconds
 Provides a snapshot view for immediate monitoring, with 5 minutes of performance history
 Get "immediate" monitoring during environmental changes
 Troubleshoot sudden drops in performance
 Pair up with TPC for complete performance solutions
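The "5-second samples, 5 minutes of history" snapshot view behaves like a fixed-size ring buffer. A minimal sketch (field names are illustrative, not the SVC statistics schema):

```python
from collections import deque

SAMPLE_INTERVAL_S = 5
HISTORY_S = 5 * 60                                       # 5 minutes of history
samples = deque(maxlen=HISTORY_S // SAMPLE_INTERVAL_S)   # 60 slots; old data falls off

def record(cpu_pct: float, port_iops: int, mdisk_latency_ms: float) -> None:
    samples.append({"cpu": cpu_pct, "iops": port_iops, "lat_ms": mdisk_latency_ms})

record(35.0, 12_000, 1.4)
print(f"{len(samples)} samples covering {len(samples) * SAMPLE_INTERVAL_S}s")
```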


FlashCopy Target as Remote Mirror Source


 Enables the target of a FlashCopy operation to be a Remote Mirror source volume
► Comparable to a similar DS8000 capability
► A further example of Storwize V7000 delivering enterprise-class function for mid-sized customers
► Increases flexibility in using FlashCopy

 Potential use cases
► Replicate backup copies created with the FlashCopy function to a remote location
► Use the FlashCopy function to restore a corrupted volume (from a backup copy) when that volume is the source of a Remote Mirror pair

[Diagram: a FlashCopy target volume acting as the source of a Remote Mirror relationship.]


SVC removes the physical "site" license boundary

 SVC standard software (including the FlashCopy and Metro/Global Mirror features) is now licensed to the customer (enterprise), no longer requiring customers to obtain a separate license to deploy SVC at additional physical sites within a country
 The change applies to new software transactions and new software maintenance renewals/reinstatements only; changing from multiple sites to a single customer identity may require customers to transfer entitlement records currently in AAS into Passport Advantage
 Combining under one customer identity does not use any value exchange of TBs in different tiers, and customers are not entitled to any refunds for software licensing
 For example, EZ Storage, Inc. within the UK has 10 TB of SVC licensing at physical site A (London), 8 TB at site B (Manchester), and 2 TB at site C (Southampton). In Ireland it has 10 TB at site D (Dublin) and 6 TB at site E (Cork), all under current software maintenance. At the time of renewal, the customer:

1. Contacts the IBM sales representative stating the desire to consolidate SVC licenses
2. (Likely) has the sales rep migrate any entitlements in AAS over into Passport Advantage to allow the consolidation (this migration process is defined in the SVC Ordering presentation in the Sales Kit)
3. Renews software maintenance for 20 TB in the UK (12 TB Tier 1; 8 TB Tier 2) and 16 TB in Ireland (12 TB Tier 1; 4 TB Tier 2)

[Diagram: map of the UK and Ireland marking sites A–E.]

SVC 6.3 Enhancements


 Greater flexibility for remote mirror
► Replicate between SVC and Storwize V7000
► Balance remote data currency against network bandwidth cost to better meet application requirements

 Miscellaneous enhancements
► LDAP support
► "Round-robin" port selection for virtualized disk

 Enhanced stretched cluster configurations
► Up to 300km between data centers
► 3x the EMC VPLEX synchronous distance

 Interoperability enhancements


Extended Distance Stretched Cluster

What this is
 Enables enterprises to access and share a consistent view of data simultaneously across data centers, and to relocate data across disk array vendors and tiers, both inside and between data centers at full metro distances
 The supported distance will depend on application latency restrictions
► 100km for live data mobility (150km with distance extenders)
► 300km for fail-over / recovery scenarios
► SVC supports up to 80ms latency, far greater than most application workloads would tolerate

Why it matters
 Application clusters and virtual server mobility tools that were designed for use within a single data center can now be deployed across multiple data center locations at metro distances
 SVC will surpass competitive offerings in terms of supported distances, while also offering full DR support for complete disaster preparedness and recovery capabilities
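The distance figures are ultimately a propagation-delay budget. A rough calculation (a sketch assuming ~200 km per millisecond one-way in optical fibre, and ignoring switch and extender delays):

```python
KM_PER_MS = 200  # approximate one-way speed of light in fibre

for km in (100, 150, 300):
    print(f"{km} km -> ~{2 * km / KM_PER_MS:.1f} ms round trip")
# 100 km -> ~1.0 ms, 150 km -> ~1.5 ms, 300 km -> ~3.0 ms: all far below the
# 80 ms SVC tolerates, so the binding constraint is the application, not SVC.
```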

Lower Bandwidth Remote Mirroring


[Diagram: at the primary SAN Volume Controller, host I/O lands on the Remote Copy primary volume, which is protected by a space-efficient FlashCopy; a background copy replicates to the secondary SAN Volume Controller, whose own space-efficient FlashCopy guarantees a consistent copy while requiring less link bandwidth.]

What this is
 SVC will provide a low-bandwidth-tolerant Remote Copy function by offloading mirroring functions to a FlashCopy of the production data
 A background copy of the volume is performed across the infrastructure, enabling a higher RPO and using significantly less bandwidth
 The FlashCopy of the production volume can be space-efficient so as to conserve storage capacity

Why it matters
 Bursts of host workload are smoothed over time, so much lower link bandwidths can be used
 In the future, acceptance of higher latency on the link can lead to support for distances greater than 8,000km
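A minimal sketch of the cycling idea described above (grain-level dicts stand in for volumes; the naming and cycle trigger are illustrative): host writes land on the primary at full speed, and each cycle ships only the grains that changed since the last space-efficient FlashCopy, so the link is sized for the average change rate rather than the write burst, and the RPO is bounded by the cycle period.

```python
def replication_cycle(primary, change_vol, secondary, dirty_grains):
    """One low-bandwidth mirroring cycle."""
    # Space-efficient FlashCopy: freeze only the grains changed this cycle.
    change_vol.update({g: primary[g] for g in dirty_grains})
    dirty_grains.clear()
    # Background copy: only the frozen grains cross the link.
    secondary.update(change_vol)
    change_vol.clear()

primary   = {0: "A", 1: "B", 2: "C"}
secondary = {0: "A", 1: "old", 2: "C"}
replication_cycle(primary, {}, secondary, dirty_grains={1})
print(secondary)   # {0: 'A', 1: 'B', 2: 'C'} -- one grain crossed the link
```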


SVC 6.3 Supported Environments


 Host environments (up to 1024 hosts, including IBM BladeCenter): Microsoft Windows and Hyper-V; Novell NetWare and OES2; VMware ESX 3.5, vSphere 4 and 5; IBM AIX and Power7 (VIOS); IBM i 6.1; IBM z/VSE, z/VM, z/Linux; Linux (Intel/Power/z) with RHEL 4/5/6 and SUSE 9/10/11; Citrix XenServer; Sun Solaris; HP-UX 11i, Tru64, OpenVMS; Apple Mac OS; SGI IRIX; IBM TS7650G ProtecTIER Gateway
 Connectivity: iSCSI over 1Gb or 10Gb Ethernet; 8Gbps SAN fabric
 Point-in-time copy: full volume, copy-on-write, 256 targets; incremental, cascaded, reverse, space-efficient; FlashCopy Mgr
 Continuous copy: Metro/Global Mirror, Multiple Cluster Mirror
 Thin-provisioned volumes, Easy Tier, internal SSD, virtual disk mirroring
 Supported storage systems: IBM DS3000, DS4000, DS5000, DS6000, DS8000, DCS9XXX, XIV Gen 3, N series, Storwize V7000; Compellent 5.2.3; Hitachi Lightning, Thunder, TagmaStore, AMS, WMS, USP, USP-V, VSP; HP MA, EMA, MSA, EVA, XP, P9500, 3Par; EMC CLARiiON CX4, Symmetrix, DMX, VMAX, VNX, VNXe; Sun StorageTek, StorEdge; NetApp FAS; NEC iStorage; Fujitsu Eternus DX80 S2, DX90 S2, DX410 S2, DX440 S2; TMS RamSAN 440; XioTech Emprise; Bull Storeway; Violin 1500, 2000, 3000, 3140, 3200; NexSAN SATABeast

* Confirm all supported configurations at ibm.com/storage/support/2145 and click on "Support"

SVC 6.4 Enhancements

 Compression based on the RACE algorithm
► Block-based compression
► Real-time compression
► Saves up to 80% capacity in some environments

 Fibre Channel over Ethernet (FCoE)
► Enables use of converged data centre infrastructure
► Based on the existing 10Gb/s iSCSI hardware of the 2145-CG8 nodes

 Non-Disruptive Volume Move across I/O groups (NDVM)
► Based on host multipathing concepts

 Miscellaneous changes on SVC 6.4.0


Compression

► Compression is implemented as a third type of volume
► Supports all Thin-Provisioning features (such as autoexpand…)
► Max. 200 volume copies per I/O group
► The GUI displays compression savings on a volume, pool, and system basis


Compression

► The GUI supports a separate CPU resource series on the performance page
► Performance of a compressed volume is roughly equivalent to that of a thin-provisioned volume


Compression

► Which workloads to avoid?
• Pre-compressed data (such as videos, images, audio…)
• Encrypted data
• Heavily sequential-write-oriented workloads
• Workloads with a compression rate <25% (see the sketch below)

► Compression will generate metadata
► There is no conversion feature; however, moving from an uncompressed volume copy to a compressed one can be done through VDisk Mirroring
► To guarantee performance, compression receives 2GB of memory from the SVC code when compression is enabled
► The code stack supports two CPU architectures: 4-core (V7000/CF8) and 6-core (CG8)
• On 4-core CPUs, SVC will run 1 fast path core and 3 RTC cores
• On 6-core CPUs, SVC will run 2 fast path cores and 4 RTC cores

Extent Size (MB) | Max Real Volume Capacity (GB) | Max TP Volume Capacity (GB) | Max Compressed Volume Capacity (GB)
16               | 2,048                         | 2,000                       | 1,500
32               | 4,096                         | 4,000                       | 3,100
64               | 8,192                         | 8,000                       | 6,200
128              | 16,384                        | 16,000                      | 12,400
256              | 32,768                        | 32,000                      | 24,800
512              | 65,536                        | 65,000                      | 49,600
1024             | 131,072                       | 130,000                     | 99,200
2048             | 262,144                       | 260,000                     | 198,500
4096             | 524,288                       | 520,000                     | 397,100
8192             | 1,048,576                     | 1,040,000                   | 794,300
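The "<25%" guidance can be turned into a simple admission test. A sketch using zlib as a stand-in for the RACE engine (the threshold and sampling approach are illustrative):

```python
import os
import zlib

MIN_SAVINGS = 0.25   # slide guidance: skip workloads saving less than ~25%

def worth_compressing(sample: bytes) -> bool:
    """Estimate savings on a data sample; reject poorly compressing workloads."""
    savings = 1 - len(zlib.compress(sample)) / len(sample)
    return savings >= MIN_SAVINGS

print(worth_compressing(b"status=OK user=42 result=ok\n" * 1000))  # text: True
print(worth_compressing(os.urandom(32_768)))   # random/encrypted-like: False
```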


FCoE

[Diagram: host servers with 10 Gigabit iSCSI HBAs, Fibre-attached hosts, and (new) FCoE-attached hosts all reach the Storwize V7000; behind it, SAN storage zones connect optional external Fibre-attached storage, and (new) FCoE storage zones connect optional external FCoE-attached storage.]

FCoE

► SVC software provides both FCoE target and initiator features
► Intra-cluster communication can use FCoE (new)
► MM or GM via FCoE requires a Fibre Channel Forwarder and Inter-Switch Links; traffic can go through FCIP/DWDM
► Remote partnership can use FCoE (new)
► FCoE support is also extended to split I/O group architectures (new)

* VLAN tagging is not supported – the FCF and the 10Gb/s ports must be on the same VLAN

Non-Disruptive Volume Move across I/O groups


How it works
 (1) The volume starts in a single I/O group, which is the caching I/O group, and all I/O must be sent to nodes in that I/O group. A second I/O group is then added to the same volume
► Any host I/O which is sent to the second I/O group will be forwarded back to the cache in the first I/O group

 (2) The "caching" I/O group is switched to the second I/O group
► Using a new command called movevdisk
► Host I/O to the original I/O group is now forwarded to the second I/O group

 (3) The host multipathing drivers should now be reconfigured to discover the additional paths to the volume
► Some zoning changes may also be required

 Once the multipathing drivers have discovered the new paths, the original I/O group can be removed, and the multipathing drivers can be reconfigured a second time to remove the now-dead paths (see the sketch below)

[Diagram: a host with multipath I/O to volumes on I/O group A (two SVC nodes); step 1 multipath I/O to the volumes, step 2 volume move, step 3 active paths to the relocated volumes on I/O group B.]
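A sketch of the three-step flow above in Python. `movevdisk` is the command the slide names; the `Volume`/`Host` objects and `rescan_paths` are illustrative stand-ins for the real access-group, zoning, and multipath operations.

```python
class Host:
    def rescan_paths(self):
        print("host: multipath rescan")

class Volume:
    def __init__(self, iogrp):
        self.caching_iogrp = iogrp
        self.iogrps = {iogrp}           # I/O groups the volume is visible in

def non_disruptive_move(vol, src, dst, host):
    vol.iogrps.add(dst)                 # (1) add second I/O group; its I/O is
                                        #     forwarded to the old cache for now
    vol.caching_iogrp = dst             # (2) "movevdisk": switch caching I/O group
    host.rescan_paths()                 # (3) discover paths to the new group,
    vol.iogrps.discard(src)             #     then drop the original group...
    host.rescan_paths()                 #     ...and remove the now-dead paths

v = Volume("io_grp0")
non_disruptive_move(v, "io_grp0", "io_grp1", Host())
print(v.caching_iogrp, v.iogrps)        # io_grp1 {'io_grp1'}
```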


Non-Disruptive Volume Move across I/O groups (continued)

[Two slides of GUI screenshots for Non-Disruptive Volume Move.]

Miscellaneous changes on SVC 6.4.0

 All node errors will now appear in the cluster event log
 New quorum scanning design to try to recover from corrupt quorum data caused by drive faults (see the sketch below):
► Quorum will regularly be read and validated
► Invalid quorum will ideally be moved to a new device
► If no new device is available, the quorum will be re-written

 Package size increases to about 500MB from about 340MB
 TPC stats collection for array-managed disks shows a response time in 6.3.0.2 and later
 6.4.0 will allow support of directly-attached Fibre Channel hosts via RPQ only
► Full support to be added in a later release
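The quorum-scanning bullets above amount to a small recovery loop. A sketch (validation is simplified to a payload compare, and device selection is a plain list):

```python
GOOD = b"quorum-metadata"

class QuorumDevice:
    def __init__(self, name, corrupt=False):
        self.name = name
        self._data = b"garbage" if corrupt else GOOD
    def read(self):
        return self._data
    def write(self, data):
        self._data = data

def scan_quorum(active, spare_devices):
    """Regularly read and validate the quorum; relocate or re-write if corrupt."""
    if active.read() == GOOD:
        return active                      # healthy: nothing to do
    if spare_devices:
        new = spare_devices.pop()          # preferred: move quorum to a new device
        new.write(GOOD)
        return new
    active.write(GOOD)                     # fallback: re-write in place
    return active

q = scan_quorum(QuorumDevice("drive0", corrupt=True), [QuorumDevice("drive1")])
print("quorum now on", q.name)             # -> quorum now on drive1
```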
