
Virtualization concepts

© Copyright IBM Corporation 2011


Course materials may not be reproduced in whole or in part without the prior written permission of IBM.
Unit objectives
After completing this unit, you should be able to:
• Explain the definition of virtualization
• Explain storage system virtualization
• Understand the abstraction layers for disk virtualization
• Explain the benefits of virtualization



Topic 1: Introduction

DS8000 virtualization definition
• DS8000 physical configuration
– Disks
– Array sites
– Array
– Rank
– Extent pool
– Logical volume
– Volume group
– Host attachment

• DS8000 logical configuration


– Logical partition
– Address group
– Logical subsystem (LSS)
– Logical device
DS8000 storage hierarchy (1 of 2)
• Disks

• Array site
– A logical grouping of eight disks
• Same speed and capacity

• Array (one per array site)
– Is used to build a RAID array (RAID 5, RAID 6, or RAID 10)

• Rank (one per array)
– Is divided into N fixed-size extents
• CKD (3390 Mod 1) or FB (1 GB)

• Extent pool (one or more ranks)
– All extents are of the same type
• CKD or FB
– Same RAID type recommended
– Even pools have affinity with server 0; odd pools have affinity with server 1
DS8000 storage hierarchy (2 of 2)
• Volumes or LUNs

– Made up of extents from one extent pool
• Minimum size: 0.1 GB or 1 cylinder
• Maximum size: 2 TB (FB) or 223 GB (CKD)
• Volume group
– Contains LUNs (and host attachments)
• FB LUN masking
– One volume can be member of multiple
volume groups

• Host
– Multiple server ports can be specified
• In one or more groups
– Can be member of only one volume group
Topic 2: Physical configuration concepts –
Disks

DDM types
DDM class                 Solid state disk   Enterprise disk    SATA disk
Disk installation group   8 or 16            16                 16
RAID types                5                  5, 6, 10           6, 10
Size/RPM                  73, 146, 300,      146 GB/15K(W)      1 TB/7.2K(W)
                          450, 600 GB        300 GB/15K         2 TB/7.2K
                                             450 GB/10K/15K
                                             600 GB/10K/15K
Encryption-capable        N.A.               146 GB/15K(W)      N.A.
size/RPM                                     300 GB/15K
                                             450 GB/10K/15K
                                             600 GB/10K/15K
DDM intermix
• DS8000
– DDM install groups are installed in an array-across-loops configuration on a DA pair.
• Can intermix DDM class, capacity, and rpm on a DA pair
• Ordering system does not support control of placement of DDM install group other than
by rack
– Cannot target to a specific DA pair
– Plug order within rack is determined by highest rpm first, highest capacity second.

• Nearline storage
– Is a term used in computer science to describe an intermediate type of data
storage.
– It is a compromise between:
• ONLINE STORAGE
– Constant, very rapid access to data
• OFFLINE STORAGE
– Infrequent access for backup purposes or long-term storage



Topic 3: Physical configuration concepts –
Array sites

Array sites
• DS8000 physical configuration
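For reference, array sites can be listed with the DSCLI lsarraysite command. A minimal sketch; the storage image ID IBM.2107-75XXXXX is a hypothetical placeholder:

dscli> lsarraysite -dev IBM.2107-75XXXXX

The output lists each array site with its DA pair, DDM capacity, state, and the array (if any) built on it.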



Array across loops
• Array site 0 (the green disks) uses the four left-hand DDMs in each enclosure.

• Array site 1 (the yellow disks) uses the four right-hand DDMs in each enclosure.

• When an array is created on each array site, half of the array is placed on each loop.

• If the disk enclosures were fully populated with 16 DDMs, there would be four array sites.


The lsddm command
dscli> lsddm IBM.2107-75Y9111
Date/Time: March 24, 2011 6:46:27 PM CET IBM DSCLI Version: 6.5.15.72 DS: IBM.2107-75Y9111
ID                           DA Pair dkcap (10^9B) dkuse          arsite State
===============================================================================
IBM.2107-D01-0243T/R1-P1-D1  0       300.0         unconfigured   S1     Normal
IBM.2107-D01-0243T/R1-P1-D2  0       300.0         unconfigured   S3     Normal
IBM.2107-D01-0243T/R1-P1-D3  0       300.0         unconfigured   S4     Normal
IBM.2107-D01-0243T/R1-P1-D4  0       300.0         unconfigured   S1     Normal
IBM.2107-D01-0243T/R1-P1-D5  0       300.0         unconfigured   S2     Normal
IBM.2107-D01-0243T/R1-P1-D6  0       300.0         unconfigured   S2     Normal
IBM.2107-D01-0243T/R1-P1-D7  0       300.0         unconfigured   S4     Normal
IBM.2107-D01-0243T/R1-P1-D8  0       300.0         unconfigured   S3     Normal
IBM.2107-D01-0243T/R1-P1-D9  0       300.0         unconfigured   S2     Normal
IBM.2107-D01-0243T/R1-P1-D10 0       300.0         unconfigured   S4     Normal
IBM.2107-D01-0243T/R1-P1-D11 0       300.0         spare required S2     Normal
IBM.2107-D01-0243T/R1-P1-D12 0       300.0         unconfigured   S1     Normal
IBM.2107-D01-0243T/R1-P1-D13 0       300.0         spare required S4     Normal
IBM.2107-D01-0243T/R1-P1-D14 0       300.0         unconfigured   S1     Normal
IBM.2107-D01-0243T/R1-P1-D15 0       300.0         unconfigured   S3     Normal
IBM.2107-D01-0243T/R1-P1-D16 0       300.0         unconfigured   S3     Normal
IBM.2107-D01-0251Y/R1-P1-D1  0       300.0         spare required S1     Normal
IBM.2107-D01-0251Y/R1-P1-D2  0       300.0         unconfigured   S3     Normal
IBM.2107-D01-0251Y/R1-P1-D3  0       300.0         unconfigured   S2     Normal
IBM.2107-D01-0251Y/R1-P1-D4  0       300.0         unconfigured   S1     Normal
........
dscli>



Topic 4: Physical configuration concepts –
Array

Arrays
• An array is created from one array site.
– Forming an array means defining it as a specific RAID type.
• In the DS8000 implementation:
– One array is defined using one array site.
– According to the sparing algorithm:
• From zero to two spares can be taken from the array site.
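For illustration, an array is formed from an array site with the DSCLI mkarray command. A minimal sketch, assuming array site S1 is unassigned and RAID 5 is wanted (the site ID is hypothetical):

dscli> mkarray -raidtype 5 -arsite S1

Whether the resulting array is 6+P+S or 7+P is decided by the sparing algorithm, not by the user.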



Array configuration

[Figure: RAID arrays formed from one array site]
• RAID 5: 6+P+S or 7+P
• RAID 6: 5+P+Q+S or 6+P+Q
• RAID 10: 3x2+2S or 4x2


RAID implementation
• RAID 5
– A method of spreading volume data across multiple disk drives
– Increases performance by supporting concurrent accesses to the multiple DDMs
• Within each logical volume
– Data protection is provided by parity,
• Which is stored throughout the drives in the array
– Not compatible with the use of 1 TB FATA disk drives

• RAID 6
– A method of increasing data protection of arrays
• With volume data spread across multiple disk drives
• Increases data protection by adding an extra layer of parity over RAID 5
– Can restore data from an array with up to 2 failed drives

• RAID 10
– Provides high availability by combining RAID 0 and RAID 1
• RAID 0 increases performance by striping volume data across multiple disk drives
• RAID 1 provides disk mirroring which duplicates data between two disk drives
DS8000 sparing rules
• Array sites have attributes of:
– CAPACITY
– RPM
– DDM class

• A minimum of one spare is required for each array site.


– Defined until the following conditions are met:
• Spare is a Compatibility Spare for a given DDM type if it has >= capacity, >= rpm
• Spare is an Availability Spare for a given DDM type if it has >= capacity
• There are up to two compatibility and two availability spares per DDM type per DA pair per class
• Spares allocated no quicker than one per eight DDMs of a given DDM type
• Spares are balanced between the two device interfaces
• Spare preference order: Class, rpm, capacity, and loop

• All spares are available to all array sites on that DA pair.


– Maintenance requested when 0 compatibility spares or 1 availability spare left.

Note: Order of installation will influence the number of spares.



DS8000 sparing rules: Example one
• RAID 5 with all same capacity and same RPM

• Minimum of four spares per DA pair


– Two spares per loop and two spares in each array group
– Additional RAID 5 arrays will be 7 + P
– Any added RAID 6 arrays will be 6 + P + Q
– Any added RAID 10 arrays will be 4 x 2

• All spares available to all arrays on DA pair

[Figure: four 6+P+S arrays (1-4) on a DA pair, spread across both loops, with two spares per loop]


DS8000 sparing rules: Example two
• RAID 10 with all same capacity and same RPM

• Minimum of four spares per DA pair


– Two spares per loop and two spares in each array group
– Additional RAID 10 arrays will be 4 x 2
– Any added RAID 5 arrays will be 7 + P
– Any added RAID 6 arrays will be 6 + P + Q

• All spares available to all arrays on DA pair

[Figure: four RAID 10 arrays (1-4) on a DA pair; the first and last are 3x2+2S, the middle two are 4x2]


DS8000 sparing rules: Example three
• RAID 5 with first four arrays 146 GB and next two arrays 300 GB

• Minimum of four spares per DA pair


– Two spares per loop and two spares in each array group
– Additional 146 GB RAID arrays will be 7 + P (RAID 5)

• Minimum of four spares of the largest capacity array site on a DA pair


– Next two 300 GB arrays will also be 6 + P + S (if RAID 5)

• All spares available to all arrays on DA pair

[Figure: six 6+P+S arrays on a DA pair — arrays 1-4 on 146 GB DDMs, arrays 5-6 on 300 GB DDMs, with spares on both loops]


Floating spare rules
• The DS6000 and DS8000 implement a smart floating technique for
spare DDMs.
– When a spare floats:
• When a DDM fails, the data it contained is rebuilt onto a spare.
• When the failed disk is replaced, the data is not copied back to the original position.
• The replacement disk becomes the new spare.

• The DS microcode may choose to allow the hot spare:


– To remain where it has been moved.
– It may instead choose to move the spare to a more optimum position.
• This will be done to better balance the spares across the DA pairs and enclosures.
– It may be preferable that a DDM currently in use as an array member be converted to a spare.



Topic 5: Physical configuration concepts –
Rank

Ranks
• When a new rank is defined:
– Its name is chosen by the DS Storage Manager
(R1, R2, and so on).
– In the current DS8000 implementation, a rank is
built using just one array.
– The available space on each rank will be divided
into extents.

• The process of forming a rank does two things:
– The array is formatted for either fixed block (FB)
data open systems or count key data (CKD)
System z data.
• This determines the size of the set of data contained
on one disk within a stripe on the array.
– The capacity of the array is subdivided into equal-
sized partitions, called extents.
• The extent size depends on the extent type, FB or
CKD.
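A hedged DSCLI sketch of forming a rank; it assumes array A0 exists and is to be formatted for fixed block data (the array ID is hypothetical):

dscli> mkrank -array A0 -stgtype fb

The new rank is named by the system (R1, R2, and so on), and its capacity is divided into extents of the chosen storage type.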



Rank configuration
[Figure: rank configuration]
• Array site Sx → array Ax (RAID type) → rank Rx: FB type (open systems), extent size of 1 GB
• Array site Sx → array Ax (RAID type) → rank Rx: CKD type (System z), extent size of 0.94 GB
• The rank name is chosen by the DS Storage Manager.


Topic 6: Physical configuration concepts –
Extent pool

Extent pool
• A logical construction to aggregate the extents from a set of ranks:
– To form a domain for extent allocation to a logical volume
– Sets of ranks have the same RAID type and the same disk RPM characteristics:
• So that the extents in the extent pool have homogeneous characteristics

• There is no predefined affinity of ranks or arrays to a storage server.


– Affinity of the rank to a server is determined when it is assigned to an Extent Pool.
• Ranks are organized in two RANK GROUPS.
– Rank group 0 is controlled by server 0.
– Rank group 1 is controlled by server 1.

• One or more ranks can be assigned to an extent pool.


– All ranks in an Extent Pool use the same extent type (FB or CKD).
– One rank can be assigned to only one Extent Pool.
– There can be as many extent pools as there are ranks.

• The minimum number of extent pools is two:
– With one assigned to server 0 and the other to server 1
• So that both servers are active
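A minimal DSCLI sketch of creating the two minimum extent pools and assigning one rank to each; the pool names and rank IDs are hypothetical, and it assumes the system assigned the IDs P0 and P1 in creation order:

dscli> mkextpool -rankgrp 0 -stgtype fb FB_pool_0
dscli> mkextpool -rankgrp 1 -stgtype fb FB_pool_1
dscli> chrank -extpool P0 R0
dscli> chrank -extpool P1 R1

Assigning rank R0 to pool P0 (rank group 0) gives it affinity to server 0; rank R1 in pool P1 (rank group 1) gets affinity to server 1.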



Extent pool: Example in a mixed environment



Extent pool configuration
[Figure: extent pool configuration]
• Array site Sx → array Ax → FB rank Rx → extent pool P0 (server 0)
• Array site Sy → array Ay → FB rank Ry → extent pool P1 (server 1)
• Array site Sz → array Az → CKD rank Rz → extent pool P2 (server 0)
• Alternatively, ranks Rx and Ry can be placed together in one extent pool, P1 (server 1).


Topic 7: Physical configuration concepts –
Logical volume

Logical volume: Introduction
• A logical volume is composed of a set of extents from an extent pool.
– Up to 65280 volumes can be created (CKD, FB, or a mixture of both types).

• Fixed block LUNs:


– A logical volume composed of fixed block extents is called a LUN.
• A fixed block LUN is composed of one or more 1 GiB (2^30 B) extents from an FB extent pool.
– A LUN cannot span multiple extent pools, but can have extents from different ranks within the same extent pool.
– You can construct LUNs up to a size of 2 TiB (2^40 B).

• CKD volumes:
– A System z volume is composed of one or more extents from one extent pool.
• CKD extents are of the size of 3390 model 1, which has 1113 cylinders.
• You define the number of cylinders you want for the System z CKD volume.
– The maximum size for a CKD volume was:
• 65,520 cylinders, which is about 56 GB (prior to LMC 5.4.xx.xx)
• 262,668 cylinders, which is about 223 GB (LMC 5.4.xx.xx)
– The new volume capacity is called extended address volume (EAV), device 3390 Model A.
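Hedged DSCLI sketches of creating logical volumes; the pool IDs, volume IDs, and names are hypothetical:

dscli> mkfbvol -extpool P0 -cap 10 -name open_#h 1000-1003
dscli> mkckdvol -extpool P2 -cap 3339 -name ckd_#h 0200

The first command creates four 10 GB FB LUNs (1000 to 1003) from extent pool P0; the second creates one CKD volume of 3,339 cylinders (three 3390 Model 1 extents) from pool P2.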
Rotated volume allocation method
• Extents can be allocated sequentially:
– In this case all extents are taken from the same rank.
• Until you have enough extents for the requested volume size or the rank is full
– In which case the allocation continues with the next rank in the extent pool
– If more than one volume is created in one operation:
• The allocation for each volume starts in another rank
• When allocating several volumes, you ROTATE through the ranks
– Use this allocation method when you prefer to manage performance manually.

[Figure: rotated volume allocation — volumes 1-4 are each allocated on a different rank (A-D) of the same extent pool]
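For illustration, the allocation method can be requested when the volume is created. A sketch assuming a DSCLI level that supports the -eam (extent allocation method) parameter; pool and volume IDs are hypothetical:

dscli> mkfbvol -extpool P0 -cap 100 -eam rotatevols -name seq_#h 1100

With -eam rotatevols, each volume's extents are taken from a single rank before the allocation moves on, matching the rotated volume allocation method shown above.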



Storage pool striping: Extent rotation
• The preferred storage allocation method:
– Storage pool striping is an option introduced with LMC 5.3.xx.xx.
• The extents of a volume can be striped across several ranks:
– The first extent is taken from one rank, the next extent from the next rank in the extent pool, and so on.
– If more than one volume is created in one operation, the allocation for each volume starts on another rank.
– Use this allocation method to spread the workload of each volume evenly across the ranks of the pool.
[Figure: storage pool striping — the extents of volumes 1-4 are rotated across ranks A-D of the same extent pool]
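The corresponding sketch for storage pool striping, again assuming a DSCLI level with the -eam parameter (IDs hypothetical):

dscli> mkfbvol -extpool P0 -cap 100 -eam rotateexts -name striped_#h 1200

With -eam rotateexts, successive extents of the volume are placed on successive ranks of the extent pool.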



Topic 8: Logical configuration concepts –
Logical subsystem

Logical subsystem
• It is another logical construction.
– It groups logical volumes, LUNs, in groups of up to 256 logical volumes

• There is no fixed binding between any rank and any logical subsystem.
– The predetermined association between array and LSS is gone on the DS8000
• You can define up to 255 LSSs for the DS8000
• You can even have more LSSs than arrays

• For each LUN or CKD volume, you can choose an LSS.


– You can put up to 256 volumes into one LSS.
– There is one restriction, extent pools belong to one server (server 0 or server 1).
• LSS also have an affinity to the servers:
– All even-numbered LSSs (x'00', x'02', x'04', up to x'FE') belong to server 0.
– All odd-numbered LSSs (x'01', x'03', x'05', up to x'FD') belong to server 1.
• LSS x’FF’ is reserved.



LSS: For open systems
• LSSs do not play an important role:
– Except in determining by which server the LUN is managed
– And in certain aspects related to remote copy implementations
• The option to put all or most of the volumes of a certain application in just one LSS makes the management of remote copy operations easier.



LSS: Address groups
• LSSs within one address group have to be of the same type (FB or CKD).
– The first LSS defined in an address group fixes the type of that address group.
– LSSs are grouped into address groups of 16 LSSs.
• x’ab’, where ‘a’ is the address group and ‘b’ denotes an LSS within the address group
• Address Groups are automatically:
• Created when the first LSS associated with the address group is created
• Deleted when the last LSS associated in the address group is deleted



LSS: LUN identification
• The LUN identification x'gabb' is composed of:
– The address group x'g'
• FB or CKD, with a maximum of 16 address groups (0 to F)
– The LSS number within the address group x'a'
• With a maximum of 16 per address group (0 to F)
• Up to 255 LSSs in total
– The position of the LUN within the LSS x'bb'
• With a maximum of 256 LUNs per LSS (00 to FF)

• Example:
– LUN x'2101' denotes the second (x'01') LUN in LSS x'21', that is, LSS 1 of address group 2.
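Note that the volume ID chosen at creation time implicitly selects the LSS and the address group. A hedged example (pool and volume IDs hypothetical):

dscli> mkfbvol -extpool P1 -cap 10 -name ag2_#h 2101

Volume 2101 lands in LSS x'21' (LSS 1 of address group 2); because x'21' is odd-numbered, it is created from an extent pool with affinity to server 1, here P1.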



LSS: Logical volume addressing
Address group   Logical volume numbers   Logical subsystem numbers   Logical device numbers
X'0'            X'0000' - X'0FFF'        X'00' - X'0F'               X'00' - X'FF'   (ESCON)
X'1'            X'1000' - X'1FFF'        X'10' - X'1F'               X'00' - X'FF'
X'2'            X'2000' - X'2FFF'        X'20' - X'2F'               X'00' - X'FF'
X'3'            X'3000' - X'3FFF'        X'30' - X'3F'               X'00' - X'FF'
X'4'            X'4000' - X'4FFF'        X'40' - X'4F'               X'00' - X'FF'
X'5'            X'5000' - X'5FFF'        X'50' - X'5F'               X'00' - X'FF'
X'6'            X'6000' - X'6FFF'        X'60' - X'6F'               X'00' - X'FF'
X'7'            X'7000' - X'7FFF'        X'70' - X'7F'               X'00' - X'FF'
X'8'            X'8000' - X'8FFF'        X'80' - X'8F'               X'00' - X'FF'
X'9'            X'9000' - X'9FFF'        X'90' - X'9F'               X'00' - X'FF'
X'A'            X'A000' - X'AFFF'        X'A0' - X'AF'               X'00' - X'FF'
X'B'            X'B000' - X'BFFF'        X'B0' - X'BF'               X'00' - X'FF'
X'C'            X'C000' - X'CFFF'        X'C0' - X'CF'               X'00' - X'FF'
X'D'            X'D000' - X'DFFF'        X'D0' - X'DF'               X'00' - X'FF'
X'E'            X'E000' - X'EFFF'        X'E0' - X'EF'               X'00' - X'FF'
X'F'            X'F000' - X'FEFF'        X'F0' - X'FE'               X'00' - X'FF'

Note: Logical subsystem X'FF' reserved for internal storage facility use



Topic 9: Physical configuration concepts –
Volume access

Volume access: Introduction
• A DS8000 provides mechanisms to control host access to
LUNs.

• In most cases:
– A server has two or more HBAs
– And the server needs access to a group of LUNs

• For easy management of server access to logical volumes:


– The DS8000 introduced the concept of host attachments and volume
groups



Device configuration overview
[Figure: device configuration overview]
• Devices in an LSS can come from multiple extent pools; an extent pool can contain devices from multiple LSSs.
• The extents of a device are taken from a single extent pool.
• FB and CKD devices use different extent pools and belong to different address groups.


Volume access: Host attachment
• Host bus adapters (HBAs):
– Are identified to the DS8000 in a host attachment construct
• That specifies the HBAs’ World Wide Port Names (WWPNs)
– A set of host ports can be associated through a port group attribute
• That allows a set of HBAs to be managed collectively
• This port group is referred to as a host attachment within the GUI

• Each host attachment can be associated with a volume group:


– To define which LUNs that HBA is allowed to access

• Multiple host attachments can share the same volume group.

• The maximum number of host attachments on a DS8000 is 8192.
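A hedged DSCLI sketch of defining a host attachment; the WWPN, host type, volume group ID, and name are hypothetical:

dscli> mkhostconnect -wwname 10000000C9123456 -hosttype pSeries -volgrp V0 AIX_host1

The host attachment links the HBA's WWPN to volume group V0, which defines the set of LUNs that HBA is allowed to access.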
Volume access: Volume group (1 of 3)
• A named construct that defines a set of logical volumes:
– When used in conjunction with Open Systems hosts:
• A host attachment object that identifies the HBA is linked to a specific volume group.
• You must define the volume group by indicating:
– Which fixed block logical volumes are to be placed in the volume group
• Logical volumes can be added to or removed from any volume group dynamically
– When used in conjunction with CKD hosts:
• There is a default volume group that contains all CKD volumes, and:
– Any CKD host that logs in to a FICON I/O port has access to the volumes in this volume
group.
– CKD logical volumes are automatically added to this volume group when they are created.

• FB logical volumes can be defined in one or more volume groups.


– This allows a LUN to be shared by host HBAs configured to different volume
groups.
– An FB logical volume is automatically removed from all volume groups when it is deleted.

• For the DS8000, the maximum number of volume groups is 8320.
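A hedged DSCLI sketch of creating a volume group and changing its membership dynamically; the type, volume IDs, and names are hypothetical:

dscli> mkvolgrp -type scsimask -volume 1000-1003 AIX_host1_vg
dscli> chvolgrp -action add -volume 1004 V11

The first command creates a SCSI-mask volume group containing LUNs 1000 to 1003; the second adds LUN 1004 to volume group V11 without disrupting access to the existing members.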


Volume access: Volume group (2 of 3)
• FB LUN masking contains one or more LUNs and one or more host
attachments:
– A specific host attachment can be in only one volume group
– Attachments from multiple hosts (even different open systems host types) are
allowed in the same volume group
– Recommended: One host per volume group with shared LUNs in multiple volume
groups
– Specific LUN can be in more than one volume group
• Allows LUN sharing (such as LUN1 in example below)

[Figure: LUN 1 is shared — it is a member of both volume group 1 (System p1) and volume group 2 (System x1)]


Volume access: Volume group (3 of 3)
• Volumes from different LSSs and different extent pools:
– Can be in one volume group (Volume group 1 below)

• Volumes from same LSS and/or same extent pool:


– Can be in different volume groups (7512, 7515)

[Figure: volume groups containing volumes from different LSSs and from both extent pool 4 and extent pool 5]


Host attachments and volume groups (1 of 4)
• Specific host attachment can be in only one volume group

[Figure: a host attachment for Server A is linked to volume group X or to volume group Y, never to both at once]


Host attachments and volume groups (2 of 4)
• Options for shared access:
1. Place hosts in separate volume groups and shared volumes in
multiple volume groups
2. Place shared volumes and multiple hosts in a single volume group

[Figure: option 1 — Server A in volume group X and Server B in volume group Y, with shared volumes 1-3 in both groups; option 2 — both servers' host attachments in a single volume group X containing volumes 1-3]


Host attachments and volume groups (3 of 4)

[Figure: host systems A, B, and C, each with host attachments identified by HBA WWPNs, mapped to volume groups 1, 2, and 3]

It is possible to have several host attachments associated with one volume group, but for ease of management we recommend associating only one host attachment with each volume group.


Host attachments and volume groups (4 of 4)



Topic 10: Physical configuration concepts –
Host attachment

DS8000 dual path host



DS8000 extent pools mapping example
(DS8100 and DS8300)

[Figure: DS8100/DS8300 — two POWER5 2-way SMP servers (server 0 and server 1) connected through RIO-G modules to the HAs and DAs. Ranks 1, 3, 5, 7, 9, 11 form extent pools 0, 2, 4, 6, 8, 10 with affinity to server 0; ranks 2, 4, 6, 8, 10, 12 form extent pools 1, 3, 5, 7, 9, 11 with affinity to server 1.]
DS8000 extent pools mapping example
(DS8700 and DS8800)

[Figure: DS8700/DS8800 — the same even/odd extent pool affinity, with the HAs and DAs attached through PCIe I/O modules. Ranks 1, 3, 5, 7, 9, 11 form extent pools 0, 2, 4, 6, 8, 10 on server 0; ranks 2, 4, 6, 8, 10, 12 form extent pools 1, 3, 5, 7, 9, 11 on server 1.]


DS8000 open dual port host attachment
(DS8100 and DS8300)
[Figure: DS8100/DS8300 dual-port host attachment — a host reads LUN1 through two Fibre Channel ports (FC0 and FC1), with I/Os load-balanced across both paths. HAs do not have DS8000 server affinity; DAs have an affinity to server 0 or server 1. LUN1 resides in extent pool 1, controlled by server 0; extent pool 4 is controlled by server 1. The DDMs sit in 16-DDM enclosures behind 20-port FC switches.]


DS8000 open dual port host attachment
(DS8700 and DS8800)
[Figure: DS8700/DS8800 dual-port host attachment — the same load-balanced, dual-path access to LUN1. HAs do not have DS8000 server affinity; DAs do. Extent pool 1 is controlled by server 0 and extent pool 4 by server 1.]


Volumes from the host point of view
C:\Program Files\IBM\SDDDSM> datapath query device

Total Devices: 2

DEV#: 0  DEVICE NAME: Disk2 Part0  TYPE: 2107900  POLICY: OPTIMIZED  SERIAL: 75BV321E121
==========================================================================================
Path#    Adapter/Hard Disk            State   Mode     Select  Errors
0        Scsi Port1 Bus0/Disk2 Part0  OPEN    NORMAL   203     3
1        Scsi Port2 Bus0/Disk2 Part0  OPEN    NORMAL   173     1

DEV#: 1  DEVICE NAME: Disk3 Part0  TYPE: 2107900  POLICY: OPTIMIZED  SERIAL: 75BV321E122
==========================================================================================
Path#    Adapter/Hard Disk            State   Mode     Select  Errors
0        Scsi Port1 Bus0/Disk3 Part0  OPEN    NORMAL   182     0
1        Scsi Port2 Bus0/Disk3 Part0  OPEN    NORMAL   156     0


Topic 11: Virtualization concepts – Logical
partitions (before DS8700)

Logical partitions: LPAR is described in SFI



LPAR resources
• Each DS8000 storage unit has a unique serial number.
– IBM.2107-7512340
• If the storage unit is a 9A2, then its physical resources are divided into
two LPARs.
• Each LPAR gets:
– One of the 2-way p570 processors
– One RIO-G loop
– Two I/O bays in each of the 9A2 and 9AE frames:
• LPAR 1 gets I/O bays 0-1 and 4-5 and disk storage attached to those DAs
• LPAR 2 gets I/O bays 2-3 and 6-7 and disk storage attached to those DAs
• Memory is divided by the S-HMC hypervisor 50-50 to support the
hardware in each LPAR.
• Each LPAR has a pair of internal mirrored disks for code and pinned
data offload.
• Each LPAR is loaded independently with DS storage server code.
• The DS storage server is called a storage facility image.
– Each storage facility image has a unique serial number:
• IBM.2107-7512341: LPAR 1
• IBM.2107-7512342: LPAR 2
Accessing the LPAR
dscli> lssi
Date/Time: June 01, 2009 15:40:43 CEST IBM DSCLI Version: 5.4.1.81
Name                      ID               Storage Unit     Model WWNN
=====================================================================================
Storage Facility Image 41 IBM.2107-7512341 IBM.2107-7512340 9A2   500507633C472F
Storage Facility Image 42 IBM.2107-7512342 IBM.2107-7512340 9A2   500507633C4F2F
dscli>

dscli> lsarray –dev IBM.2107-7512341
Date/Time: June 01, 2009 15:42:32 CEST IBM DSCLI Version: 5.4.1.81 DS: IBM.2107-7512340
Array State    Data   RAIDtype  arsite Rank DA Pair DDMcap (10^9B)
=====================================================================================
A0    Assigned Normal 5 (6+P+S) S1     R0   2       146.0
A1    Assigned Normal 5 (6+P+S) S2     R1   2       146.0
A2    Assigned Normal 5 (6+P+S) S3     R2   2       146.0
A3    Assigned Normal 5 (6+P+S) S4     R3   2       146.0
dscli>

dscli> lsarray –dev IBM.2107-7512342
Date/Time: June 01, 2009 15:42:32 CEST IBM DSCLI Version: 5.4.1.81 DS: IBM.2107-7512340
Array State    Data   RAIDtype    arsite Rank DA Pair DDMcap (10^9B)
=====================================================================================
A0    Assigned Normal 10 (3+3+2S) S1     R0   0       300.0
A1    Assigned Normal 10 (4+4)    S2     R1   0       300.0
A2    Assigned Normal 10 (4+4)    S3     R2   0       300.0
A3    Assigned Normal 10 (4+4)    S4     R3   0       300.0
dscli>
Topic 12: Virtualization concepts –
Summary of the virtualization

Main benefits of the virtualization layers (1 of 2)
• Flexible LSS definition:
– Allows maximization and optimization of the number of devices per LSS

• No strict relationship between RAID ranks and LSSs

• No connection of LSS performance to underlying storage

• Number of LSSs:
– Can be defined based upon device number requirements:
• With larger devices, significantly fewer LSSs might be used.
• Volumes for a particular application can be kept in a single LSS.
• Smaller LSSs can be defined if required (for applications requiring less storage).
• Test systems can have their own LSSs with fewer volumes than production systems.

• Virtualization reduces storage management requirements.



Main benefits of the virtualization layers (2 of 2)
• Increased number of logical volumes
– Up to 65280 (CKD)
– Up to 65280 (FB)
– 65280 total for CKD + FB

• Any mixture of CKD and FB addresses, organized in address groups of 4,096 addresses each

• Increased logical volume size


– CKD: 223 GB (262,668 cylinders), architected for 219 TB
– FB: 2 TB, architected for 1 PB

• Flexible logical volume configuration


– Multiple RAID types (RAID 5, RAID 6, and RAID 10)
– Storage types (CKD and FB) aggregated into extent pools
– Volumes allocated from extents of extent pool
– Storage pool striping
– Dynamically add, expand, remove volumes



Summary of the virtualization hierarchy



Checkpoint
1. The grouping of disks for the DS8000 storage subsystem usage is
called an:
a. Array
b. Disk group
c. Array site
d. Volume group

2. True or False: The DS8000 places each array into a rank to define the
format of the array.

3. True or False: The extent pool defines the RAID type which is used in
the array.

4. True or False: The minimum number of extent pools to define in the DS8000 or DS6000 is two, or one for each server.



Checkpoint solutions
1. The grouping of disks for the DS8000 storage subsystem usage is called an:
a. Array
b. Disk group
c. Array site
d. Volume group
The answer is array site. The disk groups are called array sites in the DS8000.
2. True or False: The DS8000 places each array into a rank to define the format of
the array.
The answer is true. The rank defines the disk format of CKD or FB.

3. True or False: The extent pool defines the RAID type which is used in the array.
The answer is false. The extent pool defines the server affinity and pools the ranks
into groups for logical volume definition.
4. True or False: The minimum number of extent pools to define in the DS8000 or
DS6000 is two, or one for each server.
The answer is true. Two extent pools is the minimum number to define to utilize
each of the p5 servers in the DS8000.



Unit summary
Having completed this unit, you should be able to:
• Explain the definition of virtualization
• Explain storage system virtualization
• Understand the abstraction layers for disk virtualization
• Explain the benefits of virtualization

